00:00:10 -- The interpretation provided for this presentation is live and unrehearsed.
00:00:15 -- Interpreters assigned may or may not have had materials in advance for preparation.
00:00:21 -- Inaccuracies related to the content of the material may be due
00:00:25 -- to imperfections in the interpreting process.
00:00:29 -- This interpretation has not been reviewed by the presenter.
00:00:33 -- [ Pause ]
00:00:53 -- Good afternoon everyone.
00:00:55 -- I'm thrilled that you're all able to come to our next lecture in the VL2 Lecture Series.
00:01:06 -- Today we have a special treat.
00:01:07 -- We have invited two members of our team from Georgetown University and you may
00:01:16 -- or may not know that Georgetown is one of our main partnering institutions as part
00:01:22 -- of the VL2 Center and one of our Co-science directors is Dr. Guinevere Eden
00:01:29 -- who is seated right here on the second row, if you could stand and be recognized.
00:01:32 -- Okay.
00:01:33 -- [ Pause ]
00:01:40 -- And among a number of things that she does for the center, one of the things that she does
00:01:46 -- and how we found her is that she has been involved with working with Dr. Carol LaSasso
00:01:52 -- for a number of years on studies of learning among deaf individuals.
00:02:02 -- Dr. Eden had also been working with a Gallaudet graduate by the name
00:02:07 -- of Dr. Daniel Koo who is also here with us.
00:02:11 -- Dr. Koo will be making the presentation today.
00:02:16 -- He attended Gallaudet, where he attained his master's in linguistics.
00:02:22 -- From there he went to the University of Rochester
00:02:25 -- where he attained his PhD in neuro and cognitive sciences.
00:02:36 -- The most recent lecture from Matt Dye also came from the same center.
00:02:42 -- So we are continuing to interact with a number of people that are part of the center.
00:02:49 -- This work is very important to VL2.
00:02:52 -- This is about how the brain processes information coming
00:02:56 -- from different sensory inputs, using different languages, and how it gets to the same place
00:03:04 -- in terms of meaning in the brain.
00:03:09 -- This question has posed a puzzle for researchers.
00:03:13 -- So Dr. Koo's work, as well as Dr. Eden's and Dr. LaSasso's, has worked towards understanding
00:03:22 -- this question.
00:03:23 -- That's about all I know about the topic [laughter], so I'm gonna turn this
00:03:25 -- over to Danny who knows a lot more than I do.
00:03:28 -- Danny.
00:03:29 -- [ Pause ]
00:03:34 -- Thank you.
00:03:36 -- I may not know as much as you think I do.
00:03:40 -- Last night I was watching television, Dances with the Stars --
00:03:43 -- Dancing with the Stars, and I was just amazed
00:03:48 -- at how good they are at certain dances like the tango.
00:03:50 -- I'm very glad that you didn't invite Guinevere and myself to do the tango here, however.
00:03:55 -- What's important is our research topic today.
00:03:59 -- Our title --
00:04:03 -- [ Pause ]
00:04:16 -- First I want to talk a little bit about the background of ASL and cued speech
00:04:21 -- and the research that is related to reading among deaf populations.
00:04:25 -- Now you all know of ASL and I hope I don't have to go into very much detail, correct?
00:04:31 -- Yes, good.
00:04:32 -- But I do wanna then also talk about some of the functional neuroanatomy of reading in deaf
00:04:40 -- and hearing people which relates to our research.
00:04:44 -- [ Pause ]
00:04:54 -- Now we know that ASL is an independent language.
00:04:57 -- It has its own distinct phonology and morphology that have no relationship to English.
00:05:06 -- It is a visual-spatial language as well as manual.
00:05:10 -- There is no standard orthography or written system.
00:05:13 -- Written systems do exist, but they're not widely used.
00:05:17 -- Some have argued that fingerspelling might have this relationship to orthography.
00:05:24 -- Fingerspelled letters are what we in linguistics term morphemes.
00:05:31 -- When it comes to the printed letters, there is a one-to-one mapping
00:05:38 -- from fingerspelling to written text, but not in spoken languages.
00:05:42 -- In spoken languages, it's phonemes to graphemes: English has about 44 phonemes and 26 graphemes.
00:05:52 -- People learn this mapping and develop the skill of reading.
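To make that contrast concrete, here is a minimal sketch; the example words and informal phoneme symbols are illustrative, not a complete inventory:

```python
# Sketch: English grapheme-to-phoneme mapping is many-to-many, while
# fingerspelling maps handshapes to printed letters one-to-one.

# One grapheme can stand for several phonemes (informal symbols).
grapheme_to_phonemes = {
    "c": ["k", "s"],              # "cat" vs. "cent"
    "g": ["g", "j"],              # "go" vs. "gem"
    "ough": ["oh", "oo", "uff"],  # "though", "through", "rough"
}

# Fingerspelling: one handshape per printed letter, a strict one-to-one map.
fingerspelling = {letter: f"handshape_{letter}"
                  for letter in "abcdefghijklmnopqrstuvwxyz"}

assert len(set(fingerspelling.values())) == len(fingerspelling)  # one-to-one
print(grapheme_to_phonemes["c"])  # one grapheme, two possible phonemes
```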
00:05:56 -- [ Pause ]
00:06:08 -- I want to talk a little bit about cued speech.
00:06:10 -- It was developed here at Gallaudet College in 1966 by Dr. Cornett.
00:06:14 -- At that time when he first came to Gallaudet, he saw that there were many deaf people
00:06:23 -- who were struggling with the English language.
00:06:27 -- And when looking into the reasons perhaps why,
00:06:29 -- it turns out that most deaf people are born into families of hearing people.
00:06:34 -- And so their access to the English language is very difficult even if using the oral method,
00:06:42 -- and so they miss the critical period of learning language from 0 to 5.
00:06:47 -- So he developed this manual system called cued speech.
00:06:51 -- And it is a manual system that distinguishes the phonemes of the language and then
00:06:56 -- in turn makes it easier for deaf individuals to lip read and understand English.
00:07:01 -- For example the two words mom and Bob look the same on the lips.
00:07:07 -- There's no other visual cue to separate them, and they can be easily misunderstood.
00:07:15 -- But that's an important piece of information because what if somebody says, "Hey,
00:07:19 -- mom had an affair with somebody right before you were born".
00:07:24 -- And you're not sure if they were saying "mom had an affair"
00:07:27 -- or "Bob had an affair" that could really throw you off.
00:07:32 -- That's just a very simple example of how it distinguishes phonemes.
00:07:39 -- And it does this by the use of hand shapes for consonants and location to represent vowels.
00:07:47 -- I can show you an example.
00:07:49 -- [ Pause ]
00:08:03 -- This is the entire cued speech system in total.
00:08:07 -- There are eight hand shapes and four locations.
00:08:10 -- The words bean, as in green bean, and mean look very similar on the lips
00:08:20 -- but the handshapes will distinguish them.
00:08:24 -- Bean -- mean.
00:08:28 -- This is the letter M -- the phoneme M is represented by this hand shape.
00:08:38 -- Now what if you have a word such as Ben?
00:08:43 -- Bean and Ben also look the same on the lips, but their vowels are different, so we use location
00:08:49 -- to show the difference between bean and Ben.
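As a toy data-structure sketch of how handshape carries the consonant and location carries the vowel -- the numeric assignments below are hypothetical, not the real Cued Speech chart:

```python
from typing import NamedTuple

class Cue(NamedTuple):
    handshape: int  # one of 8 handshapes, carrying consonant identity
    location: int   # one of 4 locations, carrying vowel identity

# Hypothetical cues for the initial consonant-vowel of each word.
cues = {
    "bean": Cue(handshape=4, location=2),
    "mean": Cue(handshape=5, location=2),  # same location, different handshape
    "Ben":  Cue(handshape=4, location=1),  # same handshape, different location
}

def disambiguates(w1: str, w2: str) -> str:
    """Report which cue component separates two words that look alike on the lips."""
    a, b = cues[w1], cues[w2]
    if a.handshape != b.handshape:
        return "handshape (consonant)"
    if a.location != b.location:
        return "location (vowel)"
    return "nothing; the words would look identical"

print(disambiguates("bean", "mean"))  # handshape (consonant)
print(disambiguates("bean", "Ben"))   # location (vowel)
```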
00:08:52 -- [ Pause ]
00:08:59 -- It's called cued speech currently.
00:09:03 -- Apparently, that doesn't seem to be the appropriate title
00:09:07 -- because it doesn't involve speech.
00:09:08 -- It is a phonemic system based fully on the visual characteristics.
00:09:15 -- So perhaps instead we should call it cued American English.
00:09:23 -- [ Pause ]
00:09:29 -- There was a study done by Fleetwood and Metzger that has proposed
00:09:34 -- that we not call it cued speech because there actually is no speech involved
00:09:39 -- in the use of the cueing.
00:09:41 -- [ Pause ]
00:09:52 -- Deaf people who have grown up using cued English
00:09:56 -- in the classroom are using a transliterator, a cued language transliterator,
00:10:01 -- but the transliterator isn't using their voice, isn't using speech,
00:10:05 -- so how is it that they're still understanding what's being talked about?
00:10:08 -- And this happens not only in the classroom but for cuers who cue among each other.
00:10:14 -- They're not using their speech.
00:10:16 -- So due to this, they have proposed that since speech isn't involved,
00:10:22 -- we call it cued American English because you're still fully representing a language
00:10:28 -- in a visual way.
00:10:30 -- [ Pause ]
00:10:40 -- If we use the rules of spoken language phonology and apply it in a manual system,
00:10:47 -- you still have manual restrictions as to movement and location and hand shape,
00:10:52 -- but you would still apply the same principles.
00:10:55 -- So children who have been cuing have incorporated those principles
00:11:00 -- along with the manual restrictions.
00:11:07 -- Let me give you an example of what cuing looks like.
00:11:10 -- [ Pause ]
00:11:27 -- There was no speech, there was no sound on this tape, but I understood her fully.
00:11:32 -- She said "Hi, my name is Beth.
00:11:34 -- I will be talking about an experiment that you'll be working
00:11:37 -- on today using cued English all throughout".
00:11:42 -- [ Pause ]
00:11:54 -- Now what is it that we know about reading in deaf populations?
00:11:57 -- There have been several studies on deaf signers, and they have been shown
00:12:02 -- to have a wide range of reading fluency.
00:12:06 -- I want to focus currently on what are considered good readers.
00:12:11 -- Hanson and Fowler did a study in which subjects were asked to participate
00:12:18 -- in a speeded lexical decision task.
00:12:21 -- [ Pause ]
00:12:29 -- A computer was used to present the stimuli, a word at a time, and the subjects were to respond
00:12:37 -- as to whether or not the word was a real word or a pseudoword as quickly as possible.
00:12:44 -- The idea behind this being that hearing people use phonology while reading.
00:12:50 -- So they do this task much more quickly on words
00:12:55 -- that are phonologically similar and orthographically similar
00:13:00 -- than they do when the words are orthographically similar
00:13:09 -- and phonologically dissimilar.
00:13:12 -- There, the response speed is slower.
00:13:16 -- They wanted to see if this happened among the deaf population as well,
00:13:20 -- would they have the same sort of results.
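A schematic of that paradigm, assuming a console stand-in for the testing software; the stimuli and timing method are illustrative, not Hanson and Fowler's materials:

```python
import random
import time

# Schematic speeded lexical decision: present one item at a time,
# record the yes/no answer and the reaction time.
stimuli = [("have", True), ("cave", True), ("mave", False), ("gave", True)]

def run_trials(items):
    results = []
    for word, is_real in random.sample(items, len(items)):
        t0 = time.monotonic()
        answer = input(f"Is '{word}' a real word? (y/n) ").strip().lower() == "y"
        rt = time.monotonic() - t0  # reaction time in seconds
        results.append((word, answer == is_real, rt))
    return results

# The contrast of interest: mean RT for orthographically and phonologically
# similar pairs vs. orthographically similar but phonologically dissimilar pairs.
if __name__ == "__main__":
    print(run_trials(stimuli))
```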
00:13:22 -- [ Pause ]
00:13:27 -- And what they found was that deaf people do have a phonology, and when words are similar,
00:13:35 -- their reactions will be faster than when they're dissimilar.
00:13:40 -- And again, we're talking about deaf readers who are good readers.
00:13:44 -- [ Pause ]
00:13:53 -- Now there have been other studies that have talked
00:13:56 -- about phonological awareness among subjects.
00:13:59 -- There was a study in Belgium and a couple of studies here in the United States,
00:14:04 -- I'm not gonna go into detail about each of them, but I did wanna talk about one in particular.
00:14:10 -- [ Pause ]
00:14:19 -- Shankweiler and Liber asked their subjects to look at two words at the same time
00:14:26 -- and respond whether or not they rhymed.
00:14:34 -- [Pause] R+O+ means phonologically similar, orthographically similar;
00:14:41 -- R+O- is phonologically similar but orthographically dissimilar; R-L+ is phonologically dissimilar
00:14:51 -- but similar in speechreading; and then our last group is dissimilar
00:14:58 -- in phonology and dissimilar on the lips.
00:15:01 -- Now they used the French language and their participants were age 10 to 12 years old.
00:15:10 -- But among these, they found -- oh, let me step back.
00:15:16 -- Cuers H means those people who use cued language at home and at school; Cuers S are those
00:15:26 -- who only used cuing at school but in the home used some other form of communication.
00:15:32 -- Deaf cuers who cued in the home performed no differently than the hearing controls in terms
00:15:37 -- of detecting phonology in the rhyme judgment.
00:15:42 -- Deaf cuers who use the system at school only were significantly different.
00:15:54 -- And whether the words looked similar on the lips did not have an effect.
00:16:01 -- [ Pause ]
00:16:12 -- This leads us to a problem.
00:16:14 -- Some studies have studied deaf signers, some deaf cuers,
00:16:18 -- but these two particular studies were done separately.
00:16:24 -- [ Pause ]
00:16:30 -- So there has been another couple of studies now that have tried to look at both
00:16:35 -- of these paradigms and both of these subject groups.
00:16:40 -- There are three subject groups that were looked at: Hearing controls, deaf cuers,
00:16:49 -- and deaf noncuers, and these were all collegiate students.
00:16:53 -- They were asked to create a list of rhyming words with a particular stimulus word,
00:17:00 -- as many as they could come up with off the top of their heads.
00:17:04 -- Similar to Hanson and McGarr, their belief was that if deaf people respond
00:17:20 -- with orthographically dissimilar words that are phonologically similar,
00:17:24 -- that means they do have a handle on phonology, and they wanted
00:17:28 -- to see how often the responses were orthographically dissimilar
00:17:33 -- but phonologically similar.
00:17:35 -- [ Pause ]
00:17:41 -- Whether they were orthographically similar or dissimilar,
00:17:45 -- these are the results for each of the subject groups.
00:17:50 -- The deaf noncuers were less accurate when it was orthographically dissimilar.
00:17:58 -- They didn't often respond with orthographically dissimilar items.
00:18:05 -- And these were -- here are some examples of the types of responses they may come up with.
00:18:14 -- [ Pause ]
00:18:22 -- When they look at the types of responses each subject group came up with,
00:18:28 -- they found these numbers and looked
00:18:32 -- at whether the responses were orthographically similar or dissimilar.
00:18:37 -- The hearing control group responded with orthographically dissimilar rhymed words such as
00:18:45 -- "blue" and "through", for example, as did the deaf cuers.
00:18:53 -- The deaf noncuers primarily used orthographically similar rhymed words rather
00:19:00 -- than dissimilar.
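A crude scoring sketch for classifying such responses; the shared-ending heuristic is an illustration of the scoring idea, not the study's actual measure:

```python
# Classify a generated rhyme as orthographically similar or dissimilar
# to the stimulus by comparing the final letters.
def orthographically_similar(stimulus: str, response: str, n: int = 2) -> bool:
    return stimulus[-n:] == response[-n:]

print(orthographically_similar("blue", "glue"))     # True: rhyme with similar spelling
print(orthographically_similar("blue", "through"))  # False: rhyme with dissimilar spelling
```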
00:19:05 -- But again this is one way of measuring phonology.
00:19:10 -- There are other ways to measure phonology.
00:19:12 -- Let me share one with you.
00:19:14 -- [ Pause ]
00:19:20 -- We used in our study what we call the phoneme detection test.
00:19:24 -- We simply asked our subjects whether a particular phoneme was represented
00:19:29 -- in the given word.
00:19:31 -- For example, 'kuh' -- is it in the word sent?
00:19:36 -- The response would be "No."
00:19:37 -- Is it in the word cat? The response would be "Yes."
00:19:39 -- This is done on a computer and the subjects are to respond as quickly as possible.
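In code, the decision looks roughly like this; the tiny pronunciation lexicon is illustrative:

```python
# Phoneme detection: is the target phoneme present in the word's pronunciation?
# Note that the answer follows the phonemes, not the letters: "cat" contains
# /k/ even though it is spelled with a "c".
lexicon = {
    "sent": ["s", "eh", "n", "t"],
    "cat":  ["k", "ae", "t"],
}

def phoneme_in_word(phoneme: str, word: str) -> bool:
    return phoneme in lexicon[word]

print(phoneme_in_word("k", "sent"))  # False -> subject answers "No"
print(phoneme_in_word("k", "cat"))   # True  -> subject answers "Yes"
```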
00:19:45 -- And what we found was --
00:19:48 -- [ Pause ]
00:19:55 -- Now understand we did separate our subject groups with hearing controls, deaf cuers,
00:20:02 -- hearing signers, and deaf signers.
00:20:05 -- [Pause ]
00:20:10 -- In terms of accuracy, we did find a significant difference of the deaf signers compared
00:20:17 -- to the other three subject groups, but among the other three subject groups,
00:20:22 -- there was no significant difference.
00:20:26 -- Deaf signers also took a little bit longer to answer, whereas among the deaf cuers,
00:20:37 -- hearing signers, and hearing nonsigners, there was no significant difference.
00:20:46 -- So this is a behavioral measure of the difference.
00:20:50 -- We see some similarities and some dissimilarities,
00:20:53 -- but this leads us to the question then how does this apply to the brain?
00:20:57 -- Would there be any neural differences in these subject groups?
00:21:01 -- So we used functional magnetic resonance imaging technology to see if we could get
00:21:07 -- at whether or not there were neurological differences.
00:21:11 -- [ Pause ]
00:21:26 -- >>So, this is our question --
00:21:29 -- [ Pause ]
00:21:43 -- We wanna know what is the relationship
00:21:46 -- between the neural signature and behavioral aspects.
00:21:52 -- Is there a sensory impact or a language impact on individuals in terms
00:22:02 -- of developing reading skills and --
00:22:06 -- [ Pause ]
00:22:17 -- Before I get into that and our specific research, I wanna talk a little bit
00:22:23 -- about what we know from the literature of studying hearing people, typical readers.
00:22:29 -- [ Pause ]
00:22:36 -- The left occipitotemporal region is an area that has been known to facilitate direct access
00:22:47 -- from orthography to lexical items or the lexical schema, for what we call highly familiar words.
00:22:56 -- There's more direct access, perhaps, in this area for familiar words than for unfamiliar words.
00:23:04 -- The left temporoparietal region is known in terms
00:23:14 -- of rule-based grapheme-to-phoneme analysis, for when you're trying to figure out what the
00:23:22 -- grapheme-to-phoneme relationship is.
00:23:28 -- The left inferior frontal gyrus is known to facilitate articulatory assembly
00:23:40 -- or phonological assembly, to help sequence a word out.
00:23:47 -- So what is it then we know about deaf people?
00:23:54 -- There have been only two studies on deaf people
00:23:57 -- and reading thus far, let me share those with you.
00:24:01 -- [ Pause ]
00:24:13 -- In their study, they wanted to see whether users
00:24:19 -- of ASL are dominantly using their left hemisphere, as English users do.
00:24:27 -- Their subjects, while in the fMRI machine, were shown words one at a time,
00:24:34 -- and then at the end they were given a word and asked to decide whether
00:24:38 -- that word had appeared previously or not.
00:24:40 -- And the hearing people showed strong left hemispheric activation,
00:24:46 -- but the deaf people did not.
00:24:51 -- They showed only robust right brain activity.
00:24:56 -- But in contrasting these two groups we find that there are two possible differences.
00:25:00 -- There is a sensory experience difference
00:25:02 -- and a language experience difference contributing to this.
00:25:07 -- [ Pause ]
00:25:15 -- The top two show the hearing subjects,
00:25:18 -- and again the left hemispheric activity; the middle two are the deaf subjects,
00:25:24 -- and there we see right hemispheric activity.
00:25:28 -- And then with hearing signers, they found left hemispheric activity.
00:25:33 -- So what they concluded was --
00:25:36 -- [ Pause ]
00:25:44 -- That deaf people because of the use
00:25:46 -- of a manual language were depending on the right hemisphere.
00:25:50 -- There was another study --
00:25:52 -- [ Pause ]
00:26:01 -- By Aparicio et al.
00:26:06 -- It was conducted in France.
00:26:08 -- Their subjects were shown a word and asked
00:26:11 -- to make a lexical decision, is this a true word, yes or no?
00:26:16 -- And they found that deaf people did activate the left inferior frontal gyrus,
00:26:21 -- the left occipitotemporal, and inferior parietal regions, as did the hearing subjects.
00:26:28 -- [ Pause ]
00:26:42 -- The deaf subjects also showed higher levels of activation
00:26:47 -- in these three particular areas than the hearing controls did.
00:26:53 -- Their interpretation --
00:26:55 -- [ Pause ]
00:27:01 -- Was that the increased activation in those areas was serving as a compensatory mechanism
00:27:08 -- because deaf people didn't have auditory access.
00:27:12 -- And it wasn't very clearly explained, again,
00:27:16 -- what communication methods the deaf subjects were using,
00:27:20 -- what language background they had.
00:27:25 -- And because of this, this helped lead to our study.
00:27:29 -- We wanted to further differentiate between groups of deaf people.
00:27:34 -- [ Pause ]
00:27:40 -- So some of these studies have used hearing speakers of English
00:27:44 -- as a control and have studied deaf signers.
00:27:47 -- But again as I mentioned earlier, and what we saw in the Neville study, is that they differ
00:27:53 -- in two ways: their sensory experience and their early language experience,
00:27:58 -- so hearing signers were added to the Neville study.
00:28:03 -- [ Pause ]
00:28:11 -- To be able to make this comparison.
00:28:13 -- But then again we're talking about only the one group.
00:28:18 -- The hearing speakers of English were lonely; nobody else was using English.
00:28:23 -- So we added a fourth group, deaf cuers of English.
00:28:28 -- As a result, we're able to make this contrast in terms of language --
00:28:33 -- [ Pause ]
00:28:45 -- Or sensory experience.
00:28:47 -- And we can make direct correlations between these subject groups.
00:28:52 -- [ Pause ]
00:29:09 -- All of our subjects were adults, college educated, right handed.
00:29:17 -- None of them used cochlear implants or had any sort of psychiatric disorders.
00:29:23 -- They were all skilled readers.
00:29:26 -- And the most important point to make here is in terms of the reading fluency,
00:29:30 -- they were equally fluent in the reading of English.
00:29:33 -- The deaf individuals were born deaf or deafened by the age of 2 years old.
00:29:38 -- They have continued to use ASL or cued English since before the age of 5.
00:29:48 -- And for our hearing signers, they had at least one deaf parent
00:29:52 -- and used ASL before the age of two.
00:29:55 -- [ Pause ]
00:30:07 -- Now this is how we study the reading of English in the fMRI machine.
00:30:14 -- We used what is called an implicit reading task and let me tell a little bit about that.
00:30:19 -- In the fMRI machine, subjects see a screen on which they're shown words, and we ask the subjects,
00:30:29 -- "Does this word have a tall letter in it?
00:30:32 -- Yes or no?"
00:30:33 -- For example, the word alarm does,
00:30:36 -- it's the letter L. The word parry does not have a tall letter.
00:30:43 -- So it's very easy for the subjects to answer.
00:30:46 -- But when they try to identify whether there's a tall letter, they can't help
00:30:52 -- but actually read the word. We see a similar effect when people read colored text
00:31:00 -- and are asked to identify the color of the text at the same time;
00:31:04 -- subjects can't help but read what the word is.
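The decision itself is trivial to state in code; the ascender set below is standard typography, not a claim about the study's exact stimulus list:

```python
# Implicit reading task: does the word contain a "tall" letter (an ascender)?
TALL = set("bdfhklt")  # lowercase ascenders

def has_tall_letter(word: str) -> bool:
    return any(ch in TALL for ch in word.lower())

print(has_tall_letter("alarm"))  # True, because of the 'l'
print(has_tall_letter("parry"))  # False, no ascenders
```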
00:31:08 -- We presented the same task with what is called pseudofont.
00:31:15 -- This pseudofont measurement becomes our baseline
00:31:19 -- because in fMRI technology, we use cognitive subtraction.
00:31:26 -- If we ask a subject to press
00:31:28 -- the button in one condition and then
00:31:35 -- ask the subject to do the same thing in a different condition, we are able to subtract
00:31:41 -- out things such as motor movements.
00:31:45 -- So our subjects were asked to do the same thing with real words and with pseudofont.
00:31:49 -- And the measurements we get from the pseudofont condition we are able to subtract out;
00:31:54 -- we subtract out motor responses and other types of responses, and the areas
00:31:59 -- that are left are the areas dedicated to reading within the brain.
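The logic of that cognitive subtraction, as a toy sketch over made-up voxel maps; real analyses fit a statistical model per voxel rather than literally subtracting:

```python
import numpy as np

shape = (64, 64, 32)                     # hypothetical voxel grid
word_map = np.random.rand(*shape)        # stand-in: activation during real words
pseudofont_map = np.random.rand(*shape)  # stand-in: activation during pseudofont

# Both conditions share visual input and the button press, so the
# difference map removes those components, leaving reading-related signal.
reading_map = word_map - pseudofont_map
surviving = reading_map > 0.5            # toy threshold standing in for a stats test
print(int(surviving.sum()), "voxels surviving the (toy) threshold")
```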
00:32:05 -- Cathy Price has shown that this type of approach, the implicit reading approach, recruits
00:32:10 -- the same areas that explicit reading does.
00:32:14 -- Now a nice thing about this task is subjects don't have to verbally respond.
00:32:24 -- It's just a button push.
00:32:26 -- So the subjects in all groups respond the same way.
00:32:31 -- [ Pause ]
00:32:37 -- Their task performance is equivalent
00:32:41 -- so it's almost impossible to make a mistake, so to speak.
00:32:45 -- [ Pause ]
00:32:55 -- With our subjects, there is no statistical difference among the four groups
00:33:02 -- so that any areas of activation we find, we can attribute to sensory experience
00:33:09 -- or language experience depending on the contrast that we're making.
00:33:13 -- [ Pause ]
00:33:25 -- This is the imaging protocol that we use.
00:33:27 -- If you're interested, the details are on the left, and if not,
00:33:31 -- you can look at the pretty picture of the fMRI machine on the right.
00:33:35 -- [ Pause ]
00:33:45 -- We use a common practice among fMRI practitioners, something that's called
00:33:51 -- block design.
00:33:54 -- You present each condition within a set block, and then you have a fixation period
00:34:00 -- or rest period, and then you present more stimuli.
00:34:03 -- And so, ultimately it ends up looking like this.
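A sketch of such a schedule in time; the durations and ordering are illustrative, not the study's actual protocol:

```python
# Block design: alternate task blocks with fixation (rest) periods.
def block_schedule(conditions, block_s=30, fixation_s=15, cycles=2):
    timeline, t = [], 0
    for _ in range(cycles):
        for cond in conditions:
            timeline.append((t, t + block_s, cond))
            t += block_s
            timeline.append((t, t + fixation_s, "fixation"))
            t += fixation_s
    return timeline

for start, end, cond in block_schedule(["real words", "pseudofont"]):
    print(f"{start:4d}-{end:4d} s  {cond}")
```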
00:34:07 -- [ Pause ]
00:34:14 -- And the subjects again are just pushing a button while they're in the machine.
00:34:18 -- [ Pause ]
00:34:30 -- Now remember earlier I had mentioned these three areas; they come from the Pugh model.
00:34:37 -- We wanted to think about what we could predict for each of our subject groups
00:34:43 -- in these three areas for ASL users, based on what we know of phonology in deaf subjects.
00:34:57 -- We would predict that deaf people do not rely very heavily on areas for phonology;
00:35:04 -- the areas they rely on may be memory related or visually related.
00:35:08 -- So in these three areas, we would expect that the hearing subject groups
00:35:15 -- would show more activation than the deaf.
00:35:18 -- Again, all subjects are fluent readers, skilled readers.
00:35:22 -- There is one area where we may find no difference
00:35:25 -- or we may see deaf activate more than hearing.
00:35:31 -- For low-level access, when we're talking about orthography to lexicon,
00:35:37 -- they may perform similarly, or it may be that deaf people are relying on that area.
00:35:42 -- We didn't know; we wanted to take a look at the data and see what we saw.
00:35:49 -- And now I'd like to share with you what each group did in itself, what we call within-group,
00:35:57 -- and then I will show you a comparison between two groups.
00:36:01 -- [ Pause ]
00:36:15 -- Now, I'm not gonna talk about each region mentioned here, but these images show
00:36:21 -- where people activated in the real-word condition more than the pseudofont condition.
00:36:27 -- And the point here is that the deaf signers are recruiting some
00:36:33 -- of the very same areas that hearing nonsigners do, which is what we would expect.
00:36:39 -- But when we look at deaf signers, specifically --
00:36:42 -- [ Pause ]
00:36:55 -- We do see a lot of left hemisphere activation and a little bit on the right which seems
00:37:01 -- to contradict Neville's findings.
00:37:04 -- They found deaf signers had activation
00:37:07 -- in the right hemisphere and not the left.
00:37:12 -- Here it shows that deaf people are using some of these same regions as the hearing signers.
00:37:19 -- We wanted to create a more direct comparison
00:37:24 -- so that we could now compare the hearing signers and the deaf signers.
00:37:28 -- [ Pause ]
00:37:37 -- The blue areas are areas where the hearing signers showed
00:37:41 -- relatively more activation than the deaf signers.
00:37:46 -- Maps were taken from each group and then contrasted in a software program called MEDx.
00:37:52 -- In doing this comparison, we found two differences in the left hemisphere.
00:38:01 -- We wanted to take a closer look at the specific regions and the activation pattern there within.
00:38:11 -- [ Pause ]
00:38:19 -- What we did was we looked at the real-word condition versus the fixation condition,
00:38:25 -- and then that comparison was done with the pseudofont versus fixation condition.
00:38:32 -- First, we wanted to make sure that we were identifying regions
00:38:39 -- that showed significant difference
00:38:42 -- in the real-word condition versus the pseudofont condition.
00:38:45 -- And if there was a significant difference within groups, we looked across groups to see
00:38:54 -- if there was a significant difference.
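An across-group test on one region's contrast values might look like the sketch below; the per-subject numbers are fabricated stand-ins for the values the MEDx comparison produced:

```python
from scipy import stats

# Compare word-minus-pseudofont contrast values in one region between
# hearing signers and deaf signers with an independent-samples t-test.
hearing_signers = [0.42, 0.38, 0.51, 0.47, 0.35, 0.44]  # fabricated per-subject values
deaf_signers    = [0.21, 0.18, 0.29, 0.25, 0.15, 0.22]

t, p = stats.ttest_ind(hearing_signers, deaf_signers)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests the groups differ here
```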
00:38:56 -- The left inferior parietal lobule is significantly more active in the hearing signers
00:39:02 -- than the deaf signers, and not only there, but here, in the left inferior frontal gyrus.
00:39:14 -- This seems to support our predictions that deaf people aren't relying
00:39:22 -- on a phonological system as much as hearing signers are.
00:39:26 -- [ Pause ]
00:39:39 -- Which seems to be more active in the hearing signers
00:39:42 -- than the deaf signers possibly due to their sensory difference.
00:39:48 -- It could also be -- again this area has been activated in bilinguals more than monolinguals.
00:40:00 -- Now that particular piece of research seems to have suggested
00:40:07 -- that that area modulates language control when dealing with two languages.
00:40:15 -- But again, we feel that most likely this is due to a sensory experience difference.
00:40:24 -- [ Pause ]
00:40:31 -- You also find in the right hemisphere
00:40:34 -- that the hearing signers activate the right inferior frontal gyrus more
00:40:38 -- than the deaf signers which is a little bit puzzling
00:40:41 -- because in the Aparicio study they showed that their deaf population,
00:40:45 -- their deaf subjects activated that area more than in their hearing subjects.
00:40:50 -- But here we're finding the reverse that it's more active
00:40:54 -- in the hearing signers than the deaf signers.
00:40:57 -- Again, this could possibly be due to the sensory experience.
00:41:01 -- But in which areas do we see deaf signers activating more than hearing signers?
00:41:07 -- We did find one.
00:41:09 -- [ Pause ]
00:41:21 -- There was one area significantly more active in the deaf signers than the hearing signers.
00:41:29 -- The hearing signers showed relatively greater activity in the pseudofont condition
00:41:36 -- but not in the real-word condition.
00:41:39 -- Again, we can predict this is possibly due to the sensory difference.
00:41:44 -- [ Pause ]
00:41:55 -- So for this first comparison what we can summarize is the following --
00:42:01 -- [ Pause ]
00:42:12 -- Both deaf and hearing signers seem to rely heavily on the left hemisphere
00:42:18 -- and they also had the left fusiform and left inferior frontal gyri as well
00:42:24 -- as the bilateral middle temporal gyri in common.
00:42:28 -- When looking at the contrast of the two groups --
00:42:33 -- [ Pause ]
00:42:41 -- The important point to make here is that a deaf signer, whether a fluent reader
00:42:46 -- or not, can still read without relying heavily on phonological information.
00:42:52 -- [ Pause ]
00:43:03 -- And also, it does not seem to support the idea
00:43:06 -- of that compensatory mechanism.
00:43:11 -- [ Pause ]
00:43:26 -- Now, even though the right insula, as I mentioned before, could possibly be more active due to an increase
00:43:32 -- in mental effort, I really believe that it's more active
00:43:37 -- in the deaf signers due to the sensory difference.
00:43:41 -- [ Pause ]
00:43:51 -- Now, I'd like to look at the other comparison
00:43:55 -- for the two users -- two groups that use English.
00:43:58 -- Many studies have used deaf signers and compared them to hearing signers,
00:44:04 -- but what about deaf people who use other communication methods growing up?
00:44:09 -- There are some who are native English users, some who use other methods,
00:44:15 -- and we wanted to look at those groups.
00:44:16 -- So, when we're looking at two groups of English users,
00:44:20 -- there are a couple of predictions we would make.
00:44:25 -- Now, the deaf cuers did not show any behavioral difference in the tasks thus far.
00:44:32 -- So we would predict that there would be no functional neuroanatomical difference.
00:44:39 -- [ Pause ]
00:44:51 -- In our hearing nonsigners, we found these areas activated
00:44:56 -- which is consistent with the literature.
00:45:00 -- [ Pause ]
00:45:10 -- Our deaf cuers looked very similar, showing the same strong, robust activity in the left hemisphere.
00:45:20 -- [ Pause ]
00:45:31 -- When we do a direct comparison now of our English users,
00:45:36 -- we wanted to see where the differences were.
00:45:40 -- [Pause] There were no differences.
00:45:47 -- I was a little bit surprised, but it is what we expected
00:45:54 -- because they're both using phonological processing,
00:45:57 -- so ultimately there would be no difference.
00:46:07 -- However, there is a slight difference when we look
00:46:09 -- at where deaf cuers were activating relatively more than the hearing English users.
00:46:17 -- [ Pause ]
00:46:27 -- They showed increased activity, the deaf cuers show increased activity
00:46:31 -- in the left superior temporal gyrus.
00:46:35 -- Now this particular part of the superior temporal gyrus is more posterior
00:46:43 -- than what is considered the classic area
00:46:46 -- for phonological processing or auditory processing, which is more anterior.
00:46:57 -- Now, this possibly could be sensory difference because we did set these groups
00:47:02 -- up to make sure they are matched for language experience.
00:47:05 -- The only difference between the groups is sensory experience,
00:47:08 -- so this could be due to sensory experience.
00:47:14 -- Some could argue perhaps that it's due to deaf individuals' use of a visual-spatial system,
00:47:26 -- but that's not been justified quite yet.
00:47:30 -- [ Pause ]
00:47:38 -- So for our English users, as I mentioned before --
00:47:46 -- [ Pause ]
00:47:58 -- The left middle temporal gyrus is what appeared for both groups.
00:48:03 -- When we looked at the contrasts of the two groups --
00:48:06 -- [ Pause ]
00:48:16 -- There was truly no significant difference between them except, again, for that one area.
00:48:23 -- [ Pause ]
00:48:33 -- Now, so, what does this bring us to?
00:48:39 -- [ Pause ]
00:48:47 -- The point here was to say that people who have grown up signing, who are deaf,
00:48:51 -- don't always rely on phonological systems but yet become skilled readers,
00:48:55 -- so they're using other strategies or pathways to develop reading.
00:49:03 -- [ Pause ]
00:49:10 -- The deaf users of English had full phonology represented,
00:49:16 -- and their fMRI results look very similar to those of hearing speakers of English.
00:49:23 -- [ Pause ]
00:49:31 -- So the sensory experience does have an effect on the neural signature.
00:49:38 -- So when looking at these populations, we need to think not only in terms
00:49:44 -- of what effect the language experience is having but also the sensory experience,
00:49:49 -- and that deaf individuals don't have to rely on phonology; they can develop other methods.
00:49:57 -- That's why I really appreciated this study, because we were able to separate
00:50:01 -- out deaf signers and deaf users of English
00:50:05 -- and to identify the neural anatomy related to single-word reading.
00:50:16 -- And there is a group of people I need to thank.
00:50:19 -- We have many research assistants who are able to help us,
00:50:25 -- and the Center for Functional and Molecular Imaging, where the machine is.
00:50:29 -- I want to thank those individuals who let us use the machine.
00:50:32 -- Thank my interpreters, our interpreters who've been working with us.
00:50:37 -- And very much have to thank our volunteer subjects.
00:50:42 -- Thank you.
00:50:44 -- My question has to do with future directions for research.
00:50:48 -- Do you have any plans for any followups?
00:50:53 -- There are actually several that can come from this, first looking
00:51:00 -- at children in terms of development.
00:51:03 -- There is already research done with hearing speakers of English
00:51:07 -- and their development of reading.
00:51:10 -- We want to see perhaps if the same thing could be applied to children
00:51:16 -- who are using cued language, will they -- develop at the same rate.
00:51:23 -- We've already gathered some data regarding phonological information, specifically rhyme.
00:51:30 -- The task is very specific.
00:51:31 -- These people learn to use their phonological awareness skills to decode words
00:51:40 -- and they're asked to read things that sometimes are phonological tasks and sometimes not
00:51:46 -- and then we'll look at those data, which have been collected but not analyzed yet.
00:51:52 -- I have a followup question with that, I was just thinking about the term skilled readers.
00:51:58 -- My impression is that phonological awareness isn't necessarily required then
00:52:05 -- to be able to read fluently.
00:52:08 -- Okay. So will there be any research in identifying what other parts of the brain
00:52:13 -- or what other systems there are that they are using then to become fluent readers?
00:52:21 -- It's a good question.
00:52:23 -- We're still finding out which areas are responsible for what;
00:52:34 -- the fusiform gyrus, for example, could be a direct lexical region, but
00:52:42 -- when we looked at whether or not the deaf signers were activating that area more
00:52:46 -- than the others, they were behaving equally.
00:52:50 -- Are they using other structures?
00:52:52 -- I think there needs to be more research.
00:52:55 -- Basically, general research: what does each region of the brain possibly do,
00:53:03 -- maybe it's more orthographically related or is it due to mental articulators
00:53:11 -- that are being used to help people develop reading.
00:53:15 -- Now, remember I commented earlier about the Hanson study,
00:53:19 -- they found that skilled readers have phonology even
00:53:21 -- if they don't use speech or spoken language.
00:53:25 -- They have phonology, but the question is how they develop this phonology.
00:53:30 -- That's where this varies.
00:53:32 -- Nice presentation.
00:53:34 -- I was thinking about the population that you had of people -- deaf people that grew up cuing,
00:53:39 -- what about deaf people who grew up orally who didn't sign or cue
00:53:44 -- and had to rely solely on audition or lip reading.
00:53:48 -- Do you have information about that population that you could use to compare
00:53:53 -- to the two groups that you've already looked at?
00:53:57 -- Good question.
00:54:02 -- We have collected data on a group of deaf oral subjects.
00:54:08 -- We haven't done the analysis, but a few things
00:54:11 -- we would predict: our deaf cuers and our deaf oralists would look very similar behaviorally,
00:54:20 -- but I don't know if their functional anatomy would be similar.
00:54:24 -- When you look at the behavioral data, the deaf oralists, who also took that phoneme detection test
00:54:34 -- that I mentioned earlier, performed equally with the deaf cuers and the hearing nonsigners,
00:54:42 -- but their response time was slower, significantly slower.
00:54:47 -- So when you look then at the functional anatomy, what does that look like?
00:54:54 -- That's what we're still looking into.
00:54:58 -- I just want to make a comment in response to your question earlier, a few questions back.
00:55:11 -- I think the question you were asking was whether there is an explanation of how
00:55:19 -- deaf signers who are skilled at reading accomplish reading,
00:55:24 -- whether there is some indication in the brain activity maps of other pathways
00:55:29 -- they may be using, and we really didn't see much that could account
00:55:34 -- for how perhaps they read in a way that may be different
00:55:39 -- from the groups they were being compared to.
00:55:41 -- But one thing to keep in mind is that the task they were using sort
00:55:45 -- of examines the brain with a very broad brush.
00:55:48 -- We can't pull out from that task whether they're accessing semantics or phonology or orthography;
00:55:54 -- we assume that all of these are being accessed during that paradigm,
00:55:58 -- but we can't really tell you precisely which one.
00:56:00 -- And so, one way to take your question further would be
00:56:04 -- to more specifically probe our subjects during a task that requires them
00:56:09 -- to engage a phonological process.
00:56:12 -- And actually, that's another data set that we have,
00:56:17 -- and Daniel is analyzing that data.
00:56:19 -- And I suspect he is suppressing the fact that he has that information
00:56:24 -- because he hasn't completely analyzed it yet.
00:56:28 -- And as you can imagine it's still overwhelming having all these data sets
00:56:31 -- with all these different groups, and so we've been going through them bit by bit.
00:56:34 -- But the truth is, the answer to your question is sitting somewhere deep
00:56:39 -- down in our data drives; we don't know it yet, but we'll have that answer eventually,
00:56:44 -- and maybe Danny will come back and talk about that another time.
00:56:47 -- [ Pause ]
00:56:59 -- Your presentation was really interesting.
00:57:01 -- We need to think more about what will come beyond this, and I do have a few questions.
00:57:08 -- Dr. Eden and I have talked at some length about the relationship of looking
00:57:14 -- at the framework that you're using and what assumptions are inherent in that.
00:57:18 -- And a lot of people have assumptions about children's use of phonology in reading, and that
00:57:22 -- phonological experience has to be a precursor to reading.
00:57:26 -- But you've mentioned, I guess on your third slide.
00:57:29 -- Is there -- could you go back and show the third slide.
00:57:34 -- It's this one here.
00:57:36 -- [ Pause ]
00:57:51 -- So here when you talked about the operational definition of phonology
00:58:01 -- and then I wasn't sure what you meant by testing the different subjects' experience,
00:58:11 -- because they're presented with the word visually,
00:58:13 -- so they're seeing a print system, not a sound system.
00:58:17 -- So that's where I'm getting confused because they're looking at it
00:58:21 -- and the stimulus is the same, I understand, but they're being, you know,
00:58:25 -- presented with like the one word that you used, the grapheme relationship.
00:58:31 -- You're not really looking at graphological processes, though, and that's where, you know,
00:58:41 -- deaf people are accustomed to seeing the graphemes and the shapes.
00:58:47 -- So they get accustomed to the orthography of letters
00:58:50 -- without really being influenced by the phonology of the words.
00:58:52 -- And you said that they are not relying on phonology which I agree with but there seems
00:58:56 -- to be another point of view where you could take a look
00:59:00 -- at the graphological processes that take place in deaf readers.
00:59:05 -- Now, also, when you mentioned fingerspelling as a sign or an English correspondence, like, you know,
00:59:12 -- when you talk about, let's take for example the word boy.
00:59:15 -- The sign for boy -- the fingerspelled word for boy doesn't exactly match,
00:59:21 -- but boy has three distinct handshapes when you fingerspell it,
00:59:25 -- so you could look at the handshapes in fingerspelling as having a sort of one-to-one
00:59:30 -- correspondence with graphology, or you could look at it with cheremes,
00:59:36 -- like Sophie [phonetic] talked about, on how cheremes can be assigned to specific graphemes
00:59:44 -- because ASL is rich with cheremes.
00:59:46 -- And then when you talk about the 26 handshapes and the shapes of the English letters,
00:59:53 -- the 26 letters, and that we have 26 handshapes to match the print,
00:59:57 -- I've seen deaf children really use -- they try to represent handshapes of fingerspelling
01:00:04 -- on paper when they're trying to write, but you don't see them drawing
01:00:08 -- out handshapes that are represented by signs.
01:00:12 -- So there is a different process around what deaf children are seeing and it's kind of aesthetic.
01:00:18 -- So it's not just what -- and it's not influenced by anything that they're hearing
01:00:23 -- because deaf people are having just that direct experience with the visual information.
01:00:29 -- So that's just one comment, but then could you talk about graphology
01:00:33 -- as possibly another explanation for how deaf people are accessing texts
01:00:38 -- and becoming skilled readers, as opposed to phonology? What are your thoughts on that?
01:00:46 -- Thanks.
01:00:46 -- The reason I mentioned fingerspelling is because when you fingerspell,
01:00:50 -- you do have the 26 handshapes that correspond to the graphemes.
01:00:55 -- There's the A that has a specific location and movement and it's considered a morpheme in ASL,
01:01:05 -- whereas spoken language, you can't produce just one morpheme, generally,
01:01:12 -- and have meaning unless there is meaning already in the language.
01:01:16 -- But when it comes to ASL, the letter A is a morpheme.
01:01:21 -- Now, when it comes to this morpheme-to-orthographic, or graphemic, correspondence,
01:01:31 -- the fingerspelled letters have it; in terms of graphology --
01:01:38 -- Sorry, to go back to what you just said.
01:01:40 -- Sorry, I just wanted to go back to what you just said, I understand the first part and I agree
01:01:45 -- with that about the sign but then about it as a handshape and the connection with meaning,
01:01:54 -- I think that's different than what I meant by cherological,
01:01:57 -- so like with boy when you fingerspell boy and you've got all three handshapes being produced
01:02:02 -- in sequence, it then takes on a different shape and that is closer to the English text, though,
01:02:07 -- than the sign for boy, so what -- so it becomes lexicalized, what's that relationship?
01:02:13 -- With the word boy, again, the elements of B-O-Y on the hands have
01:02:20 -- that correspondence to the text B-O-Y.
01:02:24 -- And it's easy then for deaf children to make that connection as they learn.
01:02:31 -- So, it becomes a three-step process then for deaf kids: they learn the sign,
01:02:35 -- they learn the fingerspelling, and then they learned the print.
01:02:41 -- And then for signs that don't have this, that are not fingerspelled,
01:02:46 -- it becomes a two-step process where you go from the sign to the text.
01:02:52 -- And so they're using strategies to make it a two- or three-step process.
01:02:56 -- Sorry. If nobody minds.
01:02:59 -- I just that [inaudible] good example when you said, [inaudible].
01:03:05 -- Okay. So you're talking about [inaudible] fingerspelling the boy is [inaudible]
01:03:16 -- phonological level or not, the morphological level, but on the cherological level,
01:03:21 -- it fits and [inaudible] explanation.
01:03:26 -- So with spoken language, hearing kids sometimes access their phonologies when they're reading,
01:03:33 -- and they are attaching what they already know in speech to the text.
01:03:37 -- Are deaf kids doing that when they're presented
01:03:39 -- with texts when they're already signers?
01:03:42 -- As I mentioned before, we don't use a written system, I should say.
01:03:48 -- We have signs.
01:03:52 -- So deaf people don't have access to decode; deaf parents have to sign and then they have
01:04:03 -- to create the fingerspelled version of it to help give the children that overt relationship,
01:04:11 -- see the overt relationship between the letters of the hand and the letters on the page.
01:04:19 -- Hearing people do it differently.
01:04:21 -- It's a different process, I think.
01:04:23 -- Now cognitively, maybe they're functioning very similar,
01:04:27 -- that's an interesting question to be researched.
01:04:30 -- [ Inaudible Remark ]
01:05:03 -- What I would predict, you're talking about in terms of the behavior, in their behavior?
01:05:08 -- [ Inaudible Remark ]
01:05:26 -- Again, I think you would see a difference in the behavior.
01:05:33 -- The ASL signers who are good readers are probably using more
01:05:38 -- of their own phonological encoding or decoding and it may not be the same type of coding
01:05:44 -- that a hearing user of English has, for example, my wife, she's a perfect example.
01:05:52 -- My wife tends to say the P in psychology on her lips.
01:05:58 -- Now in English, the P is silent, but my wife doesn't know that, she doesn't care,
01:06:05 -- but she still has phonology, a mental phonology from which she's able
01:06:10 -- to glean information while reading, like the word through, meaning through the hallway.
01:06:21 -- There's only, what?
01:06:23 -- Basically three phonemes.
01:06:25 -- People who are able to make that connection from grapheme
01:06:30 -- to whatever phonological system they're using
01:06:34 -- find it easier and quicker to read.
01:06:37 -- They're not memorizing letters having to recall letters.
01:06:41 -- It becomes a more automatic function than when you look at each of the graphemes.
01:06:47 -- Now for the -- when we talk about pseudowords, would that help?
01:06:57 -- I don't know.
01:06:58 -- It possibly would help them do better when they read,
01:07:04 -- to develop towards a native reading level.
01:07:11 -- [ Inaudible Remark ]
01:07:55 -- If you're asking about the tall letter task, there's no --
01:07:59 -- there would be no performance difference.
01:08:01 -- But in the functional MRI, you may see a difference.
01:08:05 -- We haven't gotten there yet; that could be another stage of our research, looking
01:08:10 -- at deaf individuals who are fluent readers versus those who are not.
01:08:14 -- Dr. Eden, do you want to add to that?
01:08:20 -- Yeah, just a comment -- [pause] one thing that's nice
01:08:25 -- about this particular paradigm is it's been used in hearing populations
01:08:31 -- of different reading abilities, so it's been used in novice readers
01:08:35 -- and experienced readers to study the functional anatomy of reading,
01:08:39 -- to understand how the neural signature for reading changes as kids become good readers.
01:08:44 -- And it's also been used in hearing populations of kids who are struggling readers,
01:08:47 -- so children who have a reading disability.
01:08:50 -- And the important aspect of this design, and I'm trying to emphasize this, is
01:08:55 -- that you can use this task and have all your subjects perform at the same level
01:09:01 -- in the scanner, and that's important in imaging because if you have a performance difference,
01:09:06 -- it becomes a confound when you interpret the imaging results.
01:09:09 -- Now I can tell you that developmentally, in children who are hearing,
01:09:15 -- the developmental trajectory goes from the back of the brain to the front of the brain.
01:09:19 -- So, the inferior frontal gyrus is the last one to kick in by the time they reach adulthood.
01:09:25 -- And I can tell you that children who are struggling readers who are hearing,
01:09:28 -- there is underactivity in the front of the brain in inferior frontal gyrus and in parietal areas.
01:09:34 -- But your question is what happens if you take deaf individuals who are signers
01:09:39 -- who haven't achieved the same kind of reading skill as the one so who have participated
01:09:43 -- in these studies, and that's a very good question because we just don't know.
01:09:47 -- But I think the important part of this was to say that if you have deaf individuals who use sign,
01:09:53 -- the understanding, or perhaps even the scientific understanding,
01:10:00 -- of how they acquire good reading skills is really not very well understood.
01:10:03 -- The answer is the neural representation of the skilled reader doesn't always look the same.
01:10:09 -- So, our hearing subjects and our deaf subjects have managed to get to the same levels
01:10:15 -- of reading, but they do it using different systems in the brain.
01:10:19 -- We didn't see areas that were more active, we see areas that are under active.
01:10:23 -- And I think there's a lesson here again for the hearing community which is we see underactivity
01:10:28 -- in some of these areas in our kids who were struggling readers, but these areas are apparently not necessary
01:10:34 -- to achieve good reading skills, because the presumption has been those are the areas
01:10:38 -- that you need to become a skilled reader and presumably if you look at the model
01:10:42 -- of teaching kids to become skilled readers,
01:10:45 -- those areas have to somehow become engaged in order to meet that goal.
01:10:48 -- But what's interesting in these studies is that they don't.
01:10:51 -- You can get there without involving these areas and I think that's important to know.
01:10:58 -- [ Inaudible Remark ]
01:11:30 -- It is possible that these areas are engaged, but again remember we've used hearing signers
01:11:36 -- and deaf signers, so their motor representation in language would be the same or perhaps not.
01:11:45 -- It is difficult to identify; as Dr. Eden said previously, we have these images
01:11:53 -- and we're looking at them with a broad stroke,
01:11:56 -- and it's very difficult to fine-tune to these areas.
01:12:01 -- And we don't have the technology currently; maybe in the future,
01:12:04 -- with better technology, EEG, for example, we'll be able to find those minute differences.
01:12:11 -- But for motor response as a possible area, well, the difficulty with motor areas is that
01:12:18 -- we already know where the motor areas of the brain are; it tends to be this area right here.
01:12:28 -- But when we do see something show up in other areas,
01:12:34 -- we tend not to make the assumption that it's motor related.
01:12:38 -- When looking at other literature, we try to find what areas they've identified
01:12:43 -- to help us understand and interpret what we're finding.
01:12:47 -- But at the same time, you could say that even though we're not making these fine distinctions,
01:12:52 -- our subjects were all matched for language experience,
01:12:57 -- and the only difference was a sensory difference.
01:13:01 -- So, any of the differences that we saw between the two groups, all things being equal,
01:13:06 -- reflect that sensory difference, but we'll have to delve into that further to find any more.
01:13:11 -- [Inaudible] but I think what you're getting at, and this is one
01:13:15 -- of the questions we had too, is: when a user of sign sees the word,
01:13:19 -- do they recall the motor program that they would use to perhaps say
01:13:23 -- that word in sign? And we didn't see that.
01:13:26 -- And because of the time, there isn't so much time that Daniel spent talking
01:13:32 -- about the analysis, but this is a whole-brain analysis.
01:13:34 -- So all the areas that could have been different would have been identified in this analysis,
01:13:38 -- since we didn't use a region-of-interest approach,
01:13:41 -- and so there were no differences in motor areas in these studies.
01:13:45 -- But yesterday, an interesting paper came out looking at the functional anatomy of reading
01:13:50 -- in Chinese and comparing children who are skilled readers
01:13:53 -- and those who were struggling readers.
01:13:55 -- One thing that's interesting is when you look at a logographic writing system,
01:13:59 -- you're asking again a very different question about what is --
01:14:02 -- how does the brain deal with reading
01:14:05 -- when the information that's presented is logographic opposed to alphabetic.
01:14:10 -- And other differences are found in the frontal brain areas; this is the region
01:14:15 -- of the brain that has been associated
01:14:18 -- with working memory skills but also with motor performance.
01:14:22 -- And it turns out that one of the skills that predicts reading in children
01:14:26 -- in China is how good they are at copying the logographs
01:14:31 -- and in fact that's how kids acquire reading skills:
01:14:35 -- by just continuously copying these Chinese characters because they have to learn
01:14:39 -- so many of them, 600 in first grade.
01:14:41 -- And so it could be that the way the brain deals with it is to hold
01:14:44 -- that very strong representation of the motor program to do that
01:14:47 -- and that that's something that perhaps isn't that well developed
01:14:50 -- in kids who are struggling readers.
01:14:51 -- I just wanted to put that in because it's a new finding, and I think it sort of gets
01:14:55 -- at the lines along which you were thinking.
01:14:59 -- [ Pause ]
01:15:09 -- I was wondering for hearing readers, I don't know Guinevere if you look much [inaudible]
01:15:16 -- at hearing people, but thinking about monolingual hearing people as opposed
01:15:20 -- to bilingual hearing people, is their brain activity similar or is it different?
01:15:28 -- Funny you mentioned that.
01:15:29 -- We are currently reviewing the lit, you know, seeing what we know about hearing people
01:15:36 -- who are bilinguals or monolinguals; there's been no direct comparison as of yet,
01:15:42 -- but when we look at the bilingual research, I mentioned earlier that many
01:15:51 -- of the bilingual studies found the left middle frontal region active in bilinguals.
01:16:00 -- But when looking at monolinguals, we don't see much activation in that particular area
01:16:06 -- in the monolinguals, not as much as we find in bilinguals.
01:16:15 -- Their argument has been that possibly this is a regulatory function in this particular region
01:16:21 -- of the brain that an individual is moderating what language they're using.
01:16:27 -- They see a word in one language, they have to suppress the other language to be able to read
01:16:32 -- and process the one they're reading.
01:16:35 -- So maybe it's that suppression activity or is it a code switching activity,
01:16:40 -- we're not exactly sure as to what function it could be related there but there's not a lot
01:16:45 -- of bilingual studies quite yet out there.
01:16:50 -- So you were talking about hearing bilinguals, what about deaf bilinguals.
01:16:58 -- Sorry, I should stand up.
01:16:59 -- So I was just thinking about with your studies since you included hearing people,
01:17:04 -- the hearing nonsigners, are they bilingual people or monolingual English users?
01:17:09 -- Currently, they're monolinguals but we're going to be incorporating another subject group
01:17:13 -- of bilinguals of two spoken languages.
01:17:16 -- Well, because with the deaf group, it seems like some of them were bilinguals,
01:17:19 -- so you want to clean that up a little bit.
01:17:22 -- Currently we are collecting that hearing bilingual cohort and we'll proceed from there.
01:17:29 -- A production of Academic Technology, eLearning, and Video Services.
01:17:34 -- Copyright 2008, Gallaudet University.
01:17:37 -- All rights reserved.