00:00:32 -- Good afternoon, everyone, and welcome to our next VL2 lecture.
00:00:38 -- We've compressed these lectures to roughly one per week for the month of October, which makes
00:00:42 -- it kind of fun to come here every week and hear a nice and exciting lecture.
00:00:46 -- Today we have probably the speaker who's traveled the farthest to get here.
00:00:51 -- This is Doctor Jacqueline Leber; she works at the Free University of Brussels.
00:00:57 -- And I'm not going to say a lot about her because I want to get into her talk,
00:01:00 -- but I think you'll find her talk interesting.
00:01:02 -- She's done extensive research and is a world leader in research on cued speech
00:01:08 -- and reading and the relationship between the two.
00:01:10 -- But wait, I don't think -- you're not talking about that today, are you?
00:01:14 -- No.
00:01:14 -- But she's not going to talk about that today; she's going to talk about the importance
00:01:18 -- of visual information for people with cochlear implants, I believe.
00:01:21 -- So let's pay attention, and here's Doctor Leber.
00:01:28 -- Thank you, Tom, and thank you for attending this seminar and for inviting me
00:01:35 -- to participate in this series of seminars.
00:01:37 -- I'm very glad to be here.
00:01:39 -- And I changed my program a little bit: in the first part I will talk
00:01:45 -- about audio visual integration in children with cochlear implants, and I also have a second part
00:01:56 -- about how the brains of deaf cued speech users process the cued speech information.
00:02:03 -- So these are two recent pieces of research conducted in Brussels.
00:02:11 -- The first, about audio visual integration in children with cochlear implants, is the subject
00:02:21 -- of the PhD of [Inaudible], and you can see [Inaudible] there.
00:02:44 -- And wait -- oops -- well, the starting point is that we all know that speech perception
00:02:53 -- for normally hearing individuals is not a purely auditory phenomenon
00:02:59 -- but rather involves integration of the information obtained through the auditory channel
00:03:06 -- and the information recovered from the lips through speech reading.
00:03:14 -- And we all know also that audio visual speech perception is better than auditory alone,
00:03:22 -- that the visual information provides additional information
00:03:31 -- to the speech perception process, particularly in noisy conditions.
00:03:40 -- One demonstration of the irrepressible and automatic role
00:03:45 -- of the visual information is the McGurk effect, when auditory information
00:03:51 -- and visual lip reading information are put into conflict.
00:03:56 -- For example, when you hear apa and you see a mouth saying aka,
00:04:02 -- hearing individuals do not perceive fully the auditory information nor the visual information
00:04:09 -- but perceive an illusion, an illusory perception, which is the syllable ata.
00:04:20 -- And so the McGurk effect, this illusory perception in response
00:04:28 -- to conflicting stimuli, is considered a sign of audio visual integration.
00:04:46 -- Here are classical examples of the McGurk fusion effect:
00:04:51 -- like I said, auditory apa plus visual lip-read aka leads to the perception of ata.
00:05:03 -- And similarly, with the voiced consonants, auditory aba plus visual aga, you perceive ada.
00:05:12 -- So in terms of place of articulation it means
00:05:16 -- that an auditory bilabial plus a visual velar leads to an audio visual alveolar,
00:05:27 -- so that the illusion is located at an intermediate place of articulation.
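To make the fusion logic concrete, here is a minimal Python sketch of the classic stimulus pairs just described; the mapping is an editorial illustration of the McGurk effect, not code from the study.

```python
# Illustrative mapping of classic McGurk stimulus pairs:
# an auditory bilabial paired with a visual velar is typically
# perceived at an intermediate (alveolar) place of articulation.
PLACE = {
    "apa": "bilabial", "aba": "bilabial",
    "aka": "velar",    "aga": "velar",
    "ata": "alveolar", "ada": "alveolar",
}

# (auditory, visual) -> typical fused percept in hearing adults
MCGURK_FUSIONS = {
    ("apa", "aka"): "ata",  # voiceless pair
    ("aba", "aga"): "ada",  # voiced pair
}

for (aud, vis), fused in MCGURK_FUSIONS.items():
    print(f"hear {aud} ({PLACE[aud]}) + see {vis} ({PLACE[vis]}) "
          f"-> perceive {fused} ({PLACE[fused]})")
```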
00:05:39 -- And our preoccupation, our interest, was how children fitted
00:05:45 -- with a cochlear implant integrate audio visual speech information.
00:06:00 -- So we know that cochlear implants restore the speech perception
00:06:07 -- of [Inaudible] prelingually deaf children, who can achieve extraordinary results
00:06:13 -- in speech perception, in the development of language, and in the contour of the voice.
00:06:20 -- However, unlike normal hearing, cochlear implants do not allow an accurate coding
00:06:29 -- of the place of articulation; this is related to the way
00:06:37 -- the cochlear implant functions.
00:06:39 -- So this leads to confusions between words that differ only by the place of articulation,
00:06:48 -- like, for example, in English, duck and buck.
00:07:05 -- As a consequence, cochlear implanted children or patients lean more on lip reading
00:07:13 -- than normally hearing listeners: because they don't get the information about place
00:07:18 -- of articulation through the auditory channel, they lean more on lip reading
00:07:24 -- to get this information about place of articulation.
00:07:28 -- And so we might hypothesize that audio visual integration is different in this population
00:07:35 -- than in normally hearing children.
00:07:37 -- And there is already evidence for that hypothesis, because, faced
00:07:43 -- with McGurk stimuli, cochlear implanted children and adults generally respond
00:07:50 -- with the visual information that they capture on the lips, which is also the safest modality,
00:07:58 -- because when the cochlear implant is broken or when they have
00:08:03 -- to turn it off they rely on speech reading.
00:08:07 -- And so in this kind of situation, when we test
00:08:10 -- that situation experimentally, they show few or no fusions.
00:08:22 -- So for example, to take the same example again: when presented with an auditory apa
00:08:31 -- and a visual aka, normally hearing people would perceive ata; the children
00:08:40 -- with cochlear implants report aka,
00:08:44 -- as if the auditory information were completely absent or extinguished.
00:08:51 -- Only part of them, those who were fitted earlier
00:08:55 -- than 13 months, experience fusions like normally hearing people.
00:09:12 -- So, in other words, cochlear implanted patients could be described as having an imbalance
00:09:21 -- between audition and vision in audio visual integration, in that
00:09:27 -- for these patients vision is the stable modality and auditory information is less used, at least
00:09:36 -- in the case where the two dimensions are conflicting,
00:09:40 -- as in the McGurk stimuli.
00:09:45 -- So we have run a series of experiments replicating this kind of research,
00:09:51 -- and then, talking with Olga [Inaudible], she had the idea
00:09:58 -- of looking at whether this imbalance could be modified in cochlear implanted children.
00:10:06 -- And she was wondering what would happen if these children were prevented from using lip reading
00:10:13 -- by a technique of visual reduction.
00:10:18 -- I will call this technique visual reduction: blurring the visual image.
00:10:33 -- And she designed a new paradigm, testing whether a modification
00:10:39 -- of this audio visual imbalance leads to a modification of audio visual integration
00:10:45 -- in these children with cochlear implants.
00:10:49 -- But first of all, she tested the paradigm in normally hearing adults,
00:10:54 -- and that's what I will explain now.
00:10:59 -- So we designed an experiment in which we presented aka, afa, ada,
00:11:19 -- and so on: aCa syllables, and that's what I mean by aCa, C for consonant.
00:11:26 -- In two conditions: the first one is with visually clear information,
00:11:31 -- and the other one is a visual reduction condition, in which the quality
00:11:40 -- of the visual speech sequence was reduced by a technique of contrast modification.
00:11:46 -- I can't show you that: I brought the disc with me but I had some problem with it.
00:11:52 -- But you can imagine just a visual image which is not very distinguishable:
00:11:58 -- you see the mouth movements, but as if through snow, something like that.
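Since the clip itself could not be shown, here is a minimal sketch of one way such a visual reduction could be implemented, lowering contrast and blurring each video frame with the Pillow library; the talk does not specify the actual technique or parameters, so the file name, contrast factor, and blur radius below are illustrative assumptions.

```python
from PIL import Image, ImageEnhance, ImageFilter

def reduce_visual_quality(frame: Image.Image,
                          contrast: float = 0.25,
                          blur_radius: float = 4.0) -> Image.Image:
    """Return a degraded copy of one video frame."""
    # Pull pixel values toward gray so lip contours lose contrast.
    degraded = ImageEnhance.Contrast(frame).enhance(contrast)
    # Blur what remains, leaving only coarse mouth movement visible.
    return degraded.filter(ImageFilter.GaussianBlur(blur_radius))

# A clip would be processed frame by frame; here, a single frame.
frame = Image.open("speaker_frame.png").convert("RGB")
reduce_visual_quality(frame).save("speaker_frame_degraded.png")
```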
00:12:07 -- So those are the two conditions common to all the subjects.
00:12:12 -- But in addition, for the hearing people, we presented the syllables masked by auditory noise,
00:12:21 -- and I won't give you too much detail about this, but there were two kinds of noise:
00:12:28 -- stationary noise, which is a constant noise that is very, very masking, and another kind of noise,
00:12:36 -- fluctuating noise, in which there are periods
00:12:40 -- where the noise is really masking the syllable and other periods, which
00:12:46 -- I call the valleys, valleys of noise, in which the information can be recovered.
00:12:53 -- And this manipulation was done to examine whether the degree of deterioration
00:12:58 -- of the auditory information can influence audio visual integration in hearing adults.
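As a rough illustration of the two maskers being described, here is a short NumPy sketch: stationary noise has constant power, while the fluctuating masker is amplitude-modulated so that its level dips into "valleys" where the syllable can be recovered. The sample rate, modulation rate, and mixing scheme are illustrative assumptions, not the study's parameters.

```python
import numpy as np

SR = 16000        # samples per second (assumed)
DUR = 1.0         # masker duration in seconds
MOD_RATE = 4.0    # number of valleys per second (assumed)

t = np.arange(int(SR * DUR)) / SR
stationary = np.random.randn(t.size)   # constant-power, strongly masking

# Sinusoidal envelope: near 1 at the peaks, near 0 in the valleys.
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * MOD_RATE * t))
fluctuating = np.random.randn(t.size) * envelope

def add_masker(syllable: np.ndarray, masker: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a recorded syllable with a masker at a chosen signal-to-noise ratio."""
    gain = np.sqrt(np.mean(syllable**2) / np.mean(masker**2)) / (10 ** (snr_db / 20))
    return syllable + gain * masker
```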
00:13:04 -- So to recap: for normally hearing adults, two manipulations, a manipulation of noise
00:13:11 -- and a manipulation of visual information, and for children with cochlear implants,
00:13:17 -- only the manipulation of visual information.
00:13:33 -- So the first experiment is about hearing adults; well, you see the details: 40 subjects,
00:13:40 -- 20 in the visually clear group and 20 in the visual reduction group.
00:13:45 -- We presented these aba, ada, afa syllables in an auditory alone condition, meaning a still face
00:13:55 -- and just the auditory syllable plus noise;
00:14:00 -- an audio visual condition, meaning the face articulating the syllable, plus the sound.
00:14:08 -- Most of the stimuli were congruent, like seeing a face saying apa and hearing apa,
00:14:17 -- and some items, some trials, were incongruent, the McGurk trials,
00:14:22 -- like hearing aba and seeing the face saying aga.
00:14:29 -- All of these stimuli were randomly presented, so the subjects didn't know whether a trial was congruent
00:14:35 -- or incongruent, and they didn't know either whether the following stimulus would be auditory
00:14:42 -- or audio visual or visual, meaning just the movement
00:14:47 -- of the lips saying aba, aka, asa, without sound.
00:14:54 -- So there were voiced and voiceless consonants, and all these stimuli were divided
00:15:08 -- into several blocks, and regarding the McGurk stimuli we used the classical auditory
00:15:17 -- aba plus visual aga, expected fusion ada,
00:15:22 -- plus less frequent stimuli, like auditory asa, visual asha, and expected fusion asa, just
00:15:31 -- to have more material and to extend the McGurk illusions a little bit.
00:15:46 -- So here I want to explain to you what our expectations were.
00:15:53 -- For the visually clear group of normally hearing subjects, in the stationary noise,
00:16:03 -- where the syllables were really masked by the noise, we expected mainly visual responses
00:16:09 -- for the incongruent, the McGurk, stimuli.
00:16:12 -- So for aba combined with visual aga, we expect aga responses.
00:16:19 -- In the fluctuating noise, where it is possible to perceive some
00:16:25 -- of the phonetic information, we expected more fusion responses, the illusory perception,
00:16:31 -- and more auditory responses.
00:16:34 -- And here was the manipulation of the visual information:
00:16:41 -- when the visual information is reduced, generally speaking,
00:16:47 -- we expected a shift toward more auditory responses
00:16:53 -- and a decrease of the visual responses.
00:16:57 -- I hope this will be clearer when I present the data.
00:17:12 -- So here are the data, just for the auditory aba and visual aga and the given fusion ada.
00:17:19 -- They are expressed like this: the percentage
00:17:26 -- of auditory responses, aba, the percentage of visual responses, aga, and the percentage
00:17:32 -- of fusion responses; is that clear enough?
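For readers who want the metric spelled out, here is a minimal sketch of how these three percentages can be computed from raw McGurk-trial responses; the trial format and the example numbers are invented for illustration.

```python
from collections import Counter

def response_percentages(trials, auditory="aba", visual="aga", fusion="ada"):
    """trials: list of syllables a listener reported on McGurk trials."""
    counts = Counter(trials)
    n = len(trials)
    pct = lambda syllable: 100.0 * counts[syllable] / n
    return {
        "auditory %": pct(auditory),
        "visual %": pct(visual),
        "fusion %": pct(fusion),
        "other %": 100.0 - pct(auditory) - pct(visual) - pct(fusion),
    }

# e.g. 20 McGurk trials from one (hypothetical) listener:
print(response_percentages(["ada"] * 9 + ["aga"] * 7 + ["aba"] * 3 + ["ava"]))
```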
00:17:38 -- Yeah. So, in addition, you have two colors, blue
00:17:43 -- and pink: the blue is the visually clear presentation
00:17:50 -- and the pink is the visually reduced presentation.
00:17:54 -- Okay, and you have two graphs; this is a little bit complicated.
00:17:59 -- In the upper graph you have the percentages of the different types of responses
00:18:06 -- when the syllables were embedded in the stationary noise, the very masking noise,
00:18:12 -- where the auditory information cannot be perceived.
00:18:17 -- And in the bottom graph you have the same percentages of responses
00:18:22 -- with the fluctuating noise, where it is possible to pick up some phonetic information.
00:18:28 -- So to describe these results: with the very masking noise, in the visually clear
00:18:36 -- condition, you have mainly visual responses and a certain amount of fusions.
00:18:46 -- When you degrade the visual information you decrease the amount of visual responses
00:18:51 -- and you increase the amount of auditory responses and fusion responses.
00:18:58 -- When the noise is fluctuating, you have to compare this
00:19:01 -- and this: you have fewer visual responses, more fusions, a little bit of auditory responses.
00:19:09 -- And when the visual information is reduced, again,
00:19:14 -- there is a decrease of the visual responses, no significant modification of the fusions,
00:19:20 -- and an increase of auditory responses.
00:19:33 -- So even in normally hearing subjects you can modify the balance
00:19:39 -- between auditory information and visual information.
00:19:42 -- You can modify the pattern of responses by altering the degree of noise
00:19:51 -- and the quality of the visual information.
00:19:56 -- So the second part here is about the same technique of visual reduction, applied
00:20:04 -- to a group of deaf users of cochlear implants.
00:20:11 -- Oops. And [Inaudible] studied 21 children fitted with a cochlear implant; this is the mean age
00:20:31 -- of the children and the range of ages.
00:20:35 -- The mean age of deafness diagnosis in months and the range; the age of implantation in years, three years
00:20:44 -- on average, ranging between one and eight; and the age at which these children
00:20:52 -- or young adults were exposed to cued speech, that was the mean age and also the range
00:20:58 -- of first exposure to cued speech.
00:21:01 -- They were studied at a cued speech camp which takes place every summer somewhere in France.
00:21:11 -- Sorry. So, as these were children, we used fewer trials,
00:21:19 -- and we used only the voiceless consonants, with apa, afa, ata, asa, aka,
00:21:27 -- and asha presented in auditory, visual,
00:21:31 -- and audio visual modalities, with congruent and McGurk types of stimuli.
00:21:40 -- No noise was added, and they were tested with headphones in a quiet room;
00:21:47 -- certain blocks contained clear visual information
00:21:56 -- and other blocks contained degraded visual information.
00:21:59 -- And here is the percentage of correct responses in the auditory alone condition
00:22:18 -- and in the audio visual condition; the labels are in French, I'm sorry.
00:22:24 -- So you can see, in white, that the audio visual presentation leads to more correct responses
00:22:33 -- than the auditory alone condition.
00:22:37 -- In other words, they gain information from the presentation
00:22:44 -- of the visual information when this information is clearly accessible.
00:22:51 -- And here you have the same comparison between auditory alone perception of syllables
00:22:57 -- and audio visual, and you can see that the visual gain is lower
00:23:03 -- than in the clear condition, so the reduction
00:23:08 -- of the visual information has an effect on the visual gain.
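The "visual gain" being described is simply the audio visual score minus the auditory-alone score; the numbers below are made up for illustration, not the study's data.

```python
def visual_gain(av_correct_pct: float, a_correct_pct: float) -> float:
    """Improvement of audio visual over auditory-alone identification."""
    return av_correct_pct - a_correct_pct

print(visual_gain(av_correct_pct=90.0, a_correct_pct=75.0))  # clear video: 15.0
print(visual_gain(av_correct_pct=80.0, a_correct_pct=75.0))  # degraded video: 5.0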
00:23:24 -- That was for the stimuli presented in the auditory alone and audio visual conditions.
00:23:30 -- And now the McGurk stimuli: presented here are the auditory asa and visual asha,
00:23:38 -- which give the illusory response asa.
00:23:45 -- Notice that asha is really very salient from the point of view of visual perception,
00:23:56 -- because you have the lips going forward on asha.
00:24:03 -- So in the visually clear presentation you have a large amount of visual responses,
00:24:12 -- little fusion and few auditory responses, and when you reduce the visual information,
00:24:18 -- in red, you have a decrease of the visual responses and an increase of auditory responses
00:24:26 -- and of fusions, a little increase of fusion responses.
00:24:31 -- So they were sensitive to this visual degradation,
00:24:36 -- and when the visual information is degraded they shifted more towards the perception
00:24:42 -- of the auditory, or the combination of auditory and visual responses.
00:24:58 -- For the auditory apa and visual aka, which is supposed
00:25:03 -- to give the fusion response ata, the pattern is a little bit different,
00:25:10 -- because visual aka is not very visible; you see
00:25:17 -- that it's not a bilabial, but you don't see exactly what it is.
00:25:21 -- So in the visually clear condition you have a sharing between visual responses
00:25:26 -- and fusion responses, and few auditory responses.
00:25:29 -- And when the visual information is reduced, you reduce the amount of visual responses
00:25:37 -- and fusions and increase the auditory responses.
00:25:57 -- So this is simply a summary of what I have already described.
00:26:07 -- It indicates that these children with cochlear implants are able
00:26:16 -- to combine the cross-modal inputs not only when these two inputs are congruent,
00:26:24 -- like hearing aba and seeing aba, but also when they are incongruent.
00:26:29 -- In other words, audio visual integration can occur in cochlear implanted children.
00:26:46 -- And a modification of the visual information can modify the pattern
00:26:55 -- of audio visual integration of these cross-modal inputs.
00:27:02 -- They can give more weight to the auditory information
00:27:07 -- when the visual information is reduced or less available.
00:27:24 -- So, a general conclusion about this work: even if the cochlear implant gives very good results
00:27:34 -- on language development, on the contour of the voice, on speech perception in children,
00:27:44 -- our point is that audio visual integration is different in these children
00:27:50 -- than it is in normally hearing children.
00:27:54 -- Children with cochlear implants continue to rely more on visual information
00:28:01 -- in audio visual speech perception.
00:28:04 -- As I have shown, this reliance on speech reading leads them
00:28:15 -- to increase speech intelligibility
00:28:20 -- when the two modalities are congruent, but for the incongruent audio visual stimuli, no.
00:28:30 -- No, for -- excuse me, I'm a little bit lost here.
00:28:38 -- So, for congruent stimuli, visual information increases speech intelligibility,
00:28:44 -- and this visual gain is larger when the visual information is clear than when it is reduced.
00:28:52 -- But we have also shown that when the two dimensions, the auditory
00:28:58 -- and visual dimensions, are incompatible, in conflict, and when the visual image is reduced,
00:29:07 -- this induces a decreased reliance on visual information.
00:29:15 -- An important point which should be underlined is that this also shows
00:29:23 -- that when they are in an incongruent situation,
00:29:28 -- the auditory information is present somewhere in their brain,
00:29:34 -- but usually they are captured by the lip reading.
00:29:38 -- And if the lip reading is deteriorated, then they can rely more on the auditory information.
00:29:47 -- And in future experiments, which are in progress now, we try
00:29:53 -- to compare these cochlear implanted children with normally hearing children
00:30:00 -- who receive degraded speech, so we try to see whether, by giving degraded speech
00:30:06 -- to normally hearing children, we will find the same pattern
00:30:10 -- of results, the same labial result, that we observe in cochlear implanted children.
00:30:18 -- So are there any questions on this part of the talk?
00:30:25 -- Maybe it's better to
00:30:29 -- [Inaudible] because I don't know anything about the cochlear implant yet.
00:30:35 -- How do the implanted children perceive the sound if you ask them, with closed eyes, apa and aga;
00:30:45 -- do they hear different sounds, or do they hear different shades of these two sounds?
00:30:51 -- So what is the situation?
00:30:53 -- Do they have any ability to distinguish?
00:30:56 -- Yeah, yeah, of course, of course.
00:30:59 -- Well, their ability -- the answer is here, one answer: if you are in the situation
00:31:07 -- with auditory alone stimulation, I mean a still face, just the sound coming,
00:31:15 -- then for this relatively easy task, where you have the choice between six syllables,
00:31:21 -- you have performance
00:31:25 -- of between 70 and 80 percent.
00:31:30 -- So in some situations, when the visual
00:31:35 -- is not clear, they have reduced perception or reduced answers?
00:31:42 -- Usually, when you compare the auditory stimulation alone
00:31:49 -- with the audio visual, you have an increase, okay, that's the area in white,
00:31:56 -- but if the visual information is reduced, if you degrade it, so instead
00:32:02 -- of showing your face you show something which is more difficult
00:32:07 -- to perceive, then the gain, the increase in perception, is reduced as well.
00:32:16 -- So is it possible, then, to train them to rely
00:32:20 -- on audio information alone; is that possible?
00:32:26 -- I'm sorry?
00:32:26 -- That is a very good question.
00:32:29 -- Yes, I think it is possible, and people try to do that, and papers have been published
00:32:37 -- on that topic, but at the same time, if you think that there is a limitation
00:32:46 -- of the cochlear implant in transmitting fine auditory information, then --
00:32:51 -- you almost fell down -- then training also has a limitation, I think.
00:33:00 -- Okay, is there another question? Yes?
00:33:03 -- Do you think there's an influence of the kids learning cued speech on their use of the --
00:33:10 -- on their visual responses, and is there any research showing whether kids
00:33:14 -- who don't have any experience with cued speech at all
00:33:17 -- would have the same responses in your experiment?
00:33:24 -- Okay, that's -- I understand the question.
00:33:33 -- I would say, because cued speech is made of hand movements and speech reading,
00:33:39 -- it attracts the attention of the child to speech reading, but --
00:33:45 -- well, for this experiment, I mean with the visually clear information
00:33:50 -- and the visually reduced information, we didn't compare the cued speech group
00:33:54 -- to another, non cued speech group.
00:33:57 -- Perhaps I should say that we need to do that.
00:34:00 -- But in previous experiments we did make the comparison: in previous experiments
00:34:05 -- with the McGurk effect and other material, in other children, we had a comparison
00:34:11 -- between a cued speech group and a non cued speech group, and we had the same results.
00:34:15 -- I mean that children with cochlear implants are really responding
00:34:20 -- with the visual component in the conflicting stimuli.
00:34:25 -- Mainly responding with the visual component, just as
00:34:29 -- if the auditory component were absent in these conflicting stimuli.
00:34:35 -- And the work done by other people, like [Inaudible], who published [Inaudible],
00:34:43 -- and [Inaudible], also a French researcher, showed similar patterns; I mean,
00:34:50 -- with conflicting stimuli, McGurk stimuli, cochlear implanted patients respond immediately
00:34:57 -- with the lip reading component, and the subjects tested by him
00:35:03 -- or by [Inaudible] were not cued speech users, so I don't think that this is crucial,
00:35:12 -- but the question is obvious, I agree, I agree.
00:35:15 -- And just to sort of build off of that, I was curious about whether your research
00:35:29 -- indicated anything regarding age of implantation and reliance
00:35:33 -- on the visual: did it still stay as strong, or do those children that got implanted earlier
00:35:40 -- lean more toward the auditory?
00:35:41 -- Yes, up to now we didn't find any effect of age
00:35:48 -- at implantation, so I didn't report on this.
00:35:53 -- We tried to make a comparison between two groups, those implanted before three years
00:35:58 -- of age and those implanted after, but we didn't find any difference.
00:36:02 -- But in the literature, well, this is the only result mentioning a difference,
00:36:09 -- between those implanted earlier than 13 months, who experience fusions (and only part
00:36:16 -- of the children implanted early show fusions, that's not all the children),
00:36:22 -- and those implanted later, who do not.
00:36:25 -- So we were unable to replicate that, and I have no explanation for it,
00:36:33 -- but the question is really understandable too.
00:36:41 -- Shall I continue?
00:36:42 -- Yeah? Okay so the second part.
00:36:51 -- As I said, it's work done in our laboratory; it's about how the brain
00:37:02 -- of native cued speech users processes the cued speech information. This is really recent work
00:37:10 -- which is in progress; it's not yet finished.
00:37:14 -- And it is done by the researcher Mathew [Inaudible].
00:37:18 -- Hello. As you probably know, 40 years ago, a little bit more
00:37:43 -- than 40 years ago, Orin Cornett invented the cued speech system, and this is the French version
00:37:50 -- of the cued speech system; it is not exactly the same
00:37:57 -- as English cued speech, but it is based on the same principle:
00:38:01 -- to disambiguate lip reading, the mouth movements which are ambiguous,
00:38:08 -- by adding hand gestures which are made of a hand configuration
00:38:16 -- and a position around the mouth.
00:38:20 -- And you probably know that syllables which are difficult to distinguish
00:38:30 -- by lip reading are accompanied by different hand gestures, like pu, bu, and ma.
00:38:37 -- And the same hand gesture, the same configuration, is used to convey information
00:38:45 -- about syllables that are clearly distinguishable on the lips, like ba, da,
00:38:49 -- and ja: this is for ba, da, and ja.
00:38:54 -- So the combination of lip reading and the hand gestures
00:38:58 -- and their positions delivers completely accurate information
00:39:04 -- about the phonological structure of syllables and phonemes, in French as in English, as
00:39:13 -- in Spanish or Portuguese or Italian, and so on.
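To illustrate why decoding necessarily integrates the two sources, here is a toy sketch in which each cue alone is ambiguous and only their intersection identifies the consonant. The handshape groupings are loosely based on the American cued speech chart and simplified; treat the exact groupings as assumptions, since the French system differs in detail.

```python
# Consonants that share one mouth shape (indistinguishable on the lips).
LIPSHAPE = {
    "bilabial": {"p", "b", "m"},
}

# Consonants that share one hand configuration (indistinguishable by hand).
HANDSHAPE = {
    1: {"d", "p", "zh"},
    4: {"b", "n", "wh"},
    5: {"m", "t", "f"},
}

def decode(lip_group: str, handshape: int) -> set:
    """Intersect the two individually ambiguous cues to recover the consonant."""
    return LIPSHAPE[lip_group] & HANDSHAPE[handshape]

print(decode("bilabial", 1))  # {'p'} -- each cue alone was ambiguous
print(decode("bilabial", 4))  # {'b'}
print(decode("bilabial", 5))  # {'m'}
```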
00:39:32 -- So one advantage of cued -- well, one interest of cued speech is
00:39:37 -- that the same information is conveyed in the visual modality as the information we get
00:39:45 -- through the auditory modality, so it allows a comparison between the visual and auditory modalities
00:39:54 -- at the same time as a comparison between normally hearing and deaf persons.
00:40:11 -- So cued speech is another modality to convey the same information as the spoken language,
00:40:20 -- but there are some differences, of course:
00:40:23 -- it doesn't require sound; that's a point well made by Fleetwood and Metzger.
00:40:30 -- It has its own set of articulators: mouth shape, hand shape, and position.
00:40:36 -- And the third point is also important: it necessarily requires integration
00:40:42 -- of the information delivered by the mouth shape and by the hand shape,
00:40:46 -- and that's a little bit different from audio visual, because in audio you have,
00:40:51 -- for the normally hearing, the complete information through the auditory channel,
00:40:58 -- and audio visual integration, we know that it is irrepressible, automatic,
00:41:03 -- but it's not necessary in a -- well.
00:41:06 -- On the other hand, here in cued speech each piece of information is by itself ambiguous,
00:41:16 -- so it necessarily requires integration.
00:41:20 -- So that makes cued speech an interesting object of research, at least for us,
00:41:30 -- and in our previous research we have shown that early cued speech users have similar skills
00:41:46 -- to the normally hearing in speech perception, in the development
00:41:52 -- of phonological representations, including phonological awareness,
00:41:58 -- in reading and spelling achievement and mechanisms, and in the development of morphosyntax.
00:42:06 -- And we have also shown that the left hemisphere is predominant
00:42:13 -- for processing cued speech, at least in early users.
00:42:19 -- And for a long time we have had the desire to examine how the cued speech information
00:42:39 -- is processed by the brain, because it is similar to spoken information
00:42:46 -- but in the visual modality, without the auditory stimulation.
00:42:51 -- And so we had the opportunity to run such an experiment with the fMRI technique,
00:42:59 -- because [Inaudible], a doctor and researcher who was trained in fMRI, could collaborate
00:43:06 -- with us, and also [Inaudible], who is really skilled in fMRI.
00:43:12 -- So to put this experiment a little bit in perspective, in context, I will summarize what we know
00:43:34 -- about how the brain processes visible speech.
00:43:39 -- First, for hearing participants, where there are a lot of studies,
00:43:43 -- maybe the seminal study is the one by Calvert and collaborators in 1997, in which they observed
00:43:55 -- that speech reading by hearing participants, speech reading of numbers from one to nine
00:44:02 -- by hearing participants in the scanner, activates areas of the primary visual cortex and also areas
00:44:11 -- of the primary and secondary auditory cortices.
00:44:17 -- And they concluded that visible speech could activate cortical networks considered
00:44:23 -- to be dedicated to amodal auditory processing.
00:44:37 -- And here is the famous picture of the brain: this is the activation related
00:44:44 -- to lip reading, here occipital and going into the temporal lobe,
00:44:50 -- and in yellow you have the activation common to lip reading and auditory speech,
00:44:57 -- which is located in the primary auditory cortex.
00:45:14 -- Another result, one that I like very much, is the demonstration by [Inaudible] and collaborators
00:45:21 -- that there is a correlation between the activation
00:45:25 -- of the superior temporal [Inaudible] that we just saw
00:45:28 -- and the speech reading scores of individual normally hearing subjects,
00:45:34 -- at least a correlation with the activation related
00:45:40 -- to the left hemisphere, to the left language areas.
00:45:55 -- So there is individual variation in hearing subjects.
00:46:01 -- And what do we know about the brain of deaf participants
00:46:06 -- faced with visible speech, with speech reading?
00:46:09 -- Well, I mention here two studies. The first one, by [Inaudible] and collaborators, found
00:46:16 -- that in the deaf brain there is activation in the temporal lobe related
00:46:23 -- to speech reading, with the same task of identifying numbers from one to nine,
00:46:31 -- but these activations are more dispersed and less intense than those exhibited by hearing people.
00:46:40 -- So they concluded that -- where is the conclusion?
00:46:47 -- They concluded that maybe hearing is necessary to have a coherent network of activation
00:46:55 -- for speech reading in the temporal lobe, but this conclusion was contradicted, or at least --
00:47:03 -- I don't find the word in English.
00:47:17 -- Made a little bit different, let's say, by the Capek and collaborators study,
00:47:24 -- in which they found, just to go rapidly to the result, greater activation
00:47:37 -- in the left medial and posterior portions of the superior temporal gyrus
00:47:43 -- for deaf than for hearing participants.
00:47:46 -- So they found that deaf participants show greater activation
00:47:51 -- of the temporal language areas in response to speech reading.
00:47:57 -- And their task was different; it was a detection task, so they were presented in the scanner
00:48:04 -- with a list of words that they had to speech-read, and they had to detect a word,
00:48:11 -- to push a button when the word "yes" was presented.
00:48:15 -- And the baseline was also different: it was a gray fixation cross on the screen
00:48:20 -- which becomes red, so a very basic baseline.
00:48:39 -- The deaf were better at speech reading, but even taking this
00:48:44 -- into account, the activation was greater in the deaf than in the hearing.
00:48:52 -- And so the conclusion was completely different: they say that if the superior temporal cortex is not used
00:48:59 -- to process auditory speech, it could be recruited to process visual speech, even more
00:49:06 -- than in hearing participants, for whom perception of spoken language is audio visual.
00:49:12 -- So between the old study of 2001 and the new study of 2008 there is a difference
00:49:20 -- in the results and a difference in the conclusions for deaf participants.
00:49:39 -- So we -- we would like to know, with our deaf participants
00:49:46 -- and our cued speech stimuli, what regions
00:49:51 -- of the brain are activated by the cued speech stimuli.
00:49:55 -- And we would like to compare that with the activation related to the processing
00:50:01 -- of the same words in audio visual speech by hearing participants, of course,
00:50:06 -- because it is the same content, the same phonology, but on one hand you have hearing
00:50:13 -- and on the other hand just the visual modality.
00:50:17 -- We also want to compare the activation related to speech reading alone
00:50:21 -- in deaf and hearing participants.
00:50:23 -- And I'm also interested to explore the activation related to the processing
00:50:29 -- of cues alone, presented without speech reading. And an additional question, one that many of us
00:50:36 -- ask, is to identify the site of integration, possibly the site of integration
00:50:43 -- of manual and speech-read information in the cued speech user.
00:51:02 -- I won't give a reply to all of this; as I said, it's a work in progress,
00:51:08 -- and for the moment we have scanned 15 hearing participants with audio visual speech,
00:51:16 -- and nine -- well, I will present the data of nine deaf users of French cued speech.
00:51:23 -- We have five more now that are not yet in the data.
00:51:30 -- It is one single experiment, I mean the stimuli of the different conditions, audio visual,
00:51:39 -- audio alone, visual alone, and control, for the hearing participants are just randomly mixed,
00:51:46 -- and similarly for the stimuli in the deaf conditions.
00:51:51 -- And the participants in the scanner had to look at a fixation cross,
00:51:57 -- and then there was a clip, a video clip, lasting two seconds, and then a question mark,
00:52:04 -- and they had to detect -- it was the same task as in the Capek experiment --
00:52:08 -- they had to detect a target, and the target here is tapa, a very unusual sort of word,
00:52:16 -- and the word was presented auditorily or in lip reading or audio visually
00:52:21 -- or in cued speech or only in cues, and so on.
00:52:26 -- And the control task was also inspired
00:52:29 -- by the Capek experiment: they had to detect a small red circle.
00:52:44 -- So we have all of the conditions: the hearing participants
00:52:48 -- passed the experiment with audio visual stimuli, audio alone, visual alone,
00:52:55 -- and control, meaning with the red circle.
00:52:59 -- The deaf cued speech users passed the experiment with cued speech stimuli, the mouth and the hand,
00:53:07 -- or the cues only, or the speech reading only, and the control.
00:53:12 -- And we also had a group of signers, but I won't report the data of the signers today.
00:53:21 -- So here is an example of -- it's cued speech, I mean lip reading plus the hand.
00:53:36 -- [French Translation] In French. Here is the lip reading alone.
00:53:44 -- And here you should have the cues alone, always for the same word,
00:53:53 -- and here you should have an example of the control condition with the red circle.
00:53:58 -- In addition, we designed a test of lip reading that we presented outside
00:54:06 -- of the scanner: we presented sentences in lip reading, in speech reading.
00:54:14 -- For those who know French: did anybody understand something now?
00:54:24 -- It is
00:54:24 -- Can we see it again?
00:54:31 -- It is a [French Translation] a [French Translation] is hiding against
00:54:37 -- and the participants have to choose between four images picture this one
00:54:43 -- so this is an easy condition between because a [French Translation] the other three images has
00:54:50 -- nothing similar neither with the word cash [French Translation].
00:54:54 -- Neither is the word [French Translation] so there is no similarity.
00:54:59 -- And this is an example of the difficult condition.
00:55:04 -- No. No, no, no.
00:55:06 -- It is [French Translation], and again, they have to choose between these four images; this is the right one,
00:55:17 -- but it's difficult because [French Translation] are similar on the lips.
00:55:26 -- So it's difficult to discriminate between this one and this one, and here is one of the words
00:55:39 -- of the sentence, [French Translation], and one of the similar words
00:55:45 -- in lip reading, [French Translation].
00:55:48 -- So all of this functions quite well: the performance for the easy items is better
00:56:00 -- than for the difficult items, and so on.
00:56:05 -- So, by chance more or less, the hearing and the cued speech participants have the same age
00:56:13 -- on average, and the cued speech participants are better at speech reading in our test
00:56:21 -- than the hearing participants, which is classical in several studies in the literature.
00:56:28 -- Here I show you the details for the cued speech participants in terms of age and their performance
00:56:35 -- in the speech reading test; you see that there is little variation, the lowest one is at 66
00:56:42 -- percent, but otherwise it's 80, 90, 70.
00:56:48 -- Onset of deafness was early, and they were exposed early, I mean at one or two or three years,
00:56:58 -- to cued speech, and quite early as well to hearing aids.
00:57:05 -- So all were good at lip reading.
00:57:07 -- Here, it's just to show you that all of the participants were attentive: in this situation
00:57:17 -- they had to detect eight targets, and the omission rate was between zero
00:57:24 -- and ten percent depending on the condition.
00:57:28 -- Ten percent represents one target, so they were in the scanner
00:57:33 -- but they were not sleeping; they were really detecting the targets accurately,
00:57:38 -- and they didn't make many false alarms.
00:57:52 -- Here are the imaging parameters and the data analysis, but I must confess
00:57:58 -- that this is [Inaudible]'s work and not mine, and I'm not a specialist like [Inaudible]; this is just
00:58:05 -- for those who know these things.
00:58:18 -- And here I present some results, but in a descriptive manner; all
00:58:26 -- of the analyses have not been done yet, and this is the probability of activation,
00:58:36 -- but it is not corrected, so it's not the way it should be presented finally.
00:58:44 -- So, for the activation in hearing participants,
00:58:48 -- for the audio visual condition minus the control condition --
00:58:51 -- and the control condition, I remind you, is just a still face
00:58:58 -- with the red circle -- these activations are related to moving lips, hearing sound,
00:59:07 -- and integration, compared to the control condition, which is a very easy baseline.
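For orientation, here is a minimal sketch of this kind of first-level condition-minus-control contrast using the open-source nilearn library; it is not the authors' pipeline, and the file name, TR, event timings, and threshold are illustrative assumptions.

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel
from nilearn import plotting

# Each two-second clip is one event, labelled by its condition.
events = pd.DataFrame({
    "onset":      [10.0, 24.0, 38.0, 52.0],   # seconds (made up)
    "duration":   [2.0, 2.0, 2.0, 2.0],
    "trial_type": ["audiovisual", "control", "audiovisual", "control"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="glover")
model = model.fit("subject01_bold.nii.gz", events=events)

# "audiovisual - control": activation explained by moving lips, sound,
# and their integration, over the still-face-plus-red-circle baseline.
z_map = model.compute_contrast("audiovisual - control", output_type="z_score")
plotting.plot_stat_map(z_map, threshold=3.1, title="AV minus control")
```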
00:59:15 -- And so we have here an activation of the temporal cortex bilaterally, including the superior
00:59:26 -- temporal gyrus, overlapping with the [Inaudible] area
00:59:32 -- and extending more posteriorly toward the visual MT/V5 area.
00:59:41 -- In audio visual we have activation of the primary auditory cortex, namely,
00:59:52 -- in the left hemisphere, Heschl's gyrus.
00:59:57 -- And also an activation, which is classical in this kind of situation,
01:00:02 -- of the inferior frontal gyrus, the Broca area.
01:00:06 -- So we have a network connecting the perceptual and motor regions
01:00:14 -- in normally hearing participants.
01:00:28 -- These are the results for the cued speech participants. You see
01:00:32 -- that the activation is much more reduced, less extended, and I think that there are two reasons
01:00:39 -- for that, for the moment: the first is that the sample of participants is limited, and the second is
01:00:46 -- that there is a first analysis for each brain
01:00:52 -- and then a common analysis, a second-level analysis,
01:00:57 -- which extracts the activation common to the different individual brains.
01:01:03 -- And there is, and I will show you, a large variability
01:01:08 -- between the different cued speech participants,
01:01:12 -- so that might be a reason why these activations are so limited.
01:01:18 -- But anyway, we have activation in the middle temporal gyrus and inferior temporal gyrus,
01:01:27 -- and also here in MT/V5 bilaterally, but this [Inaudible], I would say that it is more extended
01:01:35 -- in the left than in the right hemisphere, but that has not been tested up to now.
01:01:45 -- Also the posterior part of the superior temporal gyrus is activated, on the left side at least;
01:01:53 -- that could be a site of integration between the mouth movements and the hand shapes.
01:02:02 -- What also strikes me is that there is no activation
01:02:07 -- of the primary auditory cortex; well, it could seem obvious because no sound is presented,
01:02:13 -- but no activation of the primary auditory cortex -- bear in mind that the data
01:02:18 -- of Capek show large activation of the auditory cortex for speech reading
01:02:24 -- in deaf individuals -- so we have to think about that.
01:02:28 -- But I think that there are reasons why there is no activation,
01:02:33 -- from behavioral data, on listening, and other studies on short-term memory.
01:02:40 -- And no activation in the motor regions: in audio visual, the Broca area was activated
01:02:47 -- quite clearly in the left hemisphere,
01:02:53 -- and here we don't have anything in the motor areas.
01:03:07 -- But as I said, there is variability, so these are the images of each of the participants,
01:03:14 -- of the nine participants. As you can see, there are participants with large activations,
01:03:21 -- large clusters of activation in the superior --
01:03:27 -- the superior temporal gyrus and even in the inferior frontal,
01:03:34 -- and other participants with little activation.
01:03:38 -- So one hope is to capture this variability, but we are not sure yet.
01:03:51 -- We made a control analysis looking at the inverse activations, where the control condition
01:03:58 -- activates certain areas more than the experimental condition,
01:04:05 -- but this does not explain the lack of activation that I mentioned before.
01:04:21 -- Now, for speech reading in hearing participants: we have something very close
01:04:29 -- to audio visual speech, albeit with less activation, but activation here in the temporal lobe
01:04:39 -- and in the inferior frontal gyrus.
01:04:42 -- And also something maybe more activated here; all of these comparisons are still to be done.
01:04:52 -- So in speech reading these are classical results, I mean the activation
01:05:01 -- in the Broca area, found in several studies, and sometimes it is interpreted
01:05:07 -- as the hearing people vocally rehearsing the syllables or the words presented,
01:05:16 -- like a kind of internal speech, and by other people it is interpreted
01:05:21 -- as maybe reflecting the activity of the mirror neurons.
01:05:32 -- Voilà, that's what I see.
01:05:36 -- And for visual [Inaudible], activation which is confined to the left hemisphere,
01:05:50 -- so indicating perhaps linguistic processing.
01:05:56 -- Again, no activation of Broca and, again, a lot of variability. And finally,
01:06:23 -- for the auditory condition, for hearing participants, we have
01:06:29 -- again the temporal cortex bilaterally, going to the [Inaudible] area,
01:06:40 -- and also the primary auditory cortex, which is expected.
01:06:47 -- And also the Broca area here in the left hemisphere.
01:06:58 -- And for the cues alone: again there is this first activation, but more present
01:07:09 -- in the left hemisphere than on the right side; MT/V5 is activated bilaterally,
01:07:17 -- and again more extended on the left side; no activation of Broca,
01:07:25 -- and no activation of the auditory cortex, but also variability, as in the other two conditions.
01:07:53 -- So for the moment these data could be related to the following ideas:
01:08:03 -- that for normally hearing participants, audio visual speech perception involves a network
01:08:10 -- connecting perceptual and motor regions, as described
01:08:14 -- by others, involving the superior temporal gyrus, the Broca area on the left,
01:08:21 -- and with an extension toward the visual MT area on the left, which could be a site
01:08:31 -- of cross-modal combination of linguistic information.
01:08:35 -- And also that speech reading in the normally hearing involves similar activation
01:08:42 -- to audio visual, including the Broca area, as I mentioned,
01:08:49 -- and maybe MT/V5, which is activated bilaterally.
01:08:55 -- I will go back to that, because I didn't say anything about it.
01:09:00 -- And maybe this activation of this area in the speech reading condition,
01:09:10 -- which is a little bit greater than in the audio visual condition, is related to the difficulty
01:09:16 -- of the task, because, as we have said, the mouth of the speaker is quite small,
01:09:22 -- and without sound it's difficult to perceive the words, and at
01:09:31 -- the same time it's necessary to perceive the words, because they are instructed
01:09:35 -- to push the button, and we only kept the subjects who were good detectors.
01:09:45 -- And so maybe here it is a question of visual attention, which is more sustained
01:09:50 -- in this condition than in the audio visual condition or in the audio condition.
01:10:07 -- And for cued speech perception by deaf cued speech users, what we have found up to now, if I try
01:10:17 -- to summarize: small activations in the left and in the right hemisphere, and some
01:10:22 -- of the language areas seem to be activated, the middle temporal gyrus
01:10:29 -- and inferior temporal gyrus and the posterior part of the superior temporal gyrus.
01:10:36 -- Maybe this area is a binding area for mouth shape and hand shape,
01:10:47 -- and this has to be clarified and tested by contrasting patterns of activation.
01:10:55 -- What I find interesting also is the activation of MT/V5 and the fact
01:11:00 -- that it seems more important in the left than in the right hemisphere, and I made a link
01:11:09 -- with the results from [Inaudible], who said
01:11:15 -- that signing experience, whether the people are deaf signers or hearing signers,
01:11:23 -- induces a left hemisphere lateralization
01:11:31 -- of MT/V5.
01:11:33 -- So it's more the left hemisphere part of this area which is recruited in the detection of movement,
01:11:41 -- because in sign language movement is associated with linguistic processing, and maybe
01:11:47 -- a similar phenomenon could occur for cueing.
01:11:52 -- Well, I have already commented on this: no activation of the primary auditory cortex,
01:12:01 -- which is perhaps just obvious because there is no sound presented, and no activation
01:12:12 -- of the Broca area, which is more surprising for me, but maybe it's a question of the [Inaudible]
01:12:19 -- of the participants.
01:12:30 -- Indeed, we have a certain variability in the speech reading abilities,
01:12:37 -- but as I showed, it was quite limited, because all were good speech readers; well,
01:12:44 -- our test of speech reading worked quite well: the easy items provoked more correct responses
01:12:49 -- than the difficult items, and the deaf were better than the hearing.
01:12:53 -- So, for the moment, we don't find a correlation
01:13:01 -- between speech reading abilities
01:13:04 -- and the activation of the language areas.
01:13:08 -- And maybe also this variability could be related to the variability of their production abilities:
01:13:17 -- some of them have intelligible speech, others don't,
01:13:22 -- but we have no measure of that for the moment.
01:13:28 -- So this work is to be continued.
01:13:31 -- And now I have finished, and I thank you for your attention.
01:13:40 -- [Applause] Are there any questions?
01:13:42 -- Thank you, Jacqueline, that was really interesting.
01:13:50 -- Thank you.
01:13:50 -- I have a lot of questions, but my main comment is really that I'm really surprised how strong your
01:13:57 -- activation in MT/V5 is; besides the work by [Inaudible], where they show these differences
01:14:05 -- in MT but in response to visual motion perception,
01:14:08 -- you are showing it in response to lip reading.
01:14:12 -- So I think that's really very interesting, and I also noticed
01:14:15 -- that when you show your single-subject data you see it in every one
01:14:18 -- of your subjects; it seems to be a very robust phenomenon.
01:14:21 -- Yeah, exactly.
01:14:23 -- Do your users of cued speech use any sign languages
01:14:27 -- in addition to being users of cued speech?
01:14:30 -- Yeah, yeah.
01:14:31 -- That's true.
01:14:32 -- They do?
01:14:32 -- Yeah, some of them do, not all.
01:14:35 -- Yeah, I should add a column to my description sheet of the participants, because all
01:14:41 -- of them have been raised with cued speech at an early age, but now some of them are communicating
01:14:46 -- in sign language, and that's a good point.
01:14:48 -- Well, that's what I'm wondering, because you have an opportunity
01:14:51 -- to [Inaudible]. My other question is whether you have a group of hearing participants
01:14:56 -- who use cued speech that you could also --
01:14:58 -- That is planned, but not yet done.
01:15:00 -- Yeah, because then you really have a chance to pry these apart
01:15:04 -- and see what's responsible for which changes.
01:15:07 -- But it's really interesting how strong the MT response is.
01:15:12 -- Yeah, that's the new part, the new result, I think.
01:15:18 -- Yes, definitely yeah.
01:15:22 -- Thank you.
01:15:24 -- A quick question to clarify your control group: you said you have a red dot; can you describe
01:15:34 -- what that was for -- was it on the mouth, or what was it for?
01:15:39 -- This is a control task: in fMRI, to compare the activation
01:15:46 -- in an experimental condition to the activation in a control condition, you have
01:15:52 -- to make a decision about what your control condition will be.
01:15:56 -- And here we decided to use as a control condition the static face, but in order
01:16:02 -- to ensure that the people would look at this static face
01:16:07 -- and that the brain would be activated by the visual information provided by the face,
01:16:13 -- we added a detection task, which was to push the button when this red circle appears.
01:16:21 -- So for example, you have ten control trials, and in one
01:16:28 -- of these ten the red circle appears and the participant has to push the button;
01:16:34 -- so it's not systematic: in the controls, from time to time, the red circle appears.
01:16:39 -- Just to see that they look at the face.
01:16:42 -- Yeah, yeah, yeah.
01:16:43 -- Thank you.
01:16:43 -- And for example I was in the scanner but I didn't detect this red circle.
01:16:50 -- And so I was [Laughing].
01:16:51 -- I did all of this for nothing, just to experience the situation.
01:17:00 -- [Laughing] Which I found terrible by the way.
01:17:04 -- [Laughing] Terrified.
01:17:08 -- Other questions?
01:17:17 -- Thank you
01:17:18 -- Thank you.
01:17:21 -- [ Background Talking ]
01:17:29 -- If you want to continue the conversation we welcome you to join us over there.
01:17:33 -- Also there are evaluation forms as always please fill them out and we will collect those.
01:17:40 -- And I thank you all for your attendance.
01:17:42 -- Good.
01:17:43 -- Yeah, thank you.
01:17:44 -- [Applause]