00:00:26 -- Good afternoon everyone.
00:00:28 -- I've got notes in one hand and a microphone in the other, so I'm going to borrow some hands
00:00:36 -- from someone else to sign for me today.
00:00:38 -- I'm thrilled to see so many people here for today's lecture, the first lecture
00:00:44 -- of the spring semester for the VL2 presentation series.
00:00:48 -- And today we are in for a big treat because we have Dr. Rachel Mayberry from the University
00:00:53 -- of California, San Diego to present to us.
00:00:58 -- She is, or at least her work is known to many of you.
00:01:02 -- So she doesn't need much in the way of introduction.
00:01:04 -- But she deserves a few words of introduction for those of you who don't know her.
00:01:11 -- She's worked a long time and been a respected linguist and researcher in the field
00:01:15 -- of language acquisition for children who are deaf.
00:01:21 -- And she's also interested in literacy and the acquisition of reading and literacy skills.
00:01:26 -- So she's really a good match for the agenda of VL2, which of course is interested
00:01:32 -- in understanding acquisition of visual languages
00:01:35 -- and its relationship to the development of literacy.
00:01:38 -- So we are very interested in hearing what she has to say.
00:01:42 -- Rachel is an affiliated researcher with VL2.
00:01:46 -- She's working on two projects.
00:01:49 -- She's conducted a meta analysis of all the literature related to the role
00:01:54 -- of phonological awareness among deaf readers.
00:01:58 -- A meta analysis is just a fancy way of saying you do statistics on statistics
00:02:03 -- that have been reported in other papers,
00:02:06 -- and try to come up with an overall statement about what
00:02:10 -- the body of literature has to say about a particular phenomenon of interest.
00:02:14 -- And she may talk a little bit about that today.
00:02:18 -- Her second study that she's doing with Amy Lieberman
00:02:21 -- at UCSD has to do with chronicling what parents do with their deaf kids as they're reading
00:02:29 -- to them, and the relationship of some of these very early literacy strategies
00:02:35 -- that parents use to the child's developing literacy skills,
00:02:39 -- and maybe she'll talk a little bit about that today as well.
00:02:42 -- She received her PhD from McGill University where she was on the faculty for many years.
00:02:46 -- She served as the director of the school of communication sciences
00:02:51 -- and disorders in the faculty of medicine.
00:02:53 -- And she was the founding member of the center for research on language, mind and the brain.
00:02:58 -- And also the interdisciplinary doctoral program on language acquisition.
00:03:01 -- She's been at UCSD for four years, I got that right.
00:03:06 -- So four years.
00:03:08 -- And where she's continued her work doing psycholinguistic experiments
00:03:15 -- on sign language processing, and neuroimaging studies of sign language
00:03:20 -- in early and late learners of sign.
00:03:23 -- And also the topic which I alluded to earlier, the relationship of
00:03:27 -- the mastery of reading to the mastery of ASL.
00:03:33 -- And whether or not reading can be thought of as a unimodal visual experience for deaf kids,
00:03:39 -- and what relationship that might have to their signing ability.
00:03:43 -- So with that brief introduction, I'm going to turn it over to her. Take it away.
00:03:50 -- [ Shuffling sounds ]
00:03:57 -- [ Silence ]
00:04:05 -- Rachel: Thank you.
00:04:06 -- I was planning on giving my presentation in English, however, researchers in my lab
00:04:16 -- and people here at Gallaudet have convinced me
00:04:19 -- to give my presentation in ASL, so I'll try my best.
00:04:23 -- [ Silence ]
00:04:28 -- I'd like to start by thanking VL2 for inviting me to present.
00:04:34 -- It is an honor to be able to share my research with you.
00:04:41 -- Really I should say our research, because I'm referring to several people in my lab,
00:04:46 -- both current and previous graduate students and researchers.
00:04:51 -- [ Silence ]
00:05:05 -- We'll be talking today about reading development in deaf children.
00:05:11 -- This is a serious issue, particularly because research shows that the median reading level
00:05:19 -- of deaf children, or deaf people rather, is at a third or fourth grade level.
00:05:28 -- One issue with that statistic is that people often forget it is a median number
00:05:33 -- and not an average.
00:05:37 -- So that statistic is often misinterpreted as stating
00:05:41 -- that most deaf people cannot read, but that's not the case.
00:05:46 -- 50 percent of deaf people read at the third or fourth grade level or below, but the other half read above that.
00:05:57 -- When you start to look at the question of why it is that deaf people have difficulties
00:06:02 -- with reading, finding an answer becomes very difficult.
00:06:08 -- One hypothesis about why deaf people do not read well lies in phonological coding.
00:06:15 -- [ Silence ]
00:06:21 -- Some researchers hypothesize that you have to be able
00:06:25 -- to code phonology in order to be able to read.
00:06:31 -- Therefore, if a person does not have the ability to hear or speak, the conclusion would be drawn
00:06:40 -- that they are not able to phonetically code while they are reading.
00:06:45 -- [ Silence ]
00:06:51 -- So the theory then is that deaf children are not able to learn to read well
00:06:56 -- because they're not able to phonologically code the text.
00:06:59 -- Oh I don't know why my slide just advanced on its own.
00:07:05 -- [ Silence ]
00:07:15 -- Given these questions, I'll be looking at three primary topics.
00:07:21 -- One will address whether or not phonological coding is the reason
00:07:27 -- that deaf people have difficulties reading, and if it's not, what other reasons there might be.
00:07:35 -- [ Silence ]
00:07:40 -- I'll start by explaining how we measure phonological coding
00:07:43 -- in readers, both deaf and hearing.
00:07:49 -- A second large study that was conducted under the support of VL2 was a meta analysis
00:08:00 -- which included an analysis of several other studies of deaf and hearing readers.
00:08:09 -- Finally, I'll be talking about a study that was conducted in Montreal with deaf adults
00:08:19 -- which looked at the possibility of other factors that foster reading skills.
00:08:25 -- [ Silence ]
00:08:32 -- When researchers talk about phonological coding, we need to look at what exactly they mean.
00:08:43 -- They're addressing the relationship between orthography
00:08:47 -- which is written text, and the sounds of speech.
00:08:56 -- Different models have been developed to explain this relationship between print and reading.
00:09:03 -- [ Silence ]
00:09:13 -- This model is called the mediated model.
00:09:16 -- It starts with the idea that you can not extract the meaning of a word in print directly.
00:09:27 -- So you would have to understand print through the sounds that you know are represented
00:09:36 -- by these letters, through sounding it out, then you can access the meaning of this printed word.
00:09:45 -- Many researchers think that this model explains how young hearing children learn to read.
00:09:52 -- This model may be helpful for young children who already have a spoken language.
00:10:02 -- This may explain how they encounter text and how they access meanings,
00:10:09 -- how they convert these printed words into something
00:10:14 -- that connects with their spoken language.
00:10:19 -- For hearing children this includes a lot of self teaching and learning how to connect the sounds
00:10:26 -- of the speech that they already know to the letters of the printed word.
00:10:32 -- These letters would be new for them, this is a secondary system
00:10:35 -- and something that they've not seen.
00:10:39 -- As children read more, they don't need to convert all of the letters of print
00:10:46 -- into sounds, but rather they can access meaning directly.
00:10:52 -- This is where we have a model called the direct route.
00:11:00 -- This would be an example of sight recognition where you see the word cat, for example,
00:11:08 -- printed and you immediately connect it with the animal cat, without having to sound
00:11:13 -- out the individual letters or sounds of the word cat.
00:11:17 -- [ Silence ]
00:11:22 -- The direct route can be used with words that are already familiar.
00:11:31 -- But when a hearing reader encounters a novel word,
00:11:35 -- or something that they have not seen before, they might revert to the mediated model
00:11:41 -- and use their phonology to figure out the meaning of a word.
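The mediated and direct routes she describes can be sketched as a toy program. To be clear, the sight vocabulary, letter-sound rules, and function below are invented for illustration only; they are not taken from any model in the talk.

```python
# Toy sketch of the two routes described above. The sight vocabulary and
# letter-sound rules are invented for illustration.

SIGHT_VOCAB = {"cat": "the animal cat"}                    # direct route: known words
LETTER_SOUNDS = {"c": "k", "a": "ae", "t": "t", "m": "m"}  # mediated route

def read_word(word):
    """Return (route used, what the reader gets) for a printed word."""
    if word in SIGHT_VOCAB:
        return "direct", SIGHT_VOCAB[word]       # meaning accessed immediately
    # Novel word: fall back to sounding it out letter by letter,
    # then match the sound against spoken vocabulary.
    sounds = [LETTER_SOUNDS.get(ch, "?") for ch in word]
    return "mediated", "-".join(sounds)

print(read_word("cat"))  # familiar word: direct route
print(read_word("mat"))  # novel word: sounded out via the mediated route
```

A familiar word is recognized at sight; an unfamiliar one falls back to phonological decoding, which is exactly the reversion to the mediated model described above.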
00:11:44 -- [ Silence ]
00:11:50 -- These two models would predict that deaf people would have difficulties reading
00:11:57 -- because they would not be able to access spoken language phonology
00:12:03 -- which would be connected to printed letters.
00:12:07 -- And if this is a prerequisite to the direct route then following these two models deaf
00:12:15 -- people would also not be able to use the direct route for reading.
00:12:23 -- However, there is another model that attempts to explain the connection
00:12:28 -- between semantics and orthography.
00:12:31 -- [ Silence ]
00:12:39 -- This model takes its structure from models of computing,
00:12:45 -- where there are networks at play when people read.
00:12:54 -- Orthography which is the printed word has a relationship to phonology and semantics.
00:13:08 -- And the brain will use whatever route necessary.
00:13:13 -- It's essentially a race in your brain.
00:13:16 -- Your brain will use whatever cues are available, whether it's a cue
00:13:20 -- in the printed word and the orthography,
00:13:22 -- or if the cue is a syllable.
00:13:25 -- So the brain will use whatever cues are available to create meaning.
00:13:31 -- From a research perspective, and a theoretical perspective,
00:13:38 -- it is possible then following this model that deaf children can learn to read well
00:13:45 -- without first having to learn phonology, or how to speak.
00:13:50 -- [ Silence ]
00:13:57 -- So we wanted to test some of these questions about whether phonology is a prerequisite
00:14:04 -- for deaf children to learn to read well.
00:14:12 -- One way to test this is to set up an experiment
00:14:16 -- where you can manipulate the phonology and or orthography of words.
00:14:22 -- [ Silence ]
00:14:32 -- You can manipulate words according to spelling patterns, or speech patterns.
00:14:43 -- In English there are categories of sound symbol correspondence.
00:14:49 -- When I refer to sounds I'm talking about spoken language.
00:14:53 -- One category is called regular, where there's a regular correspondence
00:15:02 -- between sounds and written symbols.
00:15:05 -- There are a large number of words in this category.
00:15:08 -- Examples might be bust and dust; all words
00:15:12 -- that are spelled this way have a very predictable pronunciation.
00:15:22 -- When young hearing children learn to read, they learn words like bust and dust,
00:15:30 -- these regular words, first, because they're very consistent.
00:15:36 -- Another category of words in English have a regular spelling pattern
00:15:45 -- and there are regular pronunciations that go
00:15:49 -- with these spelling patterns but there are some exceptions.
00:15:53 -- For example, with braves and caves, the a is a long a sound.
00:16:00 -- And whenever you have an e at the end of a word,
00:16:03 -- you have a long vowel preceding it, except in a few cases.
00:16:08 -- For example, have.
00:16:11 -- [ Silence ]
00:16:16 -- So the regular inconsistent category has more regular words
00:16:21 -- than inconsistent but there are some exceptions.
00:16:26 -- Following these categories in instruction shows that hearing children really focus
00:16:33 -- on regularity in patterns and sounds.
00:16:44 -- Now the ambiguous category means that there might be a regular spelling pattern.
00:16:49 -- For example, o-w-n.
00:16:53 -- 50 percent of the words that are spelled with o-w-n at the end will sound one way,
00:16:59 -- but the other 50 percent have a different pronunciation.
00:17:03 -- And the reader would have to guess how these words are pronounced.
00:17:07 -- So this category is called then the ambiguous category, and it takes hearing children longer
00:17:14 -- to process ambiguous words when they encounter them in print, than regular words.
00:17:21 -- Now the fourth category is called strange.
00:17:25 -- English loves to borrow words from other languages
00:17:30 -- and we invent different spellings to adapt these words to English.
00:17:36 -- I don't know why the category is called strange, but we call it the strange category
00:17:43 -- because each spelling is essentially one of a kind.
00:17:49 -- We've got yacht, laugh and tongue listed as examples in this category.
00:17:56 -- And there are no other words in English that are spelled and pronounced
00:18:02 -- like these words that have the same patterns.
00:18:06 -- Strange words are the last category of words that hearing children are taught.
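The four categories just described can be thought of as degrees of consistency in how a spelling body is pronounced across the lexicon. Here is a minimal sketch of that logic; the mini-lexicon, the mock pronunciations, and the 75 percent threshold are all invented for this illustration, since real category assignment comes from large word norms.

```python
from collections import Counter

# Invented mini-lexicon: each spelling body maps to the pronunciations of
# the words that contain it. Mock pronunciation strings, for illustration.
BODY_PRONUNCIATIONS = {
    "-ust":  ["uhst", "uhst", "uhst", "uhst"],   # bust, dust, ... all alike
    "-ave":  ["eyv", "eyv", "eyv", "av"],        # brave, cave, ... but "have"
    "-own":  ["ohn", "ohn", "own", "own"],       # grown, blown / down, town
    "-acht": ["ot"],                             # yacht: one of a kind
}

def categorize(body):
    pronunciations = BODY_PRONUNCIATIONS[body]
    if len(pronunciations) == 1:
        return "strange"                         # unique spelling pattern
    top_share = Counter(pronunciations).most_common(1)[0][1] / len(pronunciations)
    if top_share == 1.0:
        return "regular"                         # fully predictable
    if top_share >= 0.75:
        return "regular inconsistent"            # mostly regular, some exceptions
    return "ambiguous"                           # reader has to guess

for body in BODY_PRONUNCIATIONS:
    print(body, categorize(body))
```

The point of the sketch is only that the categories form a gradient from fully predictable to one-of-a-kind sound-symbol correspondence.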
00:18:12 -- So we use these categories of sound symbol correspondence to see how deaf children react
00:18:25 -- when they encounter these sorts of words in text.
00:18:33 -- I'll explain the graph of hearing children's performance first.
00:18:42 -- Along the X axis, those different categories of words are plotted.
00:18:48 -- Regular words, regular inconsistent words, ambiguous and then strange words.
00:18:57 -- On the Y axis you have the mean reaction times
00:19:02 -- which indicates how quickly participants respond.
00:19:07 -- [ Silence ]
00:19:16 -- This trajectory of each of these lines indicates an average
00:19:19 -- of the third or fourth grade reading level.
00:19:24 -- The more regular a word is, the quicker a hearing child can recognize it.
00:19:34 -- So you're seeing that they're responding quickly but they're influenced
00:19:40 -- by the relationship between sound and spelling.
00:19:47 -- Now we administered this same test to deaf children
00:19:53 -- and let's see how their results patterned.
00:19:57 -- The data that I'll be showing next is not our data.
00:20:02 -- [ Silence ]
00:20:06 -- The axis on this graph are the same.
00:20:09 -- The X axis reflects the four categories of sound symbol correspondence,
00:20:14 -- and the Y axis represents reaction time.
00:20:23 -- These are the results for deaf children who speak, who attend an oral school for the deaf
00:20:33 -- in Montreal, and who did not sign at all.
00:20:38 -- What we see here is that the children's reaction time
00:20:46 -- [ Silence ]
00:20:53 -- drops. So they're able to respond faster the older that they get.
00:21:00 -- But when you look at their performance across categories,
00:21:04 -- the result is that they are not heavily influenced by sound symbol correspondence.
00:21:10 -- Next we'll look at deaf children who sign.
00:21:14 -- Some of these deaf children speak, some do not, but all sign.
00:21:18 -- [ Silence ]
00:21:30 -- I collected this data in Chicago.
00:21:34 -- My participants were all profoundly deaf.
00:21:40 -- And we achieved essentially the same results.
00:21:46 -- They recognize words more quickly as they become more proficient readers.
00:21:50 -- So the top line is from second graders; below that is fourth grade, then sixth and then eighth.
00:21:59 -- When you look across the graph, you also see they are not heavily influenced
00:22:05 -- by the sound symbol relationship.
00:22:08 -- So deaf children who do sign and do not sign are not showing much sensitivity
00:22:15 -- to the relationship between sounds and print.
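One simple way to summarize the flat versus sloped lines in these graphs is a single "regularity effect" score: the difference in mean reaction time between the least regular and most regular word categories. The numbers below are hypothetical, chosen only to show the contrast between a sloped hearing-reader line and a flat deaf-reader line.

```python
# Hypothetical mean reaction times (ms) per word category. A large
# regularity effect means a sloped line (sensitivity to sound-symbol
# correspondence); near zero means the line is flat.

def regularity_effect(rt_by_category):
    return rt_by_category["strange"] - rt_by_category["regular"]

hearing_child = {"regular": 800, "regular inconsistent": 860,
                 "ambiguous": 920, "strange": 1000}
deaf_child = {"regular": 900, "regular inconsistent": 905,
              "ambiguous": 910, "strange": 915}

print(regularity_effect(hearing_child))  # large effect: sloped line
print(regularity_effect(deaf_child))     # small effect: flat line
```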
00:22:17 -- [ Silence ]
00:22:29 -- Now we can look at adults and see whether deaf adults pattern like deaf children.
00:22:43 -- This data is from Charlene Chamberlain's dissertation.
00:22:49 -- The participants of the study were deaf volunteers from Montreal, Toronto and Ottawa.
00:23:03 -- She administered the same experiment where she looked at reaction time for word recognition.
00:23:11 -- [ Silence ]
00:23:16 -- She also divided her stimuli into the same four categories of regular,
00:23:30 -- regular inconsistent, ambiguous and strange.
00:23:34 -- You see the top line is of unskilled deaf readers.
00:23:39 -- And you're seeing a relatively flat reaction time.
00:23:44 -- So they're not affected by the different categories.
00:23:49 -- The next line down is skilled deaf readers.
00:23:54 -- Their line is also fairly flat.
00:23:58 -- These skilled deaf adults are skilled readers and skilled signers.
00:24:04 -- [ Silence ]
00:24:10 -- The skilled hearing readers also show a fairly flat reaction time across word categories.
00:24:21 -- So what this tells us about reading development is that phonological coding is something
00:24:29 -- that readers use when they don't read well
00:24:35 -- or when they encounter a low frequency word or a novel word.
00:24:41 -- So Charlene changed the stimuli and looked solely at the low frequency words.
00:24:50 -- She still divided the stimuli into the four categories of sound symbol correspondence.
00:24:56 -- You'll again see that unskilled deaf readers have a very slow reaction time.
00:25:03 -- So they're not recognizing words very quickly.
00:25:07 -- Hearing college students and deaf readers who sign well and read well look very similar.
00:25:14 -- [ Silence ]
00:25:19 -- Overall, these experiments show us that we do not have a lot of evidence that deaf people,
00:25:29 -- whether they are deaf children who do or don't sign, or deaf adults
00:25:34 -- who do or don't sign or speak,
00:25:37 -- use phonological coding.
00:25:39 -- [ Silence ]
00:25:54 -- Researchers have developed a number of ways of getting
00:25:57 -- at the question of phonological coding.
00:26:00 -- This is one way to get at that question.
00:26:05 -- You can ask a reader to make a judgment about sound, a phonic judgment.
00:26:12 -- For example, one type of test that is often used with children is one that you see here
00:26:21 -- where a number of pictures are presented and the children are asked
00:26:25 -- to say which word sounds different.
00:26:28 -- [ Silence ]
00:26:37 -- [ Background coughing ]
00:26:46 -- The right answer would be that bee does not sound like dog or frog.
00:26:54 -- [ Silence ]
00:27:03 -- So, this test attempts to look at whether or not children are using phonological coding,
00:27:10 -- but if you use this with deaf children, it confounds whether
00:27:16 -- they're using phonological coding or simply spelling.
00:27:21 -- Because if you were to present the text below each of these pictures,
00:27:26 -- you could see a relationship between the two final letters of dog and frog.
00:27:32 -- And, of course, you can figure out phonological patterns from spelling patterns.
00:27:32 -- So, one way to avoid the problem of whether they're using text
00:27:39 -- or whether they're using phonology is to use a test that's called a pseudohomophone task.
00:28:03 -- I'll start by looking at what a pseudohomophone is.
00:28:06 -- [ Silence ]
00:28:14 -- You could show subjects different words and ask them
00:28:18 -- to determine whether it is an actual word or not.
00:28:24 -- For the word pencil, we would say yes.
00:28:27 -- For the word haim, h-a-i-m, you would say no, that is not a word.
00:28:32 -- And then the final word is fleze, f-l-e-z-e.
00:28:38 -- If you were using phonological coding, you might want to respond with a yes,
00:28:48 -- because it sounds like the word f-l-e-a-s.
00:28:52 -- So, in this example, fleze is not a real word, but it sounds like a real word,
00:29:00 -- so it's considered a pseudohomophone.
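The logic of a pseudohomophone can be sketched in a few lines. The word-to-sound mappings below are made-up stand-ins for real grapheme-phoneme decoding, used only to show the definition: a nonword counts as a pseudohomophone when its decoded sound matches a real word.

```python
# Sketch of what makes a stimulus a pseudohomophone. The sound strings are
# invented placeholders, not a real phonetic transcription.

LEXICON = {"pencil": "pensil", "fleas": "fleez"}   # real words and their sounds
DECODED = {"fleze": "fleez", "haim": "haym"}       # nonwords sounded out

def is_pseudohomophone(nonword):
    """A nonword that sounds like a real word is a pseudohomophone."""
    return DECODED[nonword] in LEXICON.values()

print(is_pseudohomophone("fleze"))  # True: sounds like "fleas"
print(is_pseudohomophone("haim"))   # False: sounds like no real word
```

A reader who is phonologically coding is exactly the reader this trap catches: the sound route says "word" while the spelling says "nonword."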
00:29:03 -- [ Silence ]
00:29:18 -- Another way of creating a decoding task is to ask, for example, if this is a flower,
00:29:29 -- with the word iris, you would say yes.
00:29:32 -- The word r-o-w-s is not a flower, but it sounds like the word r-o-s-e, rose.
00:29:41 -- So, if you were using phonological coding, you would either incorrectly say yes,
00:29:48 -- because you're thinking of the flower rose, or your reaction time would be longer
00:29:53 -- because you're connecting it with another word that sounds similar.
00:29:59 -- So, these sorts of tasks also get at the question of whether
00:30:04 -- or not readers are using phonological coding.
00:30:07 -- [ Silence ]
00:30:14 -- Chamberlain also designed a pseudohomophone experiment
00:30:18 -- in her dissertation, which she used on deaf adults.
00:30:21 -- [ Silence ]
00:30:27 -- [ Audience coughing ]
00:30:34 -- The y axis measures reaction time.
00:30:37 -- The x axis plots pseudohomophones and non words that are not pseudohomophones.
00:30:47 -- What you're seeing here is that skilled deaf readers
00:30:54 -- and signers performed similar to their hearing counterparts.
00:31:01 -- Deaf adults who do not read well take a long time to make these decisions.
00:31:09 -- On this graph, we've plotted the rate of errors, the percentage of errors.
00:31:17 -- Again, on the x axis, you have pseudohomophones and non pseudohomophones.
00:31:25 -- The skilled deaf readers essentially can't be fooled.
00:31:33 -- They have a very low error rate that shows that they are not being distracted
00:31:39 -- by phonological coding when reading.
00:31:42 -- The hearing readers however make a very high number of mistakes because they are engaging
00:31:49 -- in phonological analysis and coding.
00:31:52 -- Now very interestingly, the deaf people
00:31:56 -- who are not considered good readers do make more mistakes than the skilled deaf readers,
00:32:05 -- but not as many as the hearing people, even though these deaf readers are not very
00:32:09 -- strong readers.
00:32:11 -- Excuse me.
00:32:12 -- They're not strong signers.
00:32:14 -- So what this tells us is that deaf children and adults, regardless of whether they sign well
00:32:28 -- or not, or speak or not, do not use phonological coding,
00:32:34 -- and despite not using phonological coding, they're still able to read well.
00:32:41 -- Now, you could be saying to yourselves, your lab has a bias.
00:32:47 -- And we have found results that we want because we are biased.
00:32:54 -- However, the literature also shows that there is a great deal of support for the idea
00:33:04 -- that phonological analysis is required for deaf readers.
00:33:12 -- [ Silence ]
00:33:29 -- To answer this question, we asked if VL2 would support a special sort
00:33:34 -- of study called a meta analysis.
00:33:37 -- We are very grateful to VL2 for their support for this research.
00:33:43 -- It's very hard to find a governmental agency that would support a research project like this
00:33:49 -- because the focus is very narrow and in some ways it can be regarded
00:33:54 -- as being a very odd question.
00:33:56 -- I'll start by explaining what a meta analysis is.
00:34:03 -- A meta analysis is a research study, but instead of testing people, you test or analyze papers
00:34:15 -- that have already been published.
00:34:17 -- [ Silence ]
00:34:23 -- And what we mean by testing or analyzing published papers is that we look
00:34:29 -- at the statistical methods and the statistics of these published papers
00:34:38 -- to determine how much variance there is among deaf readers and how much
00:34:45 -- of it can be explained as being due to phonological coding.
00:35:00 -- Now, when researchers believe their hypothesis to a great degree,
00:35:10 -- they may present their research in a way that shows
00:35:15 -- that there was a large effect.
00:35:17 -- But when you look at the numbers, sometimes it turns out that they had a very small subject pool
00:35:25 -- or there are other issues in their research.
00:35:29 -- For example, that the phonological effects, despite being present, are very small.
00:35:40 -- For example, the studies that show that taking aspirin in very small doses
00:35:50 -- is helpful came from a meta analysis of other research on aspirin.
00:35:59 -- I worked with Alex del Giudice
00:36:02 -- [ assumed spelling ]
00:36:02 -- and Amy Lieberman on this meta analysis.
00:36:09 -- We collected all of the research that we could find that had been conducted on deaf children
00:36:16 -- and their reading and/or word recognition.
00:36:20 -- All together, we found 231 papers and we read them all.
00:36:30 -- However, not all of the papers addressed phonological coding.
00:36:35 -- So, we selected those studies that fit our criteria.
00:36:40 -- One of those criteria was that it had to be an experimental measure of phonological coding,
00:36:47 -- because, of course, we needed to look at the statistics
00:36:50 -- and only an experimental measure would produce the statistics.
00:36:56 -- We also wanted to make sure that they did not include any tests of visual phonology.
00:37:07 -- Of those 231 studies, 58 studies met our inclusion criteria
00:37:14 -- and those 58 studies all together tested 2145 deaf subjects.
00:37:24 -- [ Silence ]
00:37:29 -- All of these 2145 subjects had severe to profound hearing loss, however,
00:37:37 -- their type of education was widely varied.
00:37:40 -- Excuse me, their communication mode and education was widely varied.
00:37:45 -- They were pre-readers to adults and they were from different countries,
00:37:50 -- as widely varied as from the Netherlands, France, the UK, Canada, etcetera.
00:38:03 -- The first question we asked when looking
00:38:05 -- at these 58 studies was whether deaf readers show evidence
00:38:12 -- for phonological awareness and/or phonological coding.
00:38:15 -- [ Silence ]
00:38:27 -- You could even do your own meta analysis by just counting the numbers of yeses and nos.
00:38:35 -- How many papers found evidence versus how many did not find evidence.
00:38:40 -- In looking at all 58 studies, we found an almost even 50/50 split.
00:38:47 -- [ Silence ]
00:38:53 -- A large number of studies found evidence of phonological coding, but an almost equal number
00:38:57 -- of studies did not find evidence for phonological coding.
00:39:02 -- These numbers were based on the full subject pool for each study,
00:39:10 -- but often when researchers have a group of participants, they will do a sub-study
00:39:15 -- of just a selected group of participants.
00:39:18 -- When you look at these subgroups, you still come out with a 50/50 split
00:39:23 -- of 50 percent showing evidence for phonological coding
00:39:27 -- and 50 percent not showing evidence of phonological coding.
00:39:33 -- When you pool these numbers, the total number of participants for the studies
00:39:41 -- that do show evidence is 673 and the number for those that do not show evidence is 636.
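The count-the-yeses-and-nos approach she mentions is known as vote counting, and it is simple enough to sketch directly. The five study entries below are hypothetical placeholders, not the actual 58 studies; the lecture reports roughly a 50/50 split with 673 versus 636 pooled subjects.

```python
# "Vote counting" version of a meta-analysis: tally studies, and their
# pooled subjects, for and against evidence of phonological coding.
# Study entries are hypothetical placeholders.

studies = [  # (found evidence of phonological coding?, n subjects)
    (True, 200), (False, 150), (True, 120), (False, 180), (True, 90),
]

yes_studies = sum(1 for found, _ in studies if found)
no_studies = len(studies) - yes_studies
yes_n = sum(n for found, n in studies if found)
no_n = sum(n for found, n in studies if not found)

print(yes_studies, "studies found evidence;", no_studies, "did not")
print(yes_n, "subjects vs", no_n)
```

Vote counting ignores effect size and sample quality, which is why the meta-analysis goes on to pool the actual statistics rather than stopping here.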
00:39:50 -- This could have opened up several other opportunities for research,
00:39:54 -- but we didn't pursue other issues.
00:39:58 -- The second question that we asked about these 58 studies was: if phonological coding is present,
00:40:09 -- do those phonological coding skills predict reading achievement for deaf people?
00:40:18 -- Understanding whether or not phonological coding leads
00:40:21 -- to good reading is a very important question.
00:40:28 -- Now, when you think about how to answer this question,
00:40:31 -- you have to have an experimental design for phonological coding and reading skills.
00:40:37 -- It's not enough to simply look at one without the other.
00:40:43 -- Only 25 out of the 58 research studies met this criterion
00:40:49 -- of measuring both phonological coding and reading.
00:40:52 -- [ Silence ]
00:41:00 -- These 25 studies tested 1074 subjects.
00:41:05 -- Again, all of them were severely or profoundly deaf.
00:41:09 -- The languages represented by these participants were English, French and Dutch.
00:41:14 -- Their communication levels were widely varied
00:41:16 -- and their ages also varied from six to adulthood.
00:41:20 -- Take a look at what they found.
00:41:24 -- [ Silence ]
00:41:34 -- The first question that we encountered in doing the meta analysis was how
00:41:38 -- to represent what all 25 studies found.
00:41:43 -- We looked at the statistical average of the groups represented in these studies.
00:41:55 -- Now in each study, there could be several groups.
00:42:02 -- And there will always be variance.
00:42:06 -- Statistics, however, wants to measure the overlap of the variance across groups.
00:42:18 -- [ Silence ]
00:42:24 -- So when a researcher conducts their statistical analysis and they say
00:42:28 -- that something is significant, it means that one factor is reliably related to something else.
00:42:35 -- And what they want to measure is how much variance is linked to that one factor.
00:42:42 -- The numbers on the y axis represent the percentage, so .4 is 40 percent
00:42:51 -- and that would mean that 40 percent of all of that variation
00:42:57 -- across subjects could be accounted for by phonological coding.
00:43:04 -- The average effect of phonological coding across all studies is represented
00:43:13 -- by this blue bar and it's just below .4.
00:43:18 -- The line in the middle represents the amount of variance that each study had from this average.
00:43:30 -- [ Silence ]
00:43:43 -- We also looked at the effect size of each of those 25 studies.
00:43:49 -- The effect size was plotted on the right of this graph
00:43:55 -- and across the 25 studies, you see a very wide range.
00:43:59 -- The r squared represents the average effect for phonological coding
00:44:05 -- and that comes out to about 10 percent.
00:44:09 -- What that means is that out of over 1000 deaf people tested for phonological coding
00:44:16 -- and reading, phonological coding can only explain about 10 percent of the differences
00:44:23 -- that you see across those 1000 deaf readers.
00:44:27 -- [ Silence ]
00:44:37 -- This led us to another question.
00:44:38 -- Given the large amount of variance between phonological coding
00:44:44 -- and deaf people's reading skills, what can account for this variance?
00:44:50 -- How can we explain these differences?
00:44:52 -- [ Silence ]
00:45:10 -- Now, the several different methods that I mentioned that could be used
00:45:14 -- for testing phonological coding also have different cognitive requirements attached
00:45:20 -- to each of them.
00:45:22 -- There are also different ways to manipulate the spelling and sound correspondences.
00:45:28 -- And the way in which these researchers use these variables could also have an impact
00:45:34 -- on their results.
00:45:36 -- So, we looked at all of these research studies and categorized the sorts of tasks
00:45:44 -- that they used according to the phonological variables and cognitive variables.
00:45:53 -- Some might test memory.
00:45:58 -- Identifying would include a task where they have to look at one stimulus
00:46:03 -- and then select another one that it represents.
00:46:08 -- Matching, they have to match to stimuli.
00:46:10 -- Judgment tasks.
00:46:13 -- And then the last two categories are production tasks where they either have
00:46:18 -- to produce a written sample or they produce a spoken sample.
00:46:25 -- Phonological factors are listed in the columns.
00:46:29 -- You can manipulate the syllables of spoken language or individual phonemes.
00:46:39 -- You can ask people to determine whether something rhymes.
00:46:43 -- So, that really is looking at vowel sounds.
00:46:46 -- You can use pseudohomophones.
00:46:51 -- You can use words that have a regular relationship between spelling and sound.
00:46:57 -- You can manipulate spelling of words.
00:47:00 -- And you can use words that have silent letters in these tasks.
00:47:10 -- We created this chart and plotted all of the studies according to what sorts
00:47:15 -- of manipulations they used in their tasks.
00:47:19 -- [ Silence ]
00:47:22 -- [ Audience Coughing ]
00:47:26 -- And the difference in what sort of methods were used in these studies turned
00:47:31 -- out to sometimes have a very large impact on the results that they achieved.
00:47:36 -- [ Silence ]
00:47:48 -- Each number in the cell shows the number of studies
00:47:53 -- that used both that cognitive method and that sort of phonological unit.
00:48:02 -- [ Silence ]
00:48:10 -- So this helped us to determine whether or not the task type
00:48:13 -- or the phonological unit was what made the difference.
00:48:20 -- After plotting the numbers in this graph, we looked at the statistical effect size.
00:48:24 -- [ Silence ]
00:48:33 -- These numbers show what percentage of the differences
00:48:39 -- in reading can be attributed to phonological coding.
00:48:46 -- [ Silence ]
00:48:57 -- I'd like to draw your attention to this highlighted cell
00:49:01 -- in the category of pseudohomophone judgment.
00:49:07 -- This is the type of task that you can use that avoids the problem of visual decoding.
00:49:14 -- For example, "bee," "dog," "frog." And you see that it doesn't have much of an effect;
00:49:23 -- it doesn't explain much of the variance in reading.
00:49:26 -- However, there were two research studies that have the highest effect size
00:49:34 -- which asked participants to speak.
00:49:41 -- So, these would include tasks where someone had to see a syllable and say it.
00:49:48 -- Now, when you eliminate the studies that have very high and very low numbers,
00:49:54 -- [ Silence ]
00:50:07 -- it means that phonological coding still does not explain why deaf children
00:50:15 -- or adults are or are not reading well.
00:50:19 -- [ Silence ]
00:50:31 -- We also know that what you're asking children to do can also have a big impact on the results.
00:50:38 -- [ Silence ]
00:50:43 -- We also wondered whether the large variability
00:50:47 -- in the effect size was perhaps a result of the reading level.
00:50:52 -- Perhaps young readers who aren't very proficient are depending
00:50:57 -- on phonological coding for a while, and then they don't rely
00:51:05 -- on phonological coding as adults.
00:51:07 -- We know that's true for hearing children,
00:51:09 -- but we don't know whether it's the case for deaf people.
00:51:15 -- On the x axis, reading grade levels are plotted from grades one to 12.
00:51:23 -- The y axis plots effect size for phonological coding.
00:51:28 -- This is, by the way, not a chart of data.
00:51:36 -- This is just showing what would be the case
00:51:39 -- if young children were using phonological coding a great deal at a young age
00:51:45 -- and then were not using it as they became more proficient readers.
00:51:49 -- You would see a line like this one, slanting downward.
00:51:53 -- [ Silence ]
00:52:04 -- Or if it were that deaf people were developing phonological coding skills
00:52:12 -- through reading proficiency, you would see a line that slants upward.
00:52:18 -- The actual data showed this.
00:52:25 -- Each diamond represents one research study.
00:52:31 -- And the placement of the dot or diamond is based on the average reading ability
00:52:41 -- for all their participants and the size of the relationship between phonological coding
00:52:47 -- and reading that was found in that study.
00:52:53 -- When you look at this, you again see that there is not much of a relationship
00:52:59 -- between the phonological coding and reading.
00:53:04 -- It's flat.
00:53:05 -- So, we're not seeing that phonological coding predicts reading skills.
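The "flat line" conclusion amounts to fitting a line through the study points (effect size versus average reading grade level) and getting a slope near zero. A hedged sketch; the study values below are invented for illustration and are not the actual meta-analysis data:

```python
# Least-squares slope of effect size on average reading grade level.
# Each (grade, effect) pair stands in for one study; the numbers are
# hypothetical, chosen only to illustrate a flat trend.
def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

grades = [2, 3, 4, 5, 6, 8, 10, 12]
effects = [0.30, 0.35, 0.28, 0.33, 0.31, 0.34, 0.29, 0.32]
print(f"slope = {slope(grades, effects):+.4f}")  # near zero: a flat trend
```

A slope near zero means reading level tells you essentially nothing about how strongly phonological coding relates to reading in a given study.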
00:53:10 -- [ Silence ]
00:53:17 -- Given the answers that we found to these three questions, and two years' worth
00:53:23 -- of this meta-analysis of these 58 studies, looking at them in such depth
00:53:32 -- (this, by the way, was a very good experience for my graduate students,
00:53:36 -- who conducted all of these statistics),
00:53:40 -- we found that phonological coding was irrelevant for deaf readers.
00:53:46 -- Now, a number of these studies, though, did look at other factors, which they reported
00:53:55 -- as having a possible relationship to reading.
00:54:05 -- Some of these factors are listed here in the first column.
00:54:09 -- The next column is a measure of the correlation, which relates to the effect size.
00:54:20 -- The n is the number of studies that found evidence
00:54:25 -- for that factor having a correlation with reading.
00:54:29 -- So, again, the correlation is the average correlation between reading and that factor.
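The talk doesn't say how the per-study correlations were averaged; a standard meta-analytic choice, assumed here purely for illustration, is to convert each r to Fisher's z, average, and convert back:

```python
import math

def pool_correlations(rs):
    """Average correlations via Fisher's z transform. This is a common
    meta-analytic convention, assumed here; the talk does not specify
    the exact pooling method used."""
    zs = [math.atanh(r) for r in rs]      # r -> z
    return math.tanh(sum(zs) / len(zs))   # mean z -> back to r

# Hypothetical per-study correlations, for illustration only.
print(round(pool_correlations([0.25, 0.40, 0.35, 0.30]), 3))
```

Averaging in z-space rather than averaging the raw r values avoids the bias that creeps in because r is bounded at ±1.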
00:54:39 -- The first factor that was mentioned was language proficiency, whether it's their ASL proficiency,
00:54:50 -- a measure of vocabulary, a measure of spoken English proficiency.
00:54:56 -- That correlated with reading at 0.59.
00:55:02 -- Phonological coding correlated with reading at a level of 0.34.
00:55:10 -- These results suggest that language is important and arguably more important as a predictor
00:55:18 -- for reading skills than phonological coding.
00:55:23 -- How much time do I have left?
00:55:24 -- Ten minutes, ok.
00:55:27 -- [ Silence ]
00:55:51 -- So we've given you experimental evidence from people that we have studied ourselves:
00:55:57 -- children and adults who sign and who don't sign, who read well and who are not skilled readers,
00:56:05 -- and phonological coding does not seem to be a predictor for any of them.
00:56:13 -- We've also presented evidence from what we have found from other studies
00:56:17 -- and we are still not finding evidence that phonological coding predicts reading skill
00:56:23 -- but we do have some evidence that language proficiency may have a role for reading skill.
00:56:34 -- We tested the relationship between participants' ASL fluency and their reading skills.
00:56:41 -- [ Papers shuffling ]
00:56:47 -- [ Cleared throat ]
00:56:54 -- Across the literature you will see some debate about the sign language factor.
00:57:03 -- Some of the cons that are presented are that ASL is not English,
00:57:07 -- that ASL is not spoken, and that ASL, for those two reasons, does not map onto written English.
00:57:17 -- So the conclusion is drawn that you therefore would not expect ASL to be correlated
00:57:23 -- with reading abilities, and this is also an argument that has been used by educators
00:57:35 -- for why ASL should not be used in the educational setting for enhancing reading.
00:57:42 -- Now, at the same time research on the brain indicates
00:57:48 -- that the brain doesn't care about modality.
00:57:52 -- The brain doesn't care whether a language is signed or spoken.
00:57:59 -- We also know that a number of hearing people learn English, or any L2, through reading,
00:58:10 -- and they can learn an L2 through reading without being able to speak it.
00:58:19 -- Linguists talk about the case of what happened in Germany before World War II.
00:58:26 -- There were many linguists that were well versed in other languages, fluent in other languages,
00:58:32 -- Italian, French, Russian, languages other than German.
00:58:37 -- When they heard that the Nazis were coming and it was time to flee,
00:58:43 -- many of the linguists didn't flee to those countries, and when they were asked why they didn't go
00:58:50 -- to France or Italy or whichever country it was that used the language
00:58:56 -- that they were considered an expert in, they said that despite being an expert
00:59:02 -- in that language, they were only fluent in it in text; they couldn't speak
00:59:08 -- or use that language to communicate orally.
00:59:10 -- So we know that there are a number of people that are fluent in a second language but only
00:59:15 -- in print, not through face-to-face communication.
00:59:18 -- [ Silence ]
00:59:29 -- I conducted this experiment with Chamberlin.
00:59:36 -- The participants were the same people from her earlier study, mentioned earlier
00:59:41 -- in the presentation, who were exposed to the pseudohomophones.
00:59:54 -- If you were to say you're testing whether or not ASL inhibits reading and you ask people
01:00:04 -- to participate, first you have to see whether or not they're willing.
01:00:08 -- Then you have to see what their ASL skills are, to see if that is a predictor.
01:00:31 -- If ASL truly inhibits English reading, what you would see
01:00:36 -- across the participants is that people who are fluent in ASL are not good readers
01:00:40 -- and that people who are not fluent in ASL should be the good readers.
01:00:45 -- This was a retrospective study.
01:00:57 -- We asked people from the deaf communities in Montreal, Toronto, and Ottawa
01:01:05 -- to participate in a research study, and we would accept anybody.
01:01:11 -- All were adults who had already graduated from school
01:01:18 -- and had been out of school for many years.
01:01:21 -- We administered three reading tests and we measured their ASL skills
01:01:29 -- by administering a grammaticality judgment task.
01:01:35 -- We also administered a narrative comprehension task, and a task where we measured
01:01:48 -- how much they could comprehend from a sign-supported speech stimulus.
01:01:55 -- We also administered a sort of mini IQ test and we asked them
01:02:03 -- to self rate how well they could speak a spoken language and how well they could understand it.
01:02:13 -- So, we asked for these participants, and we didn't know how well they would read.
01:02:24 -- As it turned out, they had an eighth-grade average.
01:02:35 -- The government considers you a skilled reader if you can read at the eighth-grade level
01:02:40 -- or above; you are functionally literate.
01:02:44 -- If you read below the eighth-grade level,
01:02:47 -- you are considered an unskilled or functionally illiterate reader.
01:02:52 -- So again, anyone in our study who could read above grade eight was put
01:03:00 -- into the skilled reader category, and anyone below that was put into the unskilled category.
01:03:09 -- Then we looked at what their language characteristics were.
01:03:12 -- [ Silence ]
01:03:25 -- We had a total of 31 participants.
01:03:28 -- We started out with 40 but we had to ask people to come back two times and some people dropped
01:03:33 -- out so we were left with 31 and we found that on the Stanford test,
01:03:43 -- many of our skilled readers were reading above the tenth grade level.
01:03:49 -- Again, these readers are beyond the eighth-grade level.
01:03:52 -- Our less skilled readers are the ones below eighth grade and on average they're reading
01:04:01 -- at about the third to fourth grade level across all three measures and again we're looking
01:04:10 -- at adults who had graduated from school several years earlier and we had equal numbers
01:04:18 -- of subjects in the skilled and less-skilled group.
01:04:21 -- Now let's look at their sign language comprehension.
01:04:27 -- The skilled readers performed very well on the grammatical judgment task.
01:04:34 -- They had a very good command of ASL syntax and they could comprehend the narrative very well
01:04:42 -- and they could comprehend the story that was given in manually-coded English very well.
01:04:48 -- The adult subjects who were not very good readers had a fairly weak command of ASL syntax.
01:05:00 -- They did not understand the narratives in ASL very well, nor in manually-coded English,
01:05:09 -- and they did not sign well either.
01:05:13 -- So, this shows a link between ASL skills and reading skills.
01:05:20 -- [ Silence ]
01:05:30 -- Remember, we also asked the subjects to rate themselves on their speech intelligibility
01:05:38 -- and how well they could understand speech.
01:05:40 -- They rated themselves on a scale from zero to ten, and the questions asked how well friends
01:05:47 -- or strangers were able to understand them.
01:05:51 -- Ten would mean that they could be understood and understand others very well.
01:05:56 -- Zero means that they don't understand others at all and others don't understand them.
01:06:02 -- You're seeing here that the skilled readers and the less-skilled readers look very similar
01:06:17 -- and you're seeing then that their speech skills are not what matters for reading.
01:06:25 -- It's their general language skills.
01:06:31 -- Additionally, I think we all know that language alone is not enough for reading.
01:06:40 -- One out of five people in the U.S. are functionally illiterate despite being able
01:06:47 -- to speak a language.
01:06:49 -- So we know that it's not just language.
01:06:52 -- What, then, does support literacy?
01:06:59 -- There's a Canadian researcher by the name of Stanovich
01:07:03 -- who developed a measure of reading frequency.
01:07:08 -- It's really a very creative measure.
01:07:16 -- This test has a list of authors.
01:07:23 -- Subjects are to circle the authors that are real authors.
01:07:29 -- However, there are also names on this list of people who are not authors.
01:07:34 -- Now, if you read a good deal, you would presumably know a number of authors.
01:07:42 -- You wouldn't know all of them but you would know a good number of authors
01:07:45 -- and you would circle the authors that you heard of.
01:07:50 -- People that don't read much would probably circle random names and have a number of errors.
01:07:55 -- Another part of this task includes magazine titles.
01:07:59 -- The list includes titles of real magazines and not real magazines.
01:08:04 -- The same thinking goes into interpreting results.
01:08:06 -- If you read a good deal, you'll circle correct magazine titles.
01:08:09 -- If you don't, you'll circle a variety of titles.
01:08:14 -- This is used as a measure of reading frequency
01:08:18 -- and it's a fairly quick way to measure reading frequency.
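The talk doesn't detail how the checklist is scored, but a common rule for such recognition tests, assumed here for illustration, is hits (real names circled) minus false alarms (foil names circled), which penalizes people who circle at random:

```python
def recognition_score(circled, real_names, foils):
    """Score a print-exposure checklist as real names circled minus
    foils circled. This scoring rule is an assumption; the talk does
    not specify how the measure is computed."""
    hits = sum(1 for name in circled if name in real_names)
    false_alarms = sum(1 for name in circled if name in foils)
    return hits - false_alarms

# All names below are placeholders, not items from the actual test.
real = {"Author A", "Author B", "Author C"}
foils = {"Foil X", "Foil Y"}
print(recognition_score({"Author A", "Author B", "Foil X"}, real, foils))  # prints 1
```

Someone circling at random gains as many false alarms as hits, so their score hovers near zero, while a frequent reader's hits dominate.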
01:08:22 -- This researcher found that, for children and adults, people who read well read often.
01:08:35 -- People who do not read well avoid reading.
01:08:40 -- So what does this mean for our group of deaf adults?
01:08:45 -- Remember some read well and some don't read well.
01:08:49 -- [ Silence ]
01:08:58 -- The good readers recognized a lot of magazine titles, many more than the less-skilled readers.
01:09:07 -- The skilled readers also recognized many more authors
01:09:12 -- than the less-skilled readers recognized.
01:09:16 -- So, this means that we do have evidence that people who read well read often
01:09:27 -- and people who don't read well don't read often.
01:09:37 -- So from our experiment we see that the best predictor of reading skills
01:09:52 -- for the deaf adults in our study was their language ability: their command of ASL syntax
01:09:59 -- and grammar, and comprehension of language.
01:10:03 -- Deaf adults who read well read often and it seems then that there may be two factors
01:10:13 -- at a minimum that support reading development; language proficiency and reading frequency.
01:10:23 -- These two factors seem to be stronger predictors for reading skills than phonological coding.
01:10:30 -- [ Sneezing ]
01:10:32 -- Ok. Now we're just about at the end.
01:10:37 -- In the title of my talk I promised that we'd talk
01:10:41 -- about the linguistic foundations of reading.
01:10:48 -- We see that vocabulary is important.
01:10:54 -- It is important to be able to read words, but phonological coding is not important
01:11:03 -- for how we read these words, and some models of reading, for example
01:11:10 -- the connectionist model, apply best to reading development in deaf people.
01:11:19 -- We also see that language development and the acquisition of a natural sign language
01:11:27 -- like ASL can facilitate reading; however, that's not enough.
01:11:32 -- If you want to become a skilled reader, you must also practice.
01:11:38 -- Reading more frequently means reading practice and that is a very important factor
01:11:45 -- in reading development for deaf children.
01:11:50 -- It's obviously also important that a child not just practices
01:11:56 -- but also has good language development.
01:12:00 -- Finally, I'd like to thank VL2 again for supporting our meta-analysis, and I would also
01:12:09 -- like to thank the Federal agencies that have supported my other research
01:12:14 -- that was presented in this talk.
01:12:16 -- I'd like to thank my research assistants for all of their patience and all of their help
01:12:22 -- in analyzing all of these research studies.
01:12:24 -- Thank you.
01:12:26 -- [ Silence ]
01:12:34 -- [ Coughing ]
01:12:43 -- Good Afternoon.
01:12:44 -- I'm a deaf studies graduate student.
01:12:47 -- In your talk about phonology, I wanted to ask: when you did your study, did it look at phonetics
01:12:57 -- in terms of deaf children and hearing children
01:12:59 -- and how they learn how to read through phonetics?
01:13:03 -- Did you specifically look at that issue?
01:13:15 -- That's a good question.
01:13:20 -- Under phonology I include phonetics; phonology, to me, is the same thing here.
01:13:28 -- I use the term phonology, but it also covers phonetics.
01:13:40 -- Phonological coding involves phonetic analysis.
01:13:48 -- The teaching of that is called phonics.
01:13:52 -- [ Silence ]
01:14:08 -- Hello. My question relates to when you were talking about varied experiences.
01:14:16 -- I think one of the things that was missing, I did not see is the use of finger spelling
01:14:24 -- and the reason I bring this up is because we've seen a lot of literature related to phonology
01:14:30 -- and sound and how that translates orthography to the understanding of the word
01:14:38 -- but my understanding of characters is that some of them correspond to sound but some
01:14:50 -- of them are actually more closely correlated to meaning.
01:14:58 -- So we have shallow and deep orthography in different languages.
01:15:13 -- Of course, orthographies are either shallow or deep, so why is it that we're not looking
01:15:21 -- at characters and their close relationship to meaning?
01:15:24 -- Why are we looking at the phonology of them instead of looking at some languages
01:15:31 -- that do not have that strong phonological base?
01:15:36 -- Can you talk a little bit about that?
01:15:38 -- [ Inaudible ]
01:15:48 -- [ Two people talking at same time over each other ]
01:15:48 -- Dr. Rachel Mayberry: Some languages have a deep orthography and some have a shallow orthography.
01:15:55 -- The idea of a shallow orthography is a regular relationship
01:16:01 -- between the written word and sounds, without a lot of exception words.
01:16:04 -- Research with hearing children suggests that children learn
01:16:07 -- to read those languages quickly, so we wondered whether deaf readers can read
01:16:18 -- those languages quickly too, but there's not a lot of research on that,
01:16:25 -- and there is some research that suggests
01:16:27 -- that Italian deaf children have problems learning how to read.
01:16:34 -- So, it's possible that a language could have a shallow orthography and be regular,
01:16:40 -- and that deaf children would still have problems reading the language
01:16:44 -- because their own language situation is unique.
01:16:49 -- For languages that have character writings, like Chinese, research has suggested
01:16:56 -- that in the character there are elements that represent sound.
01:17:05 -- And there's some research that suggests that Chinese readers do use sound
01:17:11 -- when they're reading the characters.
01:17:15 -- Also, reading these characters is hard, and you know that over time
01:17:22 -- another kind of writing has been added to represent sound.
01:17:25 -- Now, I don't know of studies of Chinese deaf readers, so in our meta-analysis we tried
01:17:42 -- to follow the language of each study.
01:17:45 -- Oh, there's Hebrew and I know Paul Miller, he's in the VL2.
01:17:51 -- He studies Hebrew, and in Hebrew writing a lot of the vowels are missing,
01:18:00 -- so the hypothesis is that it is easier to read Hebrew if you know the sound patterns
01:18:07 -- of the language, but it seems that deaf children don't have an advantage in reading Hebrew either.
01:18:15 -- So, in the writing system.
01:18:21 -- No, no, no.
01:18:23 -- It's regular.
01:18:26 -- Ok. So, I think the question you ask is an important one, and in our meta-analysis
01:18:34 -- we always coded the language of each study.
01:18:38 -- Was it Italian?
01:18:39 -- Was it French?
01:18:39 -- Was it Hebrew?
01:18:41 -- Was it, I think I said French?
01:18:44 -- It didn't seem to make a difference in the results.
01:18:47 -- [ Silence ]
01:18:59 -- Hello, my name is Lorene Simms [assumed spelling].
01:19:03 -- During your presentation I was thinking to myself about how I read.
01:19:09 -- I was still paying attention to your presentation, of course,
01:19:12 -- but what you have shown seems to be that there's not enough evidence
01:19:18 -- of a strong correlation between phonological awareness
01:19:22 -- and reading in deaf people. So I know that even though I cannot hear,
01:19:26 -- I'm an "oral failure," or have been labeled as such.
01:19:30 -- There's always something inside of me that I do use to read.
01:19:36 -- I don't know if it's I'm speaking to myself or something.
01:19:38 -- I don't have the experience of sound and I'm not sure
01:19:43 -- about what we mean when we say the word phonology.
01:19:45 -- If people who are also deaf and have the same background as me have
01:19:51 -- something like an inner voice, is that also considered under the category
01:19:57 -- of phonology, or is that a different phenomenon,
01:20:00 -- just a different category of strange, if you will?
01:20:05 -- Dr. Rachel Mayberry: I think that's a very
01:20:08 -- [ Coughing ]
01:20:08 -- I'm sorry.
01:20:09 -- That's a very important question, and some researchers hypothesize
01:20:19 -- that all readers use phonological coding,
01:20:29 -- but for deaf readers you just have to find the right modality.
01:20:36 -- So, it could be motor-kinesthetic.
01:20:39 -- It could be where you feel it in your mouth or it could be lip reading.
01:20:44 -- It could be some visual representation of what you see.
01:20:46 -- It's really difficult to study those.
01:20:55 -- It's really difficult to get at that particular question but I want to say that I think
01:21:04 -- that it's very clear that when somebody is a good reader,
01:21:08 -- they have a mental representation of the letters.
01:21:13 -- To be a good reader you really have to know different spelling patterns
01:21:18 -- and different writing patterns, and we really don't know what the internal mental
01:21:24 -- representation really is for a hearing reader or for a deaf reader, and we have often debated
01:21:33 -- in the [inaudible] that it might be that if you become a really skilled reader,
01:21:39 -- then in fact you might be able to deduce some of the phonological patterns
01:21:45 -- that are being represented in the writing.
01:21:47 -- So, I don't mean to say in our work that readers
01:21:53 -- who [inaudible] don't use any phonological coding;
01:21:56 -- the problem with this term is it's really broad.
01:22:00 -- What I do mean to say is that we don't find evidence that using sound specifically,
01:22:05 -- converting letters to sound, predicts reading skill.
01:22:21 -- We have time for one last question.
01:22:24 -- Thank you very much.
01:22:28 -- Hello, I'm Judy Monty [assumed spelling].
01:22:34 -- My question is similar to Lorene's, or I had a similar experience to Lorene's
01:22:39 -- as I was watching you, but I was also thinking about my quote-unquote dark years,
01:22:45 -- when, like many deaf children, I sometimes struggled to read. During that time, for me,
01:22:56 -- it was because I was being taught phonetically, and so I was really just using memorization.
01:23:02 -- It wasn't until later that I became a skilled reader, and at that point
01:23:08 -- I think that phonology helps deaf readers who are already proficient.
01:23:13 -- That's something I can't explain, but I think many of us who are skilled readers
01:23:18 -- and are deaf sort of internalize that phonology, but we don't use it until later.
01:23:28 -- So we see deaf and hard-of-hearing kids who perhaps are looking at the sound-based way
01:23:39 -- of understanding a word, but there's no meaning there.
01:23:42 -- It's interesting that many deaf people can't remember reading,
01:23:50 -- or reading really well, at school, and it wasn't until later
01:23:53 -- that they became skilled readers; maybe then something happens with the phonology.
01:24:00 -- Dr. Rachel Mayberry: We get this response a lot through our work.
01:24:02 -- We're not saying that phonological coding doesn't help reading.
01:24:09 -- We're not saying that deaf people don't engage in phonological coding,
01:24:17 -- or that it doesn't help with learning how to read.
01:24:19 -- What we're saying is that it's a small part of learning how to read,
01:24:25 -- that it's not the be-all and end-all of reading, and likewise
01:24:34 -- that phonological coding can be done by people who cannot speak.
01:24:41 -- So, really I think it's pretty clear that language skills
01:24:45 -- and reading frequency are more important than phonological coding.
01:24:52 -- So I'm not saying that it has no role or that people don't do it
01:24:57 -- or that hearing people don't do it.
01:25:00 -- I think that reading is contextual and people develop different paths.
01:25:08 -- But it's not all or nothing.
01:25:09 -- When you read the literature you get the idea
01:25:11 -- that if you don't phonologically code, then forget it.
01:25:17 -- You are condemned to a lifetime of illiteracy and that is one
01:25:23 -- of the reasons we wanted to do the meta-analysis.
01:25:37 -- Thank you Rachel.
01:25:42 -- [ Applause ]
01:25:43 -- We have a reception, a short reception right out here in the hall.
01:25:49 -- Please fill out your evaluation forms as always and join us
01:25:54 -- to continue the discussions out in the hallway.
01:25:56 -- Thank you very much.