00:00:31 -- Okay. Welcome to our next three zero two
00:00:34 -- presentation series.
00:00:36 -- It's my pleasure today to introduce
00:00:38 -- to you Dr. Max Riesenhuber.
00:00:41 -- I'm sorry, I'm not good with that, but I tried;
00:00:43 -- I asked him and I can't do it very well. Anyway,
00:00:47 -- he's from Georgetown University Medical School
00:00:49 -- in the Department of Neuroscience.
00:00:51 -- He joined that department in two thousand and three from MIT,
00:00:54 -- the Massachusetts Institute of Technology,
00:00:58 -- where he got both his PhD and did his postdoctoral work.
00:01:01 -- I would like to just read a few things from his website,
00:01:04 -- because I think they're very insightful
00:01:06 -- about his particular work.
00:01:08 -- In, let's see, I forget the year but probably around two thousand
00:01:12 -- and three, Technology Review magazine named Dr. Riesenhuber
00:01:18 -- one of the hundred innovators, thirty-five years of age
00:01:22 -- or younger, whose technologies were poised
00:01:24 -- to make a dramatic impact on our world.
00:01:26 -- So he's been recognized for his work; he needed, according
00:01:31 -- to this particular article, a lot of time on an fMRI machine.
00:01:37 -- So he wrote a grant to the National Science Foundation
00:01:40 -- and he was awarded a Faculty Early Career Development award
00:01:43 -- for about seven hundred and fifty thousand dollars.
00:01:46 -- So you can see that his work has been well supported
00:01:48 -- and well recognized.
00:01:50 -- It's very exciting and definitely my pleasure
00:01:52 -- to introduce you and to hand things over to you.
00:01:55 -- Thank you very much.
00:01:56 -- Max Riesenhuber: Thanks very much, Diane.
00:01:58 -- Okay, are we on?
00:01:59 -- Okay, great.
00:02:01 -- Okay well, thanks so much for having me and let's jump right
00:02:05 -- in because there are a lot of slides.
00:02:08 -- So what we're interested in is well,
00:02:10 -- how does the brain recognize objects?
00:02:12 -- So how do we go from the world out there to the world in here
00:02:16 -- and it's a pretty hard problem, and something
00:02:21 -- that you're doing right now:
00:02:23 -- you're reading the letters here,
00:02:27 -- you're reading the patterns on the screen,
00:02:28 -- you're recognizing the words,
00:02:30 -- and you're doing it all the time in other domains too.
00:02:35 -- People are getting paid for it.
00:02:36 -- Right, if you're a radiologist, you have to recognize tumors
00:02:39 -- on the X-rays for instance.
00:02:40 -- If you're a baggage screener then obviously you have
00:02:43 -- to recognize certain objects and we know a number of disorders
00:02:47 -- where object recognition is affected.
00:02:49 -- So, for instance in dyslexia or autism,
00:02:52 -- two areas that we're interested in, there are differences
00:02:56 -- in object recognition that we're trying to understand.
00:02:59 -- So here it's recognition of the printed word,
00:03:02 -- and here it's actually face processing,
00:03:04 -- recognizing faces, that's affected.
00:03:06 -- And the goal of our research is
00:03:09 -- to understand how the brain recognizes objects,
00:03:13 -- and that then provides a framework with which to look
00:03:17 -- at these different disorders or application areas and then try
00:03:22 -- to make inroads into understanding those.
00:03:25 -- So, why is object recognition hard?
00:03:28 -- And so here is an illustration. How many
00:03:33 -- of you know Alice Cooper?
00:03:37 -- No, no one wants to admit they know Alice Cooper, so,
00:03:44 -- so the problem with vision here is so these images,
00:03:48 -- this is all Alice Cooper, right?
00:03:50 -- And so on your retina in your eye,
00:03:53 -- they all look very different, right?
00:03:55 -- Very different activation patterns on your retina
00:03:59 -- but they all in your brain then activate like Alice Cooper,
00:04:03 -- right and you realize it's, it's the same person.
00:04:07 -- However, you also might recognize that,
00:04:10 -- is this Alice Cooper?
00:04:12 -- No, no, it's, so it's Gene Simmons,
00:04:15 -- obviously because he's got the tongue right?
00:04:19 -- So, and it's very hard again because for your brain,
00:04:22 -- I mean there's studs and there's leather
00:04:24 -- and there's face paint in both, right?
00:04:28 -- So they're very similar on your, on your retina to some extent
00:04:31 -- but the big difference is
00:04:33 -- that Gene Simmons has the long tongue, right?
00:04:36 -- So there's the small detail in the image
00:04:39 -- that then makes a difference.
00:04:41 -- On the one hand, a lot of variability that corresponds
00:04:46 -- to the same object, Alice Cooper,
00:04:49 -- and then the small difference that corresponds
00:04:51 -- to a different object, Gene Simmons.
00:04:55 -- So your brain does that and it does it very well, right.
00:04:58 -- Now your brain doesn't just do it very well,
00:05:01 -- it also does it very fast and so here's a movie that's going
00:05:06 -- to show different images at the rate of six per second.
00:05:10 -- So on your retina you only have the image for a hundred
00:05:13 -- and sixty milliseconds, yet as you'll see you'll,
00:05:15 -- you'll have no trouble recognizing the images.
00:05:21 -- So let me roll the clip.
00:05:25 -- Okay, so six images per second, let's see,
00:05:29 -- yet you'll probably have little trouble recognizing the objects
00:05:36 -- that are shown in the movie,
00:05:37 -- even though on the retina they're only there
00:05:40 -- for a very short time.
00:05:42 -- So your brain can do vision very well and can do it very fast
00:05:46 -- and so the question is, well, how does the brain do it?
00:05:51 -- And here is a monkey brain, so monkey brains are small, right.
00:05:55 -- That's why we rule the world and not the monkeys,
00:05:59 -- which is a good thing.
00:06:00 -- So it's small, which means it's faster, alright,
00:06:05 -- because signals don't have to travel that far, right.
00:06:08 -- So it's a small brain, so signals can zip around really fast.
00:06:11 -- And here in the monkey brain,
00:06:15 -- it's the same for the human brain.
00:06:16 -- The monkey brain is very similar to the human brain for vision
00:06:20 -- and so the signals travel from your eye, then to the back
00:06:26 -- of your head and then down here, so called ventral stream
00:06:33 -- because it's, it's down here
00:06:35 -- and then signals travel back up to frontal cortex.
00:06:39 -- So from here to the back, down and up.
00:06:45 -- And in the monkey that only takes a hundred
00:06:50 -- and thirty milliseconds.
00:06:52 -- Now in humans, it takes a little longer
00:06:55 -- because our brains are bigger, thankfully enough but it takes
00:06:59 -- about a hundred and fifty milliseconds in humans
00:07:02 -- for the signal to go from the eye to over here.
00:07:05 -- And here's where the decision is made, so this is where,
00:07:08 -- this is the part that makes us smarter than the monkeys, right.
00:07:10 -- It's the frontal cortex, a lot bigger
00:07:13 -- in humans than in monkeys.
00:07:14 -- But it takes about a hundred and fifty milliseconds
00:07:17 -- for the signals to travel from the eye over to here and that's
00:07:21 -- about the time the brain needs to make the decision.
00:07:24 -- So it means that the whole decision is being made
00:07:30 -- in one shot through your system, right.
00:07:33 -- So your information goes from here, back here, here,
00:07:35 -- here, and boom, you make the decision.
00:07:41 -- So, so how can the brain do that?
00:07:44 -- And that's what we've been working on,
00:07:48 -- so we have a computational model of how the brain does vision.
00:07:54 -- How the brain recognizes objects, so how we go
00:07:57 -- from the picture on your retina then to the neurons over here
00:08:03 -- that make the decisions.
00:08:04 -- And the nice thing about the model is
00:08:09 -- that there's just two operations.
00:08:10 -- Just two different things the brain is doing
00:08:13 -- but the brain is doing it over and over so the great power
00:08:16 -- of the brain is that it's doing simple things over and over.
00:08:20 -- And that lets you do
00:08:21 -- some very complex things.
00:08:23 -- So, the idea in the model is
00:08:25 -- that there's just two operations.
00:08:27 -- One is, you take simple things and you put them together
00:08:31 -- to get something more complex.
00:08:33 -- So, for instance you take spots on the retina and you line them
00:08:40 -- up to get an edge, right.
00:08:41 -- So edges are great, so here in letters we have lots of edges
00:08:45 -- and this is what your brain is doing back here
00:08:49 -- in your early visual areas.
00:08:53 -- So you take something simple and you put it together
00:08:56 -- to get something more complex.
00:08:59 -- The second operation is to increase the invariance.
00:09:03 -- So now we go from recognizing an edge over here,
00:09:10 -- just at one position to recognizing an edge at lots
00:09:13 -- of different positions.
00:09:14 -- And so that's the second operation,
00:09:16 -- so the idea being now you have neurons that like edges
00:09:20 -- of different positions and then you put them together to be able
00:09:25 -- to recognize the same edge across positions.
00:09:29 -- And the idea is that the brain is doing these simple things
00:09:33 -- over and over and that then actually lets us account
00:09:39 -- for human performance in tasks like animal detection; so,
00:09:44 -- for instance, this image here doesn't contain an animal.
00:09:48 -- So we can show that, with this model here we can match human
00:09:53 -- performance on these difficult tasks.
00:09:59 -- So, again, so first operation is increasing complexity, so,
00:10:06 -- from simple to complex.
00:10:09 -- So, here's one example, let's say you have a neuron
00:10:12 -- that likes a horizontal edge and you have a neuron
00:10:17 -- that likes an oblique edge.
00:10:20 -- And then you have a neuron that is connected to these neurons
00:10:23 -- over here so then this one only responds if both
00:10:28 -- of those guys respond.
00:10:31 -- So meaning if there's a corner, for instance, right,
00:10:34 -- so you want a horizontal bar and an oblique bar, and so
00:10:39 -- if you have a corner, it's going
00:10:41 -- to activate this neuron, right.
00:10:43 -- So, this one gets activated by horizontal bar,
00:10:46 -- this one by an oblique bar and then this one requires this one
00:10:50 -- and that one to be active so you get,
00:10:53 -- so you get a response to a corner.
00:10:56 -- So you take simple things, put them together
00:10:58 -- and you get more complex things, right.
00:11:00 -- So you can take these corners
00:11:02 -- and get more complex connectivity,
00:11:04 -- combine corners and so on and so forth.
00:11:07 -- And there's experimental evidence for that.
00:11:10 -- And the second operation then is this pooling operation, right.
00:11:17 -- So the idea is now that I have neurons
00:11:20 -- that like the same feature, so for instance,
00:11:23 -- a neuron that likes an edge, an oblique edge in this position,
00:11:28 -- another neuron that likes an oblique edge in this position,
00:11:31 -- so here, these guys over here and now what this unit does is,
00:11:37 -- it's connected to both of those.
00:11:40 -- And the response of this one is the maximum over these two guys.
00:11:44 -- So meaning, this one responds if this one or that one is active.
00:11:48 -- So in the first case we say we need both active,
00:11:53 -- and you get the corner, so more complex connectivity.
00:11:56 -- So it's the and, and in this case it's the or, right.
00:11:59 -- We say one or the other and that's enough
00:12:02 -- to activate this neuron here.
00:12:04 -- And it's called the maximum, the max, not because I'm
00:12:09 -- so vain, right; Max here, max as in max pooling, right.
00:12:12 -- But it's the maximum, so it's basically this one
00:12:17 -- or that one, so you can say it's the maximum
00:12:19 -- and that's the response.
00:12:21 -- So just two operations, increase complexity
00:12:24 -- and increase invariance, and the idea is that the brain is doing
00:12:27 -- that over and over, and there is support
00:12:30 -- for this maximum pooling now also from experimental papers.
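The two operations described above can be sketched in a few lines of code. This is only a toy illustration of the AND-like and max-pooling (OR-like) steps, not the actual model; the function names and threshold are made up.

```python
def complexity_unit(inputs, threshold=0.5):
    """AND-like 'simple to complex' step: responds only if ALL
    afferents are active, e.g. a corner unit that needs both a
    horizontal and an oblique edge. Modeled here as the minimum."""
    return min(inputs) if min(inputs) >= threshold else 0.0

def invariance_unit(inputs):
    """OR-like 'invariance' step: max pooling over units that
    prefer the same feature at different positions."""
    return max(inputs)

# A corner needs both edges to be present:
corner = complexity_unit([1.0, 0.9])     # both edges -> responds (0.9)
no_corner = complexity_unit([1.0, 0.0])  # oblique edge missing -> 0.0

# The same edge anywhere in the pool drives the pooling unit:
pooled = invariance_unit([0.0, 0.0, 0.8])  # -> 0.8
```

Stacking these two steps in alternating layers is what gives the model its gradually increasing complexity and invariance.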
00:12:39 -- So, well the question is, I can draw these boxes all day, right,
00:12:44 -- and tell you yeah, wouldn't it be nice
00:12:46 -- if the brain worked like that, right?
00:12:49 -- And the question is well,
00:12:51 -- does the brain actually work like that?
00:12:54 -- And so one domain we can look at now is faces.
00:12:58 -- So, face processing, right, so faces are everywhere, right.
00:13:02 -- They're on Mars, they're on fish sticks, right?
00:13:05 -- Faces are everywhere and faces
00:13:08 -- of course are very important, right?
00:13:09 -- So we do faces not just because they're on Mars
00:13:13 -- and on fish sticks, but because faces are key for social
00:13:16 -- cognition, right.
00:13:17 -- So we recognize people, we recognize expressions
00:13:21 -- and so they're very interesting to study as an object class
00:13:26 -- and we're all very good at it.
00:13:27 -- I mean most people are good at it so they're experts
00:13:30 -- at recognizing faces and so it's a great domain to look
00:13:35 -- at to understand how the brain recognizes objects.
00:13:38 -- It's also interesting because some folks think
00:13:43 -- that faces are special.
00:13:45 -- Right, so, that our brain does faces differently
00:13:48 -- from other objects and so if we can account for face perception,
00:13:53 -- then the idea is well, we probably have a pretty good idea
00:13:58 -- of how the brain does vision in general.
00:14:00 -- Okay, so now the question is well,
00:14:04 -- can we actually understand how the brain recognizes faces
00:14:08 -- and can we use the model that I just talked
00:14:11 -- about to link what the brain is doing to something
00:14:15 -- that the mind is doing, i.e. faces, face perception?
00:14:24 -- Okay, so one key finding from the nineties
00:14:29 -- was that there is a brain area which is about the size
00:14:33 -- of a blueberry behind your ear over here on the right side
00:14:38 -- that is active whenever you look at a face.
00:14:42 -- So what we're doing is fMRI;
00:14:44 -- I'll talk about that in a second.
00:14:48 -- So what you do is you look at how the brain responds to faces
00:14:53 -- and you look at how the brain responds to houses
00:14:57 -- and then you see which part of the brain is more active
00:14:59 -- to the faces compared to the houses.
00:15:04 -- So you just, so you subtract the two
00:15:07 -- and what you find is then there is an area here
00:15:10 -- on the right side, right here, I mean here's a [inaudible],
00:15:13 -- over here, a little bit inside obviously.
00:15:18 -- That's the so called fusiform face area.
00:15:21 -- So FFA, fusiform face area.
00:15:23 -- So the way we're looking at that is with fMRI.
00:15:27 -- So you've probably heard about that
00:15:30 -- from [inaudible] using these scanners
00:15:33 -- and now the important part in fMRI is the F. So, we're looking
00:15:41 -- at MR, so MRI; you might have had an MRI, right,
00:15:45 -- which is just like an X-ray of your brain.
00:15:48 -- So, MRI tells you something about anatomy, right.
00:15:52 -- How is the brain, what's the shape of the brain?
00:15:55 -- fMRI is functional: it tells you which parts
00:15:59 -- of the brain are active when you do certain things, right.
00:16:02 -- So, like we just saw for faces and houses,
00:16:06 -- with that simple example, which parts of the brain are active
00:16:09 -- when I look at faces and which ones are active
00:16:11 -- when I look at houses?
00:16:12 -- And then you can see which parts are now specific for faces.
00:16:16 -- Good, so an example here, we look at faces, we look at houses
00:16:21 -- and then we find okay, so this area is a lot more active
00:16:25 -- for faces than for houses.
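The subtraction logic just described can be sketched as follows, with made-up activation numbers for a few hypothetical regions (real fMRI analysis uses statistical contrast maps, not a raw difference):

```python
# Hypothetical mean activations per region (arbitrary units).
face_response  = {"FFA": 2.1, "PPA": 0.4, "V1": 1.5}
house_response = {"FFA": 0.5, "PPA": 2.0, "V1": 1.4}

# Subtract the house condition from the face condition.
contrast = {roi: face_response[roi] - house_response[roi]
            for roi in face_response}

# Regions much more active for faces than houses (threshold is arbitrary).
face_selective = [roi for roi, diff in contrast.items() if diff > 1.0]
# face_selective -> ['FFA']
```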
00:16:30 -- Good, so now we take our model and we say okay, well,
00:16:33 -- let's try to understand what's going
00:16:35 -- on in this little blueberry inside your head.
00:16:38 -- And so what we then do is we say well, in this region the FFA,
00:16:43 -- let's assume there's a lot of neurons that like faces,
00:16:48 -- so a lot that like faces
00:16:50 -- and different cells like different faces.
00:16:54 -- So now we can look at children. The idea is that
00:17:01 -- in young children, who are still
00:17:02 -- learning faces, there are just a few neurons
00:17:06 -- that like faces.
00:17:08 -- But each of these neurons responds to a lot
00:17:11 -- of faces, right, so they're not very selective.
00:17:14 -- And that's shown over here in this cartoon.
00:17:19 -- So, let's say if I had all the faces lined up over here, right,
00:17:24 -- so in a long line and then each neuron,
00:17:29 -- so let's say I have a neuron here
00:17:30 -- that responds to a lot of faces.
00:17:31 -- So here's the response, here are the faces;
00:17:34 -- then you would have very broadly tuned
00:17:37 -- neurons, meaning that a neuron would respond to a lot
00:17:40 -- of different faces, right.
00:17:41 -- For all the different faces here: I show this face,
00:17:43 -- I get this response; this face over here, that response;
00:17:45 -- this face, that response.
00:17:48 -- So each neuron responds to a lot of different faces.
00:17:53 -- So meaning it's not very selective.
00:17:55 -- If you see different faces, you're going
00:17:56 -- to have the same neuron response.
00:17:58 -- So the representation is not very selective.
00:18:01 -- On the other hand, here is the idea for the adult.
00:18:10 -- Experiments have shown it takes about fourteen
00:18:14 -- to sixteen years for children to become as good
00:18:18 -- at recognizing faces as adults.
00:18:21 -- In the adult now, you have a lot of face neurons
00:18:25 -- and these neurons are very selective.
00:18:27 -- So, remember, in children very broadly tuned, right,
00:18:31 -- they respond to a lot of faces.
00:18:33 -- Now the idea in adults is they're very selective.
00:18:37 -- So, there's each neuron responds to a few faces but then not
00:18:41 -- to other faces, right.
00:18:42 -- So here in the kid we had these broad neurons, right,
00:18:45 -- they respond to a lot of faces.
00:18:47 -- Now in the adult we have neurons that like, one likes these faces
00:18:50 -- over here, another one likes these faces over here,
00:18:53 -- so you have a lot of different neurons that are very selective.
00:18:56 -- And that's the, that's the idea here and that's,
00:19:00 -- that's what we show with these little,
00:19:02 -- little peaks here, right.
00:19:03 -- So, children, broad neurons, faces all look very similar
00:19:08 -- because these neurons aren't very selective.
00:19:10 -- They don't discriminate between the faces.
00:19:12 -- And the adult now, very selective,
00:19:15 -- so now you show two faces that are similar, they're going
00:19:18 -- to activate different neurons.
00:19:20 -- And remember the hypothesis is to be able
00:19:27 -- to discriminate faces, so let's say two faces are different,
00:19:31 -- you want them to activate different neurons.
00:19:33 -- So for the kid, then, faces look very similar; kids are worse
00:19:37 -- at telling faces apart.
00:19:40 -- For the adult, we have these very selective neurons
00:19:42 -- that then make it easier to tell faces apart.
00:19:50 -- And so what we can actually show with the,
00:19:56 -- the computational model is how selective these face neurons
00:19:59 -- have to be for adults.
00:20:00 -- Because adults are very, I mean you and I,
00:20:02 -- normally we are very good at telling faces apart.
00:20:07 -- So, what this figure shows here, it's a little bit hard to see,
00:20:12 -- but here we have a computer graphics morphing system.
00:20:19 -- So we take this face over here, and they all,
00:20:23 -- they don't have any hair, so there's nothing
00:20:26 -- that makes it easy to recognize the faces.
00:20:28 -- So you really have to pay attention to the inner part
00:20:31 -- of the face but then what we can do is we can morph
00:20:36 -- from one individual over here to another individual over here.
00:20:41 -- So these are two different people and they're all
00:20:45 -- from Germany incidentally, they're all hairless Germans
00:20:49 -- but then what we can do with the morphing system is
00:20:53 -- that we can now smoothly morph from one face to another face.
00:21:00 -- Alright, so we, we take one face
00:21:03 -- and then we can smoothly change the shape
00:21:05 -- to another face over here.
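At its core, a morph like this is an interpolation between two face representations. Here is a minimal sketch, assuming each face is stored as a vector of shape coordinates (the actual system is a full computer-graphics morphing system; this only shows the blending idea):

```python
def morph(face_a, face_b, t):
    """Blend face_a toward face_b: t=0.0 gives A, t=1.0 gives B."""
    return [(1 - t) * a + t * b for a, b in zip(face_a, face_b)]

face_a = [0.0, 10.0, 4.0]  # hypothetical shape coordinates
face_b = [2.0, 12.0, 8.0]

sixty_percent = morph(face_a, face_b, 0.6)  # 60% of the way to face B
```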
00:21:06 -- And what the computer simulations say is that neurons have
00:21:13 -- to be so selective that if you have a neuron that likes this face
00:21:16 -- over here, then it does not respond
00:21:20 -- to a face that's sixty percent different.
00:21:22 -- So over here we have zero percent so twenty, forty, sixty,
00:21:26 -- eighty, one hundred and one hundred is a different face,
00:21:28 -- right.
00:21:28 -- So, what the simulations say is that these neurons are
00:21:32 -- so selective, so remember the selectivity like this,
00:21:37 -- they're so selective that if I change my face
00:21:41 -- and go sixty percent towards another face,
00:21:44 -- they stop responding.
00:21:46 -- So the face neurons are very selective.
00:21:50 -- So meaning if I see different faces then the prediction is
00:21:55 -- that they activate different neurons as long as they're,
00:21:59 -- even, even if they're very similar.
00:22:01 -- So this face and this face would be predicted
00:22:05 -- to activate different types of neurons.
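One way to picture this selectivity is as a tuning curve over morph distance. The sketch below assumes a Gaussian shape with a width chosen so the response is essentially gone by a sixty percent morph; the real tuning width is what the simulations actually estimate.

```python
import math

def face_neuron_response(morph_percent, width=25.0):
    """Response of a neuron tuned to its preferred face (0% morph).
    Gaussian falloff; width=25 is an illustrative choice, not a
    value from the simulations."""
    return math.exp(-(morph_percent / width) ** 2)

# Response drops steeply as the face morphs away from the preferred one:
for m in (0, 20, 40, 60):
    print(m, round(face_neuron_response(m), 3))
```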
00:22:13 -- So another question is how can we test that?
00:22:16 -- So how can we use fMRI to see how selective these neurons are?
00:22:22 -- And that's the big problem, right.
00:22:24 -- So what fMRI does is basically, it shows you
00:22:31 -- where the brain gets warm, right.
00:22:33 -- when you do a certain task.
00:22:37 -- And so, think about the example
00:22:39 -- with the kids and the broad neurons and the adults
00:22:42 -- with the sharply tuned neurons. What fMRI does,
00:22:46 -- it looks at little chunks of your brain
00:22:50 -- and sees, okay, well,
00:22:53 -- how active are the neurons in there, alright.
00:22:55 -- How much activity do I have on this little chunk of the brain?
00:22:58 -- Remember the little blueberry sized region,
00:23:00 -- the little red spot.
00:23:01 -- We had an orange spot, we had the fMRI image, and the
00:23:07 -- problem is now that whether you have a few neurons in there
00:23:11 -- that are very broadly tuned and respond to all the faces,
00:23:15 -- or you have a lot of neurons and only a few of those are active
00:23:19 -- for each face, you can get the same average activation.
00:23:24 -- Right, so if it's now in the kid few neurons and each is active
00:23:28 -- to a lot of faces or in the adult, very selective neuron
00:23:33 -- and different neurons active for each face,
00:23:35 -- you can get the same average activation.
00:23:37 -- And that,
00:23:40 -- that's kind of the key point, right.
00:23:42 -- So if you just look
00:23:43 -- at how active your brain is, you don't know
00:23:48 -- whether the activation is caused by different neurons
00:23:51 -- for each face or by the same neurons responding over
00:23:54 -- and over to different faces.
00:23:56 -- And that has very different implications, right.
00:24:00 -- If you have very broad neurons, you're not very good
00:24:02 -- at discriminating faces, and we have new data that deals
00:24:06 -- with autism that shows that, right;
00:24:09 -- there you have much broader tuning.
00:24:13 -- Or if you have very selective neurons then you're well able
00:24:15 -- to discriminate faces.
00:24:17 -- But normal fMRI does not tell you that;
00:24:19 -- it just tells you, well, this area responds to faces.
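The ambiguity in the average signal can be made concrete with a toy example: a few broadly tuned neurons and many sharply tuned neurons can produce exactly the same mean activation in a voxel (the numbers are invented for illustration):

```python
def voxel_signal(responses):
    """fMRI-like measure: just the average activity in the voxel."""
    return sum(responses) / len(responses)

# 'Child-like' voxel: four broad neurons, all weakly active to any face.
broad = [0.25, 0.25, 0.25, 0.25]

# 'Adult-like' voxel: four selective neurons, one strongly active.
selective = [1.0, 0.0, 0.0, 0.0]

voxel_signal(broad) == voxel_signal(selective)  # True: both average 0.25
```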
00:24:31 -- So to illustrate that, so here's an example
00:24:34 -- and now let's look not at faces but look at motion,
00:24:37 -- so let's say we have a part of the brain,
00:24:39 -- actually there is a part of the brain where neurons care
00:24:41 -- about direction of motion.
00:24:43 -- So whether things move in one direction or things move
00:24:45 -- in a different direction.
00:24:48 -- And so in our little [inaudible],
00:24:50 -- that chunk of cortex, chunk of brain that fMRI looks at, right,
00:24:54 -- so in this one voxel,
00:24:56 -- let's assume we have some neurons
00:24:59 -- that like downward motion and other neurons
00:25:02 -- that like upward motion, right.
00:25:04 -- So we have the neurons care about direction, right,
00:25:07 -- whether it's upward or downward.
00:25:10 -- And it's similar to our face case, where we said they
00:25:13 -- like different faces, but now we use direction
00:25:15 -- because it's simple.
00:25:17 -- So here now, let's say what we show,
00:25:20 -- so on the screen we show downward motion, right,
00:25:22 -- so we show like dots moving down, right.
00:25:26 -- Then the idea is that, that stimulates all the neurons
00:25:30 -- that like downward motion.
00:25:32 -- They all become active.
00:25:35 -- So let's see now, after that I show upward motion,
00:25:41 -- so now the red neurons, the neurons that respond
00:25:44 -- to downward motion don't like the upward motion.
00:25:47 -- So they're silent.
00:25:48 -- Now the neurons that like upward motion respond.
00:25:51 -- So, if you look at the activation level, though,
00:25:56 -- in my voxel, here I have some neurons responding
00:25:59 -- and here I have some neurons responding,
00:26:01 -- and I get the same kind of activation level.
00:26:04 -- Here it's what I see over here, right?
00:26:06 -- Downward motion, upward motion, same kind of activation.
00:26:11 -- So just based on the activation I wouldn't be able to say
00:26:15 -- that okay, well these neurons actually care about,
00:26:17 -- care about direction of motion, right?
00:26:19 -- Because they get the same signal, whether they go down
00:26:21 -- or up, I don't have any difference
00:26:23 -- in my activation level.
00:26:24 -- So the question is well, what to do
00:26:31 -- and what we exploit then is a finding from monkey physiology.
00:26:37 -- If I stimulate a neuron once
00:26:42 -- and I stimulate it again a second time right afterwards,
00:26:45 -- the neuron is still recovering from the first time
00:26:48 -- and responds less the second time.
00:26:51 -- So, it's called adaptation.
00:26:53 -- So I show two stimuli in rapid succession.
00:26:57 -- So boom, boom, right like that, then the second time it's lower
00:27:01 -- because it's recovering from the first time.
00:27:03 -- And so that's exactly what we need because now
00:27:08 -- if I have different neurons that respond: say I show upward
00:27:12 -- and downward motion, and I have some neurons responding
00:27:15 -- to up and other neurons to down, and I show them
00:27:18 -- in rapid succession, I get fresh neurons both times,
00:27:21 -- so I get a strong response.
00:27:24 -- But if the neurons don't care about direction,
00:27:29 -- let's say the same neurons respond to both upward
00:27:31 -- and downward motion, then I get a low response
00:27:35 -- the second time:
00:27:37 -- I show one stimulus, then I show the second stimulus,
00:27:41 -- and it's the same neurons responding, so I get a low response.
00:27:44 -- So let's show that in the, in the graph here.
00:27:48 -- So here's our case now, we show down, down,
00:27:52 -- so here now we show downward motion,
00:27:55 -- we get these neurons responding here in red
00:27:58 -- and now we show downward motion again and it's the same neurons
00:28:02 -- that have to respond so they respond less.
00:28:04 -- See here, they're nice and red over here, strong response,
00:28:08 -- and here, this shows they respond less,
00:28:11 -- and that's what you measure here, right.
00:28:13 -- So the second time you get a lower response.
00:28:16 -- Now however if I show down and then up, so here down,
00:28:23 -- down you get a low response
00:28:25 -- because the same neurons are active in both cases.
00:28:28 -- But here now I show downward motion and then upward motion,
00:28:32 -- and so what this means then is: downward again,
00:28:37 -- these neurons as before;
00:28:38 -- then upward, and it's fresh neurons.
00:28:41 -- So fresh neurons give a strong response.
00:28:44 -- So now what we see here is I get a low response
00:28:47 -- if I show the same stimulus twice
00:28:50 -- but importantly I get a strong response
00:28:52 -- if I show two different directions.
00:28:54 -- So that means in my voxel there must be
00:28:58 -- different neurons that respond to the two directions of motion,
00:29:01 -- because I get a stronger response
00:29:02 -- when I show two different directions.
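The logic of the adaptation experiment can be sketched in a couple of lines, assuming a single suppressed response level for an adapted population (the 0.4 factor is purely illustrative, not a measured value):

```python
SUPPRESSION = 0.4  # assumed response level of a just-stimulated population

def second_response(first, second):
    """Voxel response to the second stimulus of a pair: suppressed if
    the same neural population fired for the first stimulus."""
    return SUPPRESSION if second == first else 1.0

same_direction = second_response("down", "down")     # adapted -> 0.4
different_direction = second_response("down", "up")  # fresh neurons -> 1.0
```

A stronger response to the down/up pair than to the down/down pair is what tells you the voxel contains separate populations for the two directions.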
00:29:05 -- And that's very important because it directly links
00:29:09 -- to what we talked about with faces, right.
00:29:11 -- Whether you have the same neuron responding to different kinds
00:29:14 -- of faces or whether you have different neurons responding to,
00:29:19 -- to two different faces.
00:29:23 -- So let's have a little cartoon to bring that home.
00:29:33 -- So let's say we have our cartoon with face area,
00:29:36 -- so here the neurons like different faces, right.
00:29:39 -- So, here are all different individuals
00:29:42 -- and let's say these are all neurons
00:29:44 -- that like these different people.
00:29:46 -- Now if I show a new face, let's say this one over here,
00:29:52 -- then the idea is the neurons that respond are those
00:29:57 -- that like similar faces.
00:29:59 -- So remember they're very selective;
00:30:01 -- this is now the adult, with very selective neurons.
00:30:04 -- So, the idea is that here now I see a face and only the neurons
00:30:09 -- that like similar faces respond and all the others are silent.
00:30:16 -- So now what happens if I show this face twice?
00:30:23 -- So here I show it; let's say these are all different face
00:30:27 -- neurons, so I show the first face,
00:30:30 -- I get a certain activation pattern,
00:30:32 -- like this one over here.
00:30:33 -- So, a face activation pattern.
00:30:37 -- Now if I show the same face twice,
00:30:40 -- I activate the same neurons twice,
00:30:42 -- and I get a low response, right.
00:30:44 -- Because now second time I activate the same neurons,
00:30:47 -- they're still recovering, I get a low response.
00:30:56 -- So now the question is what happens
00:30:59 -- when I show two different faces?
00:31:01 -- So these are actually two different faces;
00:31:03 -- they are very similar, though.
00:31:05 -- So I show the first face and now I show the second face that's
00:31:09 -- similar, so it's going to activate some of the same neurons
00:31:13 -- and it's going to activate some fresh neurons.
00:31:16 -- So that means the activation should go
00:31:21 -- up because I've got fresh neurons here.
00:31:23 -- And what happens now
00:31:27 -- if I make the face even more dissimilar, right?
00:31:29 -- So this one now is sixty percent different.
00:31:33 -- First face, but now the second face,
00:31:36 -- the idea is, activates all fresh neurons.
00:31:40 -- Remember, we said sixty percent is
00:31:43 -- when the first neurons stop responding,
00:31:46 -- and the idea is that other neurons respond.
00:31:49 -- So here is zero percent, complete overlap.
00:31:53 -- Then thirty percent you've got some fresh neurons,
00:31:56 -- sixty percent, all fresh neurons.
00:31:58 -- So we expect the response to gradually increase
00:32:02 -- because we've got more and more fresh neurons.
00:32:05 -- But the important part is: zero percent, thirty percent,
00:32:11 -- sixty percent, and at sixty I already have two
00:32:14 -- fresh groups of neurons,
00:32:15 -- so I get a strong response.
00:32:17 -- So now if I go even further,
00:32:20 -- to ninety percent difference,
00:32:24 -- it doesn't change things, right?
00:32:25 -- Sixty percent, ninety percent, it doesn't matter;
00:32:27 -- I already have fresh groups of neurons both times.
00:32:31 -- So the prediction is, if I look at my fMRI signal,
00:32:37 -- it's lowest here, when the two faces are the same.
00:32:42 -- Then it increases from thirty percent
00:32:45 -- to sixty percent, but there's no change from sixty to ninety,
00:32:48 -- because these neurons are very selective.
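The fresh-neuron logic behind this prediction can be sketched in a small simulation. This is not the actual analysis from the talk, just a toy model: the Gaussian tuning curves, the tuning width, and the residual response of adapted neurons are all assumed values, chosen so that the release from adaptation roughly saturates around the sixty percent mark.

```python
import numpy as np

def adapted_response(dissim, width=0.15, residual=0.2, n=401):
    """Total population response to a second face shown `dissim` away
    (in arbitrary face-space units) from an adapter face at 0."""
    prefs = np.linspace(-2, 2, n)                      # preferred faces
    first = np.exp(-(0.0 - prefs) ** 2 / (2 * width ** 2))
    second = np.exp(-(dissim - prefs) ** 2 / (2 * width ** 2))
    # Neurons strongly driven by the first face are still recovering,
    # so their response to the second face is scaled down; fresh
    # neurons respond at full strength.
    return float(np.sum(second * (residual + (1 - residual) * (1 - first))))

for d in (0.0, 0.3, 0.6, 0.9):
    print(f"{int(d * 100):2d}% different -> response {adapted_response(d):.1f}")
```

With narrow tuning, the summed response rises from 0% to 30% to 60% dissimilarity and then levels off, because by 60% the second face already recruits an entirely fresh group of neurons.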
00:32:54 -- So now we can test that, and so we test it:
00:32:57 -- we show people pairs of faces in the scanner.
00:33:00 -- So we show pairs that are thirty percent different,
00:33:03 -- sixty percent different and ninety percent different,
00:33:06 -- and what they're actually doing in the scanner is
00:33:10 -- they're looking for this face.
00:33:11 -- So they don't care how similar the faces are.
00:33:13 -- The only thing they care about is whether one of the two faces
00:33:17 -- that they see in each pair is this target face.
00:33:20 -- And we show them very rapidly:
00:33:23 -- we show one face for a short three milliseconds,
00:33:28 -- a short break of four milliseconds, and then the second one.
00:33:31 -- Remember, we go bam, bam; we show the two very fast,
00:33:35 -- and then we want to see, well, our prediction is: at thirty,
00:33:41 -- a lot of overlap, so a low signal.
00:33:45 -- At sixty, a stronger signal,
00:33:49 -- but at ninety there's no further change, right?
00:33:51 -- So we expect the response to increase and then level off,
00:33:54 -- and that's actually what we see.
00:33:56 -- So here's time, here's the response, in that
00:34:00 -- fusiform face area over here.
00:34:02 -- And we find, here in green, the response
00:34:06 -- to thirty percent difference, right,
00:34:08 -- like this:
00:34:10 -- we get a certain response; then at M6
00:34:13 -- we get a higher response, that's the red one, right.
00:34:15 -- So we go up from thirty to sixty.
00:34:20 -- Now the question is what happens with ninety.
00:34:21 -- Do we still get an increase? We predict no change,
00:34:25 -- because we already activated fresh neurons,
00:34:27 -- because the neurons are so selective.
00:34:29 -- And that's what we find: if you look at the same plot
00:34:34 -- over here, it's the M9 curve, so here are three,
00:34:38 -- six and nine, and six and nine are very similar.
00:34:41 -- The response does not keep increasing with difference.
00:34:43 -- And again, it's different from what we find in autism,
00:34:47 -- where some individuals that are poor
00:34:48 -- with faces still show an increase from M6 to M9.
00:35:01 -- So this is support for the idea that
00:35:04 -- in this area we have neurons that are very selective
00:35:06 -- for faces, and even similar faces can activate different neurons,
00:35:11 -- and that's what lets us recognize
00:35:14 -- or distinguish different faces,
00:35:15 -- even very similar faces.
00:35:17 -- And that's the prediction, right? So the prediction is:
00:35:24 -- if I show you two faces, this one or that one,
00:35:29 -- and I ask you, are they the same or different,
00:35:33 -- then the prediction is, well,
00:35:35 -- if they activate different neurons you should easily say
00:35:38 -- they are different.
00:35:39 -- If they activate the same neurons, they should look very
00:35:42 -- similar, and you should have a hard time.
00:35:44 -- And that's what we find. So here is, now, asking people
00:35:50 -- after the scan to actually discriminate the faces,
00:35:53 -- and we find the increase from M3 to M6, and then
00:35:57 -- at ninety percent you don't have a significant
00:36:00 -- increase anymore beyond that.
00:36:02 -- So the idea is you see two faces,
00:36:05 -- if they activate different groups
00:36:06 -- of neurons you can discriminate them.
00:36:09 -- If they activate the same neurons,
00:36:11 -- you can't discriminate them.
00:36:13 -- And so we have a very nice match
00:36:14 -- of the behavioral curve with the fMRI curve.
00:36:28 -- Okay, so, that's very interesting, because all
00:36:33 -- that was based on saying, well, faces are
00:36:35 -- like any other object we look at:
00:36:38 -- it's just that with all our experience with them
00:36:40 -- we have these very finely tuned, very selective neurons
00:36:44 -- that then lead to this activation over here,
00:36:49 -- this selective area, that then lets us discriminate faces
00:36:52 -- in real life.
00:36:55 -- So now we're going to skip a part in between,
00:37:00 -- but no problem, we'll just go through it really fast
00:37:04 -- and skip all that.
00:37:08 -- Okay, so now we're going to jump to reading.
00:37:12 -- Reading is a similar example where we have
00:37:15 -- to recognize objects and we have to learn to do it, right.
00:37:17 -- Faces are a little harder; I mean, you can say, well, faces we
00:37:21 -- learn implicitly, because we see lots of faces and we have
00:37:24 -- to tell people apart, right, who's who.
00:37:27 -- But reading is another example, and reading is very interesting
00:37:33 -- because it's much more controlled, right.
00:37:36 -- We have an idea of how people learn to read,
00:37:39 -- but you have the same kinds of challenges, right.
00:37:41 -- So, we talked about invariance, we talked about specificity,
00:37:46 -- and with reading you've got the same challenge, right.
00:37:48 -- So you remember Alice Cooper:
00:37:50 -- lots of different images can be Alice Cooper.
00:37:53 -- Same in reading, right.
00:37:54 -- So this word is "invariance", it's just written
00:37:58 -- in a weird font, right.
00:37:59 -- I can write "invariance"
00:38:02 -- in many different ways and you can still recognize it as "invariance".
00:38:07 -- At the same time I can have words like "farm" and "form", for instance,
00:38:12 -- that are very similar; there's only a tiny difference
00:38:15 -- between the o and the a, and yet you realize
00:38:19 -- they're two very different kinds of objects.
00:38:22 -- So that makes another great example to look at,
00:38:25 -- okay well how does experience now shape our brain, right?
00:38:29 -- So how does learning to read shape our brain?
00:38:32 -- What is the representation for, for reading?
00:38:36 -- So what we predict is, so we have this face area, right,
00:38:39 -- where neurons like faces, now, what do we have
00:38:43 -- for the case of words?
00:38:50 -- And we know there are several different brain areas involved
00:38:54 -- in reading, and the area that I want to focus on is here:
00:38:59 -- again, a very similar location, actually, as the blueberry
00:39:03 -- for faces, but now it's on the left side.
00:39:07 -- So it's actually surprisingly similar,
00:39:10 -- and there are some reasons for that,
00:39:11 -- but on the left side you have this
00:39:14 -- so-called visual word form area:
00:39:18 -- a part of your brain that's selectively active
00:39:22 -- when you look at words.
00:39:24 -- So it's active when you look at words but not when you look
00:39:26 -- at faces, for instance,
00:39:27 -- or much less so for faces.
00:39:32 -- So the question is, what's going on
00:39:35 -- in this visual word form area?
00:39:37 -- And so we, and other people, have come up with ideas of, well,
00:39:44 -- how can you represent words?
00:39:45 -- And it's the same idea as for faces to some extent:
00:39:49 -- you start on the retina and then you go over here,
00:39:52 -- to the back of your head.
00:39:53 -- So you go to edges and so on and so forth,
00:39:56 -- but the idea is you slowly learn representations
00:39:59 -- that are more optimized for words, right.
00:40:01 -- So for faces, again, you start with spots of light,
00:40:04 -- with edges, but then you put them together into something
00:40:06 -- that looks more face-like, and the idea for words is,
00:40:10 -- well, you then have something like letters,
00:40:12 -- and then you have combinations of letters, and so on and so forth.
00:40:16 -- And so the current theory, or at least until about a year ago,
00:40:19 -- and it's now a nicely invigorated discussion,
00:40:24 -- the idea is that in this visual word form area
00:40:26 -- the representation is pre-lexical,
00:40:29 -- meaning the neurons care about combinations of letters
00:40:36 -- but they're not selective for whole words.
00:40:38 -- But the problem is, we obviously
00:40:42 -- can recognize specific words, right? I mean,
00:40:46 -- we're sensitive
00:40:47 -- to very small differences, and the question is, well,
00:40:51 -- if you don't have the words in the VWFA, then
00:40:53 -- where do we have them?
00:40:54 -- Right, where do you have the knowledge of the words?
00:41:00 -- So, I mean, where is the lexicon, right,
00:41:04 -- where do you have the little dictionary in your head?
00:41:06 -- And so our hypothesis was, well,
00:41:11 -- and, sorry,
00:41:14 -- I have to backtrack.
00:41:15 -- So that was based on: when you look at this VWFA,
00:41:18 -- the visual word form area, you find that it responds strongly
00:41:22 -- to words but also to pseudowords.
00:41:23 -- Pseudowords are pronounceable non-words, right,
00:41:27 -- letter combinations that you can pronounce
00:41:29 -- but that aren't real words.
00:41:31 -- So our hypothesis was, well,
00:41:36 -- let's say we have a dictionary here on the left,
00:41:38 -- that the VWFA actually is a dictionary
00:41:41 -- where you have a certain memory. We had this for faces,
00:41:43 -- right: we have
00:41:44 -- selective neurons for different faces.
00:41:47 -- Now the idea is that in the VWFA we have neurons that are
00:41:51 -- so selective that different neurons respond to different words.
00:41:56 -- So you have a neuron that likes "farm", right: F, A, R, M.
00:41:58 -- You have another neuron that likes "form",
00:42:02 -- another that likes "firm", and then "belt" or whatever,
00:42:05 -- so you have different neurons that like different words.
00:42:07 -- And so then if you read the word "farm",
00:42:12 -- that really excites the farm neuron.
00:42:14 -- That's how you recognize it: oh, it's "farm".
00:42:17 -- And the farm neuron is so selective, right, so now instead
00:42:21 -- of faces here you have different words,
00:42:24 -- and the farm neuron only likes "farm"; it does not like "form",
00:42:26 -- it does not like "belt", right.
00:42:27 -- So here, you read the word "farm", it only activates farm,
00:42:32 -- and the other neurons don't respond,
00:42:35 -- or only very little. I see the word "belt",
00:42:39 -- I activate the belt neuron and the other neurons don't respond.
00:42:44 -- But if I see a pseudoword like "furm" here, F, U, R, M, right,
00:42:48 -- which is a pseudoword, you can pronounce it but
00:42:51 -- it doesn't have any meaning,
00:42:52 -- it's not a real word, then the idea is, well,
00:42:56 -- it looks a little similar to these other words here,
00:42:59 -- to "farm", "form" and "firm", for which you have neurons,
00:43:04 -- and so it might cause some activation
00:43:06 -- of these neurons here, too,
00:43:08 -- because these neurons have learned to discriminate "farm"
00:43:11 -- from "form", but they haven't seen this word before.
00:43:13 -- So the idea is that they still show some response,
00:43:15 -- because they haven't learned
00:43:18 -- to discriminate "farm" from "furm";
00:43:20 -- you haven't seen that before.
00:43:22 -- So there might still be some response.
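The "farm, form, firm versus furm" idea can be made concrete with a toy single-neuron rule. This is purely illustrative, not anything measured: the 0.2 residual response and the one-changed-letter neighborhood are made-up assumptions standing in for "weak response to an unfamiliar orthographic neighbor".

```python
def neuron_activation(preferred, stimulus, lexicon):
    """Toy word-selective neuron: full response to its own word, none
    to other learned words (it was trained to tell them apart), and a
    weak residual response to unfamiliar orthographic neighbors."""
    if stimulus == preferred:
        return 1.0
    if stimulus in lexicon:
        return 0.0            # learned to discriminate known words
    shared = sum(a == b for a, b in zip(preferred, stimulus))
    return 0.2 if shared >= len(preferred) - 1 else 0.0

lexicon = {"farm", "form", "firm", "belt"}
for stim in ("farm", "belt", "furm"):
    acts = {w: neuron_activation(w, stim, lexicon) for w in sorted(lexicon)}
    print(stim, acts)
# "farm" only excites the farm neuron; the pseudoword "furm" weakly
# excites the farm, form and firm neurons, but not belt.
```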
00:43:25 -- And so now if you look at activation levels,
00:43:32 -- again the problem with the activation level is:
00:43:36 -- if I look at the response to words,
00:43:40 -- I might get a certain activation
00:43:41 -- because I activate different neurons for different words.
00:43:44 -- And if I look at pseudowords,
00:43:45 -- I might get activation just because you have these
00:43:50 -- neurons that like these different words
00:43:51 -- and they still respond a little bit to these pseudowords.
00:43:55 -- And so again, from the activation level you wouldn't be
00:43:57 -- able to distinguish that, and that's what you find in a lot
00:44:01 -- of imaging studies: that, yes, words
00:44:04 -- and pseudowords give you the same kind of activation.
00:44:07 -- But again, our hypothesis is, well, we think that might be
00:44:12 -- because you actually have a dictionary
00:44:13 -- and you just get these little bits of response here.
00:44:16 -- All right, so our prediction is:
00:44:17 -- in your brain you have a little dictionary, right.
00:44:19 -- So for each word that you've learned, you have neurons
00:44:23 -- that respond to just this word.
00:44:25 -- So you say, well, that's a crazy idea, all right,
00:44:29 -- but interestingly enough there is support
00:44:33 -- for this kind of craziness.
00:44:36 -- So there was a nice paper a couple of years ago
00:44:40 -- that found a Jennifer Aniston neuron, right,
00:44:43 -- in a different brain area, but it was a neuron
00:44:47 -- that was active, and unfortunately it's hard
00:44:48 -- to see here, but these are all pictures of Jennifer Aniston.
00:44:51 -- And here's the response of the neuron,
00:44:55 -- so the more black you see, the stronger
00:44:58 -- the response.
00:44:59 -- So this neuron responds somewhat
00:45:02 -- to pictures of Jennifer Aniston,
00:45:04 -- even very different pictures.
00:45:06 -- It does not respond to Jennifer Aniston with Brad Pitt,
00:45:13 -- and it does not respond to a lot of other images, right,
00:45:16 -- other people, other celebrities. It does not respond
00:45:19 -- to them; it only responds
00:45:20 -- to Jennifer Aniston without Brad Pitt.
00:45:26 -- And similarly, it's been found that when monkeys are trained
00:45:33 -- to recognize paper clips, like these images
00:45:36 -- over here, they develop neurons, again in this region
00:45:44 -- over here, that respond to them.
00:45:47 -- So here the monkey was trained
00:45:48 -- to recognize this paper clip over here.
00:45:51 -- Then afterwards the researchers went
00:45:53 -- into the brain and found neurons that responded
00:45:56 -- to what the monkey was trained on, over here, but did not respond
00:46:02 -- to these other objects, even though they are very similar,
00:46:05 -- right.
00:46:05 -- Here you see very similar objects to this one,
00:46:08 -- but the neuron does not respond.
00:46:10 -- So the monkey developed neurons that responded exactly
00:46:14 -- to the object the monkey was trained on, but not the others.
00:46:17 -- So now we wanted to test this.
00:46:23 -- So how can we test whether we have a dictionary
00:46:27 -- in our head? What we did now is the same idea,
00:46:30 -- presenting two stimuli in rapid succession,
00:46:33 -- but now we don't show two different faces,
00:46:35 -- we show two different words.
00:46:37 -- And so what we can do is
00:46:43 -- present the same word twice, so "boat",
00:46:47 -- "boat", or we can change the word;
00:46:49 -- remember, with faces we made them [inaudible] different,
00:46:52 -- and with words we can change one letter and go from "boat" to "coat".
00:47:00 -- Or we can change the whole word,
00:47:02 -- go from "boat" to "fish".
00:47:04 -- So here it's just one little difference, and here,
00:47:09 -- with "boat" and "fish", all the letters are different.
00:47:12 -- And we can do the same thing for pseudowords, so
00:47:17 -- here are our pseudowords.
00:47:18 -- We have the pseudoword "soat", which is not a real word,
00:47:20 -- in case you were wondering, right.
00:47:23 -- We can show "soat", "soat", or we can change one letter, "soat",
00:47:27 -- "poat", or we can show two different ones, "soat" and "hime".
00:47:33 -- So we can do the same thing, because we want
00:47:36 -- to say there's a representation that's
00:47:39 -- very selective for real words but not for pseudowords.
00:47:42 -- And now we can play the same game as before:
00:47:46 -- we can look at the activation in these different conditions
00:47:48 -- and see, okay, how does this brain area respond
00:47:51 -- if I show these different pairs?
00:47:53 -- So what do we expect?
00:48:01 -- So if indeed this brain area over here, the visual word
00:48:07 -- form area, contained neurons that just care
00:48:10 -- about letter combinations, and they don't care about real words
00:48:12 -- but just like trigrams, combinations of three letters
00:48:16 -- or two letters or whatnot,
00:48:19 -- then what would you expect for the different conditions?
00:48:21 -- Again we have same, one letter different,
00:48:25 -- and totally different, no letters the same.
00:48:29 -- Then we would expect the lowest activation for same,
00:48:31 -- because you activate the same neurons twice, right,
00:48:34 -- lowest activation, as before.
00:48:36 -- With one letter different, well, it's a little bit different, so
00:48:39 -- for one letter different
00:48:44 -- you would expect
00:48:45 -- a little increase in the signal,
00:48:47 -- because you have some fresh neurons,
00:48:49 -- and then for totally different you would expect the highest response,
00:48:52 -- because now you have totally different neurons active.
00:48:59 -- So that's the prediction if this area
00:49:02 -- doesn't care about real words,
00:49:03 -- if it just likes combinations of letters.
00:49:10 -- And that would be the same
00:49:12 -- for real words and pseudowords,
00:49:16 -- because it only cares about letter combinations;
00:49:17 -- it doesn't care if it's a real word or a pseudoword.
00:49:19 -- You would expect in both cases
00:49:21 -- to have this gradual increase in responses.
00:49:24 -- But what if you actually have a dictionary?
00:49:27 -- If you actually have a representation of neurons
00:49:29 -- that like whole words, and only real words?
00:49:33 -- Then, again, you would expect the lowest response for same,
00:49:37 -- because again you activate the same neurons twice,
00:49:40 -- but now, even if I change one letter, so "farm" to "form",
00:49:43 -- the prediction is that you activate different neurons,
00:49:46 -- because the neurons like one word and they don't
00:49:48 -- like another word, right.
00:49:49 -- So "farm" and "form": you would expect one letter change,
00:49:54 -- boom, and you already get a strong response.
00:49:56 -- And it would be the same as if I changed all the letters;
00:49:58 -- it's like the faces, right?
00:49:59 -- There, sixty to ninety percent, doesn't matter, same signal.
00:50:03 -- Words: one letter or all letters different,
00:50:06 -- doesn't matter, same signal.
00:50:07 -- So that's if we have a dictionary for real words.
00:50:11 -- But for pseudowords the idea would be, well,
00:50:13 -- let's say I show "soat",
00:50:16 -- and it activates, a little bit, the neurons for similar real words,
00:50:19 -- "boat", "coat" and so on.
00:50:24 -- So you activate these a little
00:50:29 -- bit, and then I change the pseudoword
00:50:32 -- and I might activate other neurons,
00:50:35 -- but still some of the same ones,
00:50:38 -- so I might get this kind of gradual recovery.
00:50:41 -- But the important part is
00:50:42 -- this part over here, for the real words.
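The two competing predictions, pre-lexical letter fragments versus a whole-word dictionary, can be summarized in a hedged sketch. The release-from-adaptation values here are schematic numbers between 0 and 1, not modeled fMRI signal, and the tiny lexicon is just the example words from the talk.

```python
def prelexical_release(w1, w2):
    """Pre-lexical account: neurons code letter fragments, so the
    release from adaptation grows gradually with changed letters."""
    changed = sum(a != b for a, b in zip(w1, w2))
    return changed / len(w1)        # 0 = full adaptation, 1 = all fresh

def lexical_release(w1, w2, lexicon):
    """Dictionary account: for two known words, any change at all
    recruits a fully fresh word neuron; pseudowords, which have no
    dedicated neurons, fall back to the gradual fragment code."""
    if w1 in lexicon and w2 in lexicon:
        return 0.0 if w1 == w2 else 1.0
    return prelexical_release(w1, w2)

lexicon = {"boat", "coat", "fish"}
print(lexical_release("boat", "boat", lexicon))   # same word       -> 0.0
print(lexical_release("boat", "coat", lexicon))   # one letter      -> 1.0
print(lexical_release("boat", "fish", lexicon))   # all letters     -> 1.0
print(lexical_release("soat", "poat", lexicon))   # pseudoword pair -> 0.25
```

Under the dictionary account, real words jump straight to full release after a one-letter change, while pseudowords climb gradually, which is exactly the dissociation the experiment looks for.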
00:50:44 -- So now, what do we get?
00:50:48 -- Indeed, what we find is, over here: same,
00:50:53 -- one letter different, all different.
00:50:55 -- What we find is, if I change just one letter
00:50:59 -- or all the letters, I get the same increase in the response.
00:51:03 -- So [inaudible], I change one letter, boom,
00:51:05 -- I activate different neurons.
00:51:07 -- Compatible with the idea that I have a dictionary in my head,
00:51:09 -- right: neurons like one word,
00:51:12 -- but if you have a different word you activate different neurons.
00:51:16 -- And it's not just that the area is very selective overall, right,
00:51:21 -- that any small change activates
00:51:23 -- different neurons, because for pseudowords you get this
00:51:26 -- gradual increase.
00:51:28 -- Meaning that you have a representation that's
00:51:29 -- specialized for the real words, very selective:
00:51:33 -- you change one letter, boom, different words. But not
00:51:36 -- for the pseudowords, because you haven't trained
00:51:38 -- up this representation.
00:51:40 -- You could ask, well, maybe it's because of the semantic task;
00:51:45 -- the task that they're doing is they have
00:51:47 -- to push a button whenever they see a word that's a food
00:51:49 -- or a vegetable.
00:51:51 -- So you could say, well,
00:51:52 -- maybe for the real words they've learned a meaning,
00:51:55 -- and so they realize that these are different meanings,
00:51:58 -- so you get a strong response
00:51:59 -- because it's the meaning that's represented.
00:52:01 -- I'm not going to go into detail, but we can change that
00:52:06 -- and have people look for letters instead:
00:52:10 -- is there an A, B, C or X, Y, Z in the word?
00:52:12 -- Nothing about the meaning; you're just looking
00:52:14 -- for letter combinations.
00:52:16 -- You can run the same experiment as before,
00:52:18 -- so people now push a button not for food or vegetable but just
00:52:22 -- when there's an A, B, C or X, Y, Z. And what we find, for lack
00:52:28 -- of time I'm not going to go into detail,
00:52:31 -- is basically the exact same result, right.
00:52:32 -- For real words, same is low, and then boom,
00:52:35 -- one letter different gives the same response as all different.
00:52:38 -- Pseudowords: gradual increase.
00:52:40 -- So now you could say, maybe it's again
00:52:45 -- that what these neurons care about is
00:52:49 -- actually the semantics,
00:52:52 -- and the task doesn't really address
00:52:54 -- whether the neurons just care about meaning, right.
00:52:56 -- Pseudowords have no meaning,
00:52:58 -- real words have a meaning.
00:53:01 -- So what we did was another experiment
00:53:03 -- where, same as before, we have "arm", one letter different,
00:53:07 -- "arm" to "art", and all different, "arm" to "end",
00:53:09 -- and then we have a condition that's semantically related, right:
00:53:13 -- we have "arm" and we have "leg",
00:53:15 -- which are semantically related, right.
00:53:20 -- And so we do the same as before, right,
00:53:23 -- now just with this one extra condition, and
00:53:28 -- what we find in the visual word form area, again over here:
00:53:33 -- same is low, one letter different, as before, is high,
00:53:38 -- and, importantly, there's no difference between
00:53:41 -- all different and semantically related.
00:53:43 -- So here it's "arm" and "boat", and here it's "arm" and "leg":
00:53:48 -- there's no difference.
00:53:49 -- Meaning it doesn't matter whether the two words are
00:53:52 -- related or not; all this area cares about is orthography.
00:53:57 -- So it doesn't care about the meaning,
00:53:59 -- it only cares about orthography:
00:54:00 -- are they spelled the same or not?
00:54:01 -- And we have control analyses
00:54:03 -- that we don't have time to talk about.
00:54:08 -- But now, what about the semantics? That's,
00:54:11 -- of course, very interesting:
00:54:12 -- where's the meaning?
00:54:14 -- Right? The dictionary just tells you how to spell it,
00:54:16 -- but where's the meaning?
00:54:18 -- And so we can look at where
00:54:19 -- in the brain you actually have a difference
00:54:21 -- between related and different.
00:54:24 -- The idea being that if you have neurons that care
00:54:27 -- about meaning, then if you have two words that are related
00:54:32 -- in meaning, you would expect an overlap, right.
00:54:34 -- So you have "arm", "arm" and "leg", and then you have an
00:54:37 -- overlap in the meaning.
00:54:38 -- If you have neurons that care about meaning,
00:54:40 -- you would expect some kind of overlap.
00:54:42 -- But if you have "arm" and "boat", you would expect to activate
00:54:44 -- two different concepts.
00:54:46 -- So we can ask: where in the brain do I get less activation
00:54:51 -- for semantically related words
00:54:53 -- than for different concepts?
00:54:56 -- And we find several interesting areas,
00:55:01 -- in particular the left inferior frontal gyrus, which, think
00:55:06 -- of it as, you kind of go in here, in your brain.
00:55:10 -- And this is an area that's traditionally known
00:55:15 -- to be involved in semantic processing.
00:55:17 -- So that's interesting, because you want to connect words
00:55:22 -- to their meaning, right.
00:55:22 -- We have learned words, we have learned objects,
00:55:25 -- we have learned meanings, but now we also want
00:55:27 -- to connect our dictionary to the meaning of these words.
00:55:30 -- So now we can ask, okay,
00:55:33 -- how are these areas connected?
00:55:35 -- And that's the last of the data I want to talk
00:55:37 -- about. So we have our VWFA, our dictionary we just talked
00:55:42 -- about, and we have our IFG,
00:55:44 -- the inferior frontal gyrus.
00:55:49 -- So here you have different slices; this is
00:55:52 -- kind of low here and this is higher, well, actually,
00:55:55 -- here you've got the cerebellum, so,
00:55:57 -- different cuts, but the idea is: here in the back
00:55:59 -- of the brain there is the word form area,
00:56:02 -- here in the front, semantic processing.
00:56:04 -- So here you have the shape,
00:56:07 -- here you have the meaning.
00:56:08 -- And you can ask, well, how are these areas connected?
00:56:13 -- And what we find is, for real words,
00:56:14 -- there is a very strong connection
00:56:16 -- of the dictionary to the meaning.
00:56:18 -- So we connect the orthographic representation
00:56:23 -- to the corresponding meaning here.
00:56:25 -- However, for pseudowords we don't find any
00:56:29 -- significant connections.
00:56:30 -- So if I see a pseudoword, there is no connection
00:56:33 -- to semantics here, because it has no meaning.
00:56:35 -- So that makes a lot of sense: you have your dictionary,
00:56:43 -- where you have learned different words
00:56:44 -- but you don't have any meaning in there, and you connect it
00:56:47 -- to an area that does code the meaning.
00:56:50 -- Good. So just to wrap it up: what we have
00:56:56 -- for reading now is evidence that, again, learning
00:57:01 -- builds a representation here,
00:57:03 -- but now the neurons are selective not for
00:57:05 -- different faces but for words;
00:57:07 -- they are highly selective.
00:57:08 -- So different neurons like different words,
00:57:11 -- and those then provide input, or are connected, to semantic areas
00:57:16 -- that care about the meaning.
00:57:17 -- So you learn semantics, you learn,
00:57:20 -- well, this is an arm and this is a leg,
00:57:22 -- and then you have this knowledge here.
00:57:24 -- And then you learn the words that go with it,
00:57:26 -- and then you hook up the two areas.
00:57:28 -- And so again, support for this experience-driven learning,
00:57:35 -- and now showing the parallels: for faces,
00:57:39 -- we learn faces, we have to discriminate the faces,
00:57:42 -- we learn representations for faces that are very selective,
00:57:44 -- and same thing for words:
00:57:46 -- a representation for the orthography
00:57:48 -- that we then connect to meaning.
00:57:50 -- Okay, so just to wrap it up.
00:57:53 -- Object recognition is very important, right?
00:57:55 -- We do it all the time. Actually, this morning
00:57:59 -- I talked to my kids, right, and I told them, well, I won't be there
00:58:02 -- for dinner because I'm giving this talk and I'm going
00:58:05 -- to be talking about how we see.
00:58:09 -- And they said, oh, that's easy, your eyes; seeing is easy.
00:58:12 -- And it is easy, and it's because
00:58:16 -- there are a lot of areas in your brain
00:58:20 -- that are trained up by experience and then make vision easy,
00:58:24 -- right, because you have learned these representations
00:58:27 -- that are very selective for the objects you care about.
00:58:31 -- And it's important to then have the simple model
00:58:35 -- that says, well, actually, object recognition can be
00:58:38 -- that easy, can be that simple; there's nothing special
00:58:41 -- about whether it's a word or a face, right.
00:58:44 -- We want to recognize the object.
00:58:46 -- We want to be able to recognize it even if it looks different
00:58:49 -- but we know it's the same object.
00:58:51 -- For words, it can be different fonts;
00:58:53 -- for faces, it could be different lighting,
00:58:56 -- different expressions, and so on and so forth.
00:58:59 -- So it's the same thing: we're just going to learn the representations.
00:59:02 -- And so it's very nice, meaning there's one
00:59:05 -- overarching explanation for these different scenarios
00:59:08 -- that might also apply to other objects.
00:59:13 -- For instance, and that's something we hope to get
00:59:16 -- into, if you have representations
00:59:17 -- for ASL, for instance, right,
00:59:19 -- then it's the same challenge: you have
00:59:21 -- to learn representations and hook them up to meaning, right.
00:59:25 -- And so that's something we're very interested in right now:
00:59:29 -- if you have learned meanings for objects, how would you then hook
00:59:33 -- that up when you learn new ways of referring
00:59:35 -- to the same meanings, to the same objects?
00:59:38 -- How do you connect the two?
00:59:39 -- How does the brain do that?
00:59:40 -- Okay, so, that's it, and thanks to the folks
00:59:44 -- that actually did the work and thanks
00:59:47 -- to the people that paid for it.
00:59:49 -- Thanks for your attention.
00:59:52 -- [ background noise ]
01:00:02 -- Thank you very much for your presentation.
01:00:04 -- We really enjoyed it.
01:00:05 -- We're going to open it up for questions;
01:00:09 -- I'm sure you feel a little overwhelmed, right,
01:00:11 -- there was a lot of information, but come on up.
01:00:14 -- Hi, I'm wondering about facial expressions and whether we see
01:00:21 -- that children don't discriminate as much
01:00:26 -- and adults have more ability to discriminate.
01:00:29 -- As a teacher of American Sign Language, we
01:00:36 -- look at American Sign Language, and the grammar
01:00:39 -- of the language is on the face.
01:00:43 -- So there might be instances
01:00:44 -- where children may not notice the eyebrow raise as much,
01:00:50 -- or the eyebrow frown, and do you have to emphasize that more
01:00:55 -- when you're teaching sign language to children?
01:00:58 -- I don't know what my point is about that,
01:01:04 -- but I'm just wondering why it is also that adults struggle
01:01:08 -- with that as well, when it seems that children, most of the time,
01:01:11 -- will acquire language more easily.
01:01:14 -- We see that children acquire language more easily, but adults
01:01:17 -- that are learning it as a second language have a harder time
01:01:20 -- distinguishing that facial grammar.
01:01:22 -- Max Rizenhoover: That's a,
01:01:22 -- that's a very interesting question and I think there,
01:01:26 -- there are two things at work.
01:01:28 -- I mean, one is that there's just more plasticity the younger you
01:01:33 -- are, so it's easier to, to learn a representation and one
01:01:37 -- of the reasons is when you're an adult you already have all
01:01:41 -- these, these neurons there, right.
01:01:43 -- And it's now if I tell you, if you pay attention
01:01:46 -- to the eyebrows for instance,
01:01:48 -- you have to change this whole representation.
01:01:50 -- You have to optimize the whole representation.
01:01:53 -- Whereas if you have kids, the idea is
01:01:55 -- you have a few neurons there that are broadly tuned,
01:01:58 -- so it is easier to change that representation
01:02:00 -- to emphasize certain aspects.
01:02:02 -- Alright, and that's also what we see:
01:02:07 -- adults focus very much on
01:02:13 -- the eye region, for instance,
01:02:16 -- whereas in kids there's less of a bias towards the eyes, so
01:02:21 -- it comes from learning which parts of the face are relevant.
01:02:25 -- But it's an interesting question: if you
01:02:29 -- take someone who signs, right, who
01:02:32 -- pays attention to some parts of the face more than people
01:02:36 -- who don't sign, then the question is, do you have a more active
01:02:39 -- or more selective representation in this fusiform face area
01:02:44 -- for those aspects of the face.
01:02:46 -- And that's something that would be very interesting to look
01:02:48 -- at, because the idea is that experience drives this, and now,
01:02:52 -- if through your experience you pay a lot of attention
01:02:55 -- to these parts that are important,
01:02:57 -- the idea would be that that then drives your representation
01:02:59 -- to be more selective for these aspects of the face.
01:03:08 -- As a follow-up, if that's the case: both those who grow
01:03:14 -- up with American Sign Language and those who do not
01:03:24 -- see facial features; however, when the latter take
01:03:26 -- American Sign Language classes as adults,
01:03:28 -- they see the same facial grammar but
01:03:30 -- it has a different meaning?
01:03:31 -- So that's the part they have a hard time with?
01:03:33 -- That's kind of where you're going with that.
01:03:35 -- Max Rizenhoover: Well, I think the perceptual part
01:03:36 -- Something for me to think about more, for sure.
01:03:38 -- Max Rizenhoover: So, for the perceptual part, just
01:03:40 -- seeing: you have these subtle cues
01:03:43 -- that I don't even pick up on,
01:03:44 -- because my representation might be selective
01:03:47 -- for different individuals but might not be as selective
01:03:50 -- for, say, changes in the eyebrows, right.
01:03:52 -- Because it's not something that I've paid attention to,
01:03:54 -- and we know that you learn something more
01:03:59 -- when you pay attention to it.
01:04:00 -- There are some very interesting results that
01:04:01 -- it's not just experience but actually engaging
01:04:05 -- with it that's important.
01:04:07 -- And so my representation probably looks different
01:04:10 -- than yours, and yours might be more selective
01:04:12 -- for these aspects that I haven't paid much attention to,
01:04:15 -- and so it's a perceptual difference.
01:04:18 -- Alright, it then makes it hard to learn the meaning
01:04:21 -- because of your perceptual representation; so to me it's
01:04:26 -- like in the kids:
01:04:28 -- across these different eyebrow features,
01:04:30 -- the face might just look very, very similar
01:04:32 -- because the representation is so broadly tuned.
01:04:34 -- Whereas your representation is much more
01:04:37 -- selective for these different states of the eyebrow.
01:04:41 -- So to you it actually looks different,
01:04:43 -- whereas to me it might look very similar.
01:04:45 -- And so actually, as an example, there's this so-called other-race effect
01:04:50 -- for faces, where people have compared how well I can
01:04:55 -- discriminate people of my own race
01:04:56 -- versus a different race, and so you had immigrants,
01:05:00 -- for instance, who come to the US from, say, Asian countries.
01:05:04 -- And they initially are much worse
01:05:07 -- at discriminating Caucasian faces than Asian faces,
01:05:11 -- but after a few years in the country they get better
01:05:14 -- at discriminating Caucasian faces.
01:05:16 -- And so the idea is that through experience
01:05:18 -- you then learn a finer representation
01:05:20 -- for this new ethnicity, because you engage with them,
01:05:24 -- you have to discriminate, and so something similar,
01:05:27 -- I would expect, is going on in ASL.
01:05:28 -- But again, it might take a long time because you
01:05:32 -- already have this representation
01:05:33 -- for these faces that you then have
01:05:35 -- to refine piece by piece, so to speak.
01:05:45 -- Thank you.
01:05:49 -- My question is similar.
01:05:52 -- I'm wondering if you took a look at hearing children
01:05:57 -- who use spoken language and deaf children who use a sign language
01:06:01 -- to compare the two groups.
01:06:03 -- If we were to do that with brain recognition of faces,
01:06:07 -- maybe deaf children would have a different pattern
01:06:09 -- than what you've shown here for hearing children,
01:06:11 -- because of the fact that they engage.
01:06:13 -- Is that what you were just saying?
01:06:15 -- Max Rizenhoover: Right, right, so
01:06:15 -- their representation might be different,
01:06:19 -- emphasizing these parts that are important to ASL, all
01:06:24 -- of which the non-signing children
01:06:27 -- would just not learn.
01:06:29 -- I think it's also interesting to think about the
01:06:32 -- word representation, actually, because one reason it might be
01:06:37 -- on the left is, and
01:06:39 -- so far we've just had hearing subjects here,
01:06:42 -- it might be on the left because you have to hook it up to
01:06:44 -- phonology, but in ASL it might be different, right.
01:06:48 -- That you might want to hook
01:06:50 -- up the [inaudible] representation to
01:06:52 -- more visual, like more shape-based input, right.
01:06:56 -- So then might this word representation be somewhere else
01:07:00 -- that's better suited to hook it up with these gestures
01:07:03 -- in ASL, and so I think there
01:07:05 -- might very well be differences in these groups
01:07:09 -- in terms of how and where these representations are learned.
01:07:12 -- But I mean, that's exactly what we want to figure out, right?
01:07:15 -- I mean, since you have the same meaning, you've got
01:07:19 -- to link it up in different ways, but then how you do it
01:07:23 -- in hearing versus non-hearing individuals
01:07:26 -- might actually differ,
01:07:28 -- or lead to different results in terms
01:07:31 -- of which brain areas come to be specialized.
01:07:37 -- Very interesting research.
01:07:39 -- Thank you.
01:07:43 -- Max Rizenhoover: Thank you.
01:07:45 -- This might be a silly question.
01:07:48 -- If you were blind,
01:07:49 -- do you suppose it would be the same results that you show here?
01:07:54 -- If you were born blind.
01:07:57 -- Max Rizenhoover: Alright, so.
01:07:58 -- If you were born blind.
01:07:59 -- You've never seen before.
01:08:01 -- Max Rizenhoover: So there's some research in monkeys
01:08:06 -- that has shown that
01:08:08 -- this area here is selective for faces
01:08:11 -- somewhat already, right after birth.
01:08:14 -- So even without having seen faces,
01:08:16 -- or having seen very little,
01:08:18 -- there seems to be some preexisting selectivity
01:08:21 -- in the fusiform face area for faces.
01:08:25 -- But, actually, just this morning I heard a talk
01:08:28 -- where people have tried to tell blind individuals, okay,
01:08:35 -- there's a face, and see whether they get activation here
01:08:37 -- in this area.
01:08:38 -- And there is some very tentative evidence
01:08:40 -- that they actually activate this area without visual input,
01:08:43 -- just by being told there's a face, but it's very, very tentative.
01:08:46 -- And so it's an interesting question:
01:08:48 -- how much is specified genetically, and how much,
01:08:52 -- how flexibly, is it driven through experience?
01:08:55 -- And for the visual word form area,
01:09:00 -- that's what makes it so interesting, because we know reading
01:09:04 -- is something very new.
01:09:05 -- So for faces you can say, well, through evolution, right,
01:09:08 -- you might have selectivity for faces, but reading
01:09:11 -- is very different, right.
01:09:11 -- Reading has only been around for a few thousand years,
01:09:15 -- so likely there's no genetic predisposition for reading,
01:09:18 -- and so the question then is, well,
01:09:23 -- why does it always end up on the left?
01:09:25 -- And that might be, in normal readers,
01:09:27 -- because we know there's input from phonology
01:09:30 -- and so on and so forth.
01:09:31 -- But then what happens in individuals
01:09:34 -- that don't need this phonology?
01:09:35 -- That learn it, like signers, for instance, right.
01:09:38 -- Then do you learn it somewhere else?
01:09:40 -- Again, because genetically there's no reason why this location
01:09:43 -- should then be favored,
01:09:44 -- right, and so this is what will be very interesting to find out.
01:09:48 -- Do you have more variability in terms
01:09:51 -- of where your dictionary is in signers
01:09:53 -- versus a hearing individual? So.
01:09:58 -- Related to that,
01:10:04 -- [ laughter ]
01:10:05 -- word recognition.
01:10:11 -- Looking at a word and then signing.
01:10:16 -- When you read a word but you don't say it, right,
01:10:26 -- you understand the word, and then
01:10:30 -- when you sign something, you understand it.
01:10:34 -- They're both words: one's a word, one's a sign.
01:10:39 -- Does that activate the same area of the brain?
01:10:44 -- And if that's true,
01:10:49 -- if it does activate the same area of the brain,
01:10:51 -- looking at a word and then signing it,
01:10:54 -- then what's the case for dyslexia?
01:10:56 -- Meaning, is it a problem understanding
01:11:01 -- that a sign is different?
01:11:05 -- Max Rizenhoover: So, so I think the individual,
01:11:07 -- I mean, in terms of, like, the sign for "serious"
01:11:12 -- and "miss": it's the same sign,
01:11:15 -- it just has one different movement.
01:11:16 -- Just one different thing about it.
01:11:20 -- So how would you, how would they recognize that?
01:11:23 -- Max Rizenhoover: Right,
01:11:23 -- I think it's a great question, because
01:11:34 -- with written words you have certain similarities
01:11:37 -- that make them very similar when you write them,
01:11:40 -- like "farm" and "form", right.
01:11:41 -- The A and the O are very similar, but that's very different
01:11:45 -- for ASL, right.
01:11:46 -- So "farm" and "form", I would think,
01:11:49 -- aren't necessarily very similar in ASL,
01:11:51 -- right, but then you have other words,
01:11:53 -- like the two you just said, that might be similar in ASL
01:11:57 -- but are very different in terms of orthography.
01:11:59 -- And so it's nice because now you can do both, right.
01:12:02 -- So you can present the two gestures and the two words,
01:12:04 -- and you would expect, for areas that are orthographic,
01:12:08 -- a low response if the words are similar
01:12:11 -- in orthography, right.
01:12:12 -- Whereas for areas that are selective for ASL,
01:12:15 -- you would expect a low response if the two are very,
01:12:19 -- very similar gestures, right.
01:12:20 -- So you can have a dissociation, right:
01:12:23 -- orthographic similarity versus ASL similarity,
01:12:26 -- and then you have semantic similarity.
01:12:28 -- You can play off these different factors and see
01:12:30 -- which areas encode meaning,
01:12:33 -- which areas encode orthographic similarity,
01:12:35 -- and which areas encode the gestures of ASL,
01:12:38 -- and that's something that would be great to look at.
01:12:40 -- And again, it's the same technique, now showing two
01:12:43 -- gestures that can be very similar
01:12:45 -- and have different meanings, right, or two gestures that
01:12:48 -- look very different but can be
01:12:52 -- similar in meaning, or can be similar
01:12:55 -- if you actually wrote them in normal Roman letters.
01:12:58 -- And then look at these selectivities to see, okay,
01:13:03 -- which areas care about ASL, which areas care
01:13:05 -- about orthography, which areas care about meaning.
01:13:07 -- So I mean, that's an experiment that we would like to do.
01:13:15 -- So if it's a written word or a signed word,
01:13:21 -- right now we don't know if it's the same area
01:13:24 -- of the brain or not.
01:13:26 -- True?
01:13:27 -- Max Rizenhoover: I don't know.
01:13:29 -- Good enough, thank you.
01:13:36 -- Thank you very much.
01:13:38 -- I really appreciate your coming and enjoyed your presentation.
01:13:41 -- Thank you all for coming.
01:13:43 -- Please fill out your evaluations.
01:13:45 -- Max Rizenhoover: Oh there's evaluations.
01:13:51 -- [ laughter ]