00:00:14 -- Thank you for coming to our last VL2 presentation for this year.
00:00:18 -- We're very pleased and excited to have Dr. Jeff Pelz from RIT here with us today.
00:00:25 -- He's in the Department of Imaging Sciences, and he's been doing some interesting work at RIT.
00:00:30 -- Because you know some students there have interpreters in the classroom.
00:00:34 -- And so how do they balance their attention to the instructor, the interpreter,
00:00:39 -- the PowerPoint slides, what's happening.
00:00:41 -- So we're going to hear about that today.
00:00:43 -- He got his Ph.D. from the University of Rochester in brain
00:00:47 -- and cognitive science, also in the same city.
00:00:49 -- So we're very pleased to have Dr. Jeff Pelz today as our last speaker for this year.
00:00:55 -- So -- thank you.
00:00:57 -- Thank you very much.
00:01:01 -- And thank you for inviting me.
00:01:04 -- I hadn't said this to Diane, I've been on this campus once before but it was in 1978.
00:01:09 -- So this is the 30th anniversary.
00:01:12 -- The place looks a little bit different than when I was here last time.
00:01:16 -- Came here as an undergraduate.
00:01:18 -- I was actually an undergraduate at RIT when they -- it was a fairly new campus at that point,
00:01:24 -- and NTID had just been built there around that time.
00:01:28 -- And so I had just learned a little, tiny bit of sign language, had met some students there,
00:01:34 -- and we actually came here, there were two NTID students and an electrical engineering student
00:01:40 -- and I came here because we decided we were going to invent this little box
00:01:44 -- that would let people use the touch-tone telephone.
00:01:48 -- We were going to build a little box that would have some LEDs on it.
00:01:51 -- It was going to be a little micro telecommunication device.
00:01:55 -- And it turned into a trip to Washington, D.C. and we never made a box.
00:01:59 -- But I still make gadgets, and so that's part of what I'll talk about here.
00:02:03 -- So I'm going to describe some of the work that I've been doing in collaboration
00:02:08 -- with Mark Marschark and Carol Convertino at NTID.
00:02:11 -- And one of the things I'm going to ask you to do while I'm talking about this is to think
00:02:18 -- about the kind of questions that you might want to be able to ask
00:02:23 -- if you had access to an eye tracker.
00:02:26 -- Because part of what I'll do -- after I was talking
00:02:28 -- with Diane a little bit today I just added some slides because the eye tracker
00:02:32 -- that we're using is a portable eye tracker, and it would be quite easy for somebody here
00:02:38 -- to learn how to use this eye tracker and to be able to borrow the piece,
00:02:43 -- the front end of this eye tracker, collect data here, and ship it back and forth to RIT.
00:02:49 -- So I'll describe a little bit about that.
00:02:53 -- So fundamentally, the question we're interested in is the distribution of visual attention.
00:02:59 -- And so we can ask two basic questions.
00:03:02 -- One, how is limited visual attention distributed across the visual field.
00:03:07 -- And so the important piece here is limited visual attention.
00:03:12 -- It feels as though we see the whole world all at the same time.
00:03:16 -- It feels as though we can pay attention to a lot of things all at the same time.
00:03:20 -- In fact, anybody who spends any time thinking about the brain and thinking
00:03:24 -- about attention understands some of those limitations.
00:03:26 -- I've got a couple of -- couple of simple demonstrations
00:03:31 -- that will talk about some of those limitations.
00:03:35 -- And specifically, the question we've been interested in are what happens in a classroom.
00:03:42 -- It used to be that for a hearing student in a classroom, we were concerned about that student
00:03:48 -- who was sitting down, taking notes and listening to the instructor at the same time,
00:03:52 -- maybe worrying about what was happening at the chalkboard.
00:03:55 -- Well, today at RIT we're worried about the instructor who is standing in the front
00:04:01 -- of the classroom, who often has a sign language interpreter standing next to him
00:04:05 -- or her, who's often talking about what's happening
00:04:10 -- in the PowerPoint presentation going on at the same time.
00:04:13 -- And I've got some hearing students in the classroom and some deaf and hard
00:04:17 -- of hearing students in the classroom at the same time.
00:04:19 -- And I now have multiple streams of information all going on at the same time,
00:04:25 -- parallel streams of information, and it's impossible
00:04:29 -- to capture all those streams at the same time.
00:04:33 -- And so we're going to start asking questions about how it is that people select those
00:04:38 -- and we're going to be talking about eye movements and gaze, and how we can use that,
00:04:43 -- the measurement of gaze, as a tool to try
00:04:46 -- to understand how it is people are -- are capturing that information.
00:04:52 -- So I'll describe briefly three experiments,
00:04:55 -- and while doing that I'll describe the instrumentation we're using
00:04:58 -- and some of the questions.
00:05:00 -- So in the first question we just looked at a basic classroom with an instructor,
00:05:07 -- sign language interpreter, and a PowerPoint display.
00:05:10 -- And in the first one we just looked at the way people deployed their gaze.
00:05:16 -- In other words, where do they look and when do they look there,
00:05:20 -- and how do they move their gaze about the classroom.
00:05:23 -- And in doing this in Experiment 1 we also compared that --
00:05:29 -- we compared those results across two conditions.
00:05:32 -- One, a live condition where there was a live instructor, an interpreter, and a display.
00:05:37 -- And we compared that to what we called the Memorex condition,
00:05:40 -- people old enough to remember those commercials, and that's where we video taped an instructor,
00:05:45 -- an interpreter, and the display and then played this back.
00:05:48 -- Because while we tracked two people at the same time,
00:05:52 -- in many cases we ran 30 observers.
00:05:56 -- We couldn't track all those people at the same time.
00:05:59 -- And if we could convince ourselves that it was okay
00:06:03 -- to use recorded stimuli then we wouldn't have to try to have live instructors
00:06:10 -- and interpreters repeat exactly the same thing each time.
00:06:15 -- We also looked at how lectures presented either with sign language interpretation
00:06:24 -- or simultaneous communication affected gaze performance.
00:06:29 -- And finally we looked at how the lecture speed, whether the instructor was talking very quickly
00:06:39 -- or more slowly, and therefore the interpreter was going quickly or slowly,
00:06:44 -- and the position of the display affected performance.
00:06:49 -- There -- there's another variable we didn't consider, and that was the distance
00:06:54 -- between the instructor and the interpreter.
00:06:58 -- As an instructor at RIT one of the things I talk about with the interpreter at the beginning
00:07:03 -- of class is how we'll set this up.
00:07:06 -- Sometimes the interpreter would -- will sit on the other side of the display to make it easier
00:07:14 -- for the student, presumably, to be able to go back and forth
00:07:17 -- between the display and the interpreter.
00:07:19 -- Sometimes I tend to be -- when I'm writing on a white board I tend to be the kind
00:07:24 -- of instructor who's moving back and forth in front of the room quite a bit,
00:07:28 -- and I actually prefer, usually, to walk around with the interpreter.
00:07:33 -- But that's a variable that we didn't consider.
00:07:35 -- We always kept the interpreter and the instructor together.
00:07:40 -- So some background.
00:07:50 -- So the first thing to say is that, well, you can read this.
00:07:54 -- That isn't really the point.
00:07:56 -- The "O" here is in red.
00:07:59 -- What I'd like you to do, stare at the "O" here and try not to move your eyes.
00:08:21 -- Now go ahead and move your eyes.
00:08:24 -- Look around.
00:08:26 -- And hopefully what you saw was that when you fixed your gaze at one point everything
00:08:37 -- within a relatively small region looked okay.
00:08:40 -- But as long as you didn't move your eyes outside
00:08:43 -- of that region, you weren't aware of the differences outside.
00:08:47 -- And now even though you know all the rest of the words are jumbled, if you go back and look
00:08:53 -- at that one point, okay, you won't be able to read them.
00:08:58 -- And if you try to read outside of that, okay,
00:09:02 -- you'll see that you can read a radius of about a word.
00:09:05 -- A word above, below, left, and right.
00:09:08 -- Now we can actually set up an experiment where as you move your eyes
00:09:12 -- in realtime we can change the letters around it.
00:09:16 -- And it turns out that for people who read languages from left to right,
00:09:22 -- there's an asymmetrical window around the point that you're looking at,
00:09:26 -- what we describe as the fixation point, okay?
00:09:29 -- And so it's about 8 letters to the left and about 10 letters to the right, okay?
00:09:34 -- And so as you scan from left to right we can jumble the letters outside of that window
00:09:40 -- and it doesn't slow down your reading at all.
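As a toy sketch of that moving-window idea, the snippet below jumbles letters outside an asymmetric window of about 8 characters left and 10 right of a fixation index. The function name and parameters are illustrative only, not taken from the actual gaze-contingent reading experiments he describes:

```python
import random

def jumble_outside_window(text, fixation, left=8, right=10, seed=0):
    """Scramble every letter outside the perceptual span.

    The span is asymmetric for left-to-right readers: roughly 8
    characters left of fixation and 10 to the right stay intact;
    every other letter is shuffled. (Hypothetical helper, for
    illustration of the idea only.)
    """
    rng = random.Random(seed)
    chars = list(text)
    lo, hi = max(0, fixation - left), min(len(chars), fixation + right + 1)
    # Indices of letters outside the window (spaces stay put).
    outside = [i for i in range(len(chars))
               if not (lo <= i < hi) and chars[i].isalpha()]
    letters = [chars[i] for i in outside]
    rng.shuffle(letters)
    for i, c in zip(outside, letters):
        chars[i] = c
    return "".join(chars)

line = "the quick brown fox jumps over the lazy dog"
print(jumble_outside_window(line, fixation=20))
```

In the real experiments the jumbling is updated in real time as the eye moves; the key property, preserved here, is that text inside the window is untouched while everything outside is scrambled.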
00:09:42 -- Okay, so while it certainly feels, when you're looking at a screen, even a large screen
00:09:47 -- like this that you're seeing a lot of it at the same time.
00:09:50 -- Okay, in fact, there's a relatively small window, okay,
00:09:54 -- that you're grabbing at the same time.
00:09:56 -- I'm going to do another one here that I have to explain the rules
00:10:01 -- for more carefully before you start.
00:10:03 -- I'm going to ask you to stare at the "X" in the circle here, and not to move your eyes.
00:10:08 -- What I'm going to do is bring a photograph up on the right-hand side of the screen
00:10:12 -- and slowly bring the photograph closer and closer to the "X".
00:10:16 -- What I want you to do is try hard not to cheat, keep on looking at the "X", okay, and try to --
00:10:25 -- try to ask yourself questions about this photograph.
00:10:30 -- See if you can identify the photograph,
00:10:32 -- and even after you think you've identified the photograph try to keep staring at the "X"
00:10:37 -- until the photograph is all the way to the "X".
00:10:41 -- Try not to cheat.
00:10:42 -- I'll be watching you.
00:10:43 -- Stare at the "X".
00:10:52 -- [ Background noise ]
00:11:10 -- So hopefully what people saw was that by about halfway, okay --
00:11:19 -- well I should -- let me back up a little bit.
00:11:21 -- Hopefully what you saw was that as soon as the image was on the screen it was easy
00:11:25 -- to tell that it was a human being.
00:11:27 -- You could probably tell it was a female.
00:11:29 -- You could probably tell something about hair color.
00:11:32 -- But probably until it was about this close, you probably couldn't --
00:11:38 -- maybe even closer, depending on where you're sitting.
00:11:43 -- Maybe here, somewhere around here, you probably couldn't tell me what was wrong.
00:11:48 -- I've had people -- sorry, I can't go backwards a single image here.
00:11:54 -- Around here I've had people say things like I think she's wearing too much makeup, right?
00:12:00 -- So there's something not quite right, but this is one of my favorite demonstrations,
00:12:07 -- because it certainly feels when it was way out here, you knew it was a human being,
00:12:13 -- but there wasn't nearly enough information to tell that there was an eye
00:12:18 -- where a forehead ought to be and lips where an eyeball ought to be, okay?
00:12:22 -- So your visual system is incredibly good at seeing a huge amount of the world --
00:12:28 -- that's the next thing I'll talk about is how much of the world you see at the same time,
00:12:32 -- okay, but what you see when we look way off in the periphery is very, very limited.
00:12:44 -- So one of the displays we use in my lab is a large screen display.
00:12:49 -- So this is a 50-inch plasma display.
00:12:52 -- And it's important for some of the work that we do to have a large display.
00:12:56 -- And so when you sit somebody close to this display that gives us a field of view that's
00:13:02 -- about 20 degrees high by 30 degrees wide,
00:13:05 -- and that lets me show somebody a 600-square degree field.
00:13:09 -- Okay? That's a big field of view.
00:13:12 -- But that's nothing compared to somebody sitting where you are now,
00:13:18 -- even if you just look at me, okay.
00:13:21 -- Your field of view is over 180 degrees wide.
00:13:24 -- You can actually see a little bit behind you.
00:13:26 -- If there's something moving a little bit behind you you'll see it, okay?
00:13:30 -- And your field of view vertically is over 100 degrees, okay?
00:13:35 -- It's about 110 degrees.
00:13:37 -- So your field of view is about 20,000 square degrees when you're just looking straight ahead,
00:13:44 -- that's without moving your eyes around.
00:13:47 -- So you can see almost a hemisphere all at the same time.
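The arithmetic behind those figures is just the product of the horizontal and vertical visual angles (a rough flat-angle approximation, not a true solid-angle calculation):

```python
# 50-inch display viewed up close: about 20 deg high x 30 deg wide.
display_field = 20 * 30    # 600 square degrees

# Static human field of view: a bit over 180 deg wide, about 110 deg high.
human_field = 180 * 110    # 19,800 -- the "about 20,000" quoted in the talk

print(display_field, human_field, human_field // display_field)
```

So even a large plasma display covers only around 3% of what an observer in the room can see without moving their eyes.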
00:13:53 -- But, you already saw in that demonstration how many
00:13:56 -- of that hemisphere can you see with enough clarity to read?
00:14:02 -- About this much.
00:14:03 -- How much can you see with enough clarity to tell whether it's a really --
00:14:06 -- whether it's a normal human being?
00:14:09 -- Maybe a little bit more than that.
00:14:12 -- So there are a couple of questions to ask.
00:14:15 -- One is why are we built this way, and the other is what effect does this have
00:14:20 -- on our visual system.
00:14:22 -- So this is all to try to answer the question why do we move our eyes.
00:14:29 -- And so the world contains a huge amount of information.
00:14:33 -- Even if we hadn't developed written language, there's a huge amount of information available
00:14:39 -- in every square degree. And since I'll be talking about degrees, I should say
00:14:42 -- as a reference: this scales nicely with everybody, because your fingernail changes size
00:14:48 -- at about the same rate as the length of your arm.
00:14:51 -- The fingernail of your index finger held out at arm's length is about one square degree.
00:14:56 -- That works quite well for most people.
00:14:58 -- So that's about one square degree, okay?
00:15:01 -- And -- so there are about 20,000 of those, okay,
00:15:05 -- is about how much you can see all at the same time, okay?
00:15:08 -- The amount that you see at the highest clarity is about a fingernail's width.
00:15:15 -- Okay? There -- we have very limited neural resources.
00:15:20 -- If we talk about how much brain matter we can devote to vision, okay,
00:15:26 -- depending on how you measure it, we can say that somewhere between a third and two-thirds
00:15:32 -- of your brain is devoted to vision.
00:15:33 -- Now a lot of your brain is multipurpose, right?
00:15:37 -- It serves more than one purpose.
00:15:39 -- But it's easy to say that roughly half of your brain is dealing with visual input.
00:15:45 -- Yet it's really focused in a very, very small region.
00:15:48 -- If we could have enough acuity to be able to read
00:15:51 -- across that entire visual field you'd need one cubic meter of brain mass.
00:15:57 -- Okay? One -- sorry, one cubic meter of brain mass would be a problem,
00:16:02 -- because we couldn't provide enough energy to keep it going.
00:16:06 -- And if you fell over you couldn't stand back up again, because it would be too heavy.
00:16:12 -- We also can talk about the -- the neural problems.
00:16:17 -- If you buy a $300 computer nowadays, a $300 computer is going to be --
00:16:22 -- we measure the speed of that in gigahertz, the number of billion cycles per second.
00:16:27 -- If you look at a single neuron in your brain the fastest spike train that we can maintain is
00:16:36 -- about 1 kilohertz, 1,000 spikes per second.
00:16:40 -- And of course we have a few more processors than the typical $300 computer,
00:16:45 -- but still we have a huge limitation on the rate at which we can pass signals around.
00:16:50 -- So one of the fundamental limits of the visual system is just how much neural tissue we
00:16:55 -- can devote.
00:16:57 -- And so we have a fundamental need to reach a compromise between the field
00:17:02 -- of view and how much acuity we can have.
00:17:05 -- And if you look at animals there's a range of how this compromise has been --
00:17:11 -- what compromise has been reached.
00:17:14 -- Look at something like a field mouse.
00:17:15 -- A field mouse has its eyes horizontally opposed on its head so it basically -- it can see --
00:17:21 -- each eye sees almost a hemisphere.
00:17:23 -- You can't sneak up on a field mouse, okay?
00:17:26 -- It doesn't have this huge blind spot behind it that we have.
00:17:29 -- But it has no high acuity region.
00:17:31 -- It doesn't have the equivalent of a fovea.
00:17:33 -- It couldn't read the kind of text that human beings can read.
00:17:36 -- On the other hand, a hawk, whose job it is to sneak up on a field mouse,
00:17:41 -- has the two eyes all the way forward on its head, has very, very high acuity, okay,
00:17:47 -- and can see very high acuity from a great distance.
00:17:51 -- Human beings have evolved a visual system that really is a compromise
00:17:55 -- between the field of view and acuity.
00:18:00 -- So let me spend a minute talking about the way the eye is built,
00:18:09 -- because this compromise is -- really starts in the eye itself.
00:18:14 -- So if we just take a picture of the eye here and look in cross-section,
00:18:19 -- the cornea is the optical surface in the front of the eye.
00:18:23 -- If we look in the back of the eye the retina is back here.
00:18:26 -- The retina is made up of rods and cones.
00:18:29 -- When people usually talk about rods and cones they typically describe them in terms
00:18:34 -- of the rods are ones we use for night vision and just give us monochrome or black
00:18:39 -- and white vision for high sensitivity, and the cones are the ones
00:18:42 -- that give us color vision during the day.
00:18:44 -- For this discussion it's really more important to talk about not the different kinds
00:18:50 -- of sensors, but the density, how closely packed together they are, okay?
00:18:55 -- So on this graph -- I'm going to switch slides here --
00:18:58 -- on this graph the horizontal axis is the position on the retina.
00:19:03 -- So zero degrees is the center of the retina.
00:19:08 -- So if I say look straight ahead.
00:19:09 -- If I ask you to -- when I point at the zero here and you look at the zero that's straight ahead.
00:19:14 -- And then to the left and right here represents moving to either side along the retina.
00:19:20 -- Okay? The vertical axis here is the density of the number of rods or cones
00:19:25 -- in one square millimeter in the retina.
00:19:27 -- What I'm going to do now is go to the next slide and just get rid of the blue graph here.
00:19:33 -- The blue graph here represents the rods,
00:19:36 -- which really are mostly important only for night vision.
00:19:38 -- And since what we're talking about is vision during the day it's really just this red curve
00:19:44 -- that we're interested in.
00:19:45 -- These are the cones.
00:19:46 -- The density of cones peaks at the very center over 150,000 cones in one square millimeter.
00:19:54 -- So you take one square millimeter and the very center of your retina
00:19:57 -- and there are 150,000 cones right there, okay?
00:20:02 -- But look how drastically it falls off within just a couple of degrees from there.
00:20:06 -- It's dropped off by a factor of 10, okay?
00:20:09 -- Then it levels off to something more like 5,000 per square millimeter, okay?
00:20:16 -- And it's this density difference
00:20:19 -- that is really the basis -- this is
00:20:22 -- where that compromise between acuity and field of view starts.
00:20:27 -- So I have to say it starts here because then if you look at the wiring that goes from the back
00:20:33 -- of the retina, this -- it doesn't just continue from here, it actually is amplified from here.
00:20:40 -- The number of -- I'm going to call them wires, but the neurons that connect from the --
00:20:45 -- from the retina in the center here give you --
00:20:48 -- there's a much greater connection density from the very center than there is in the periphery.
00:20:54 -- So let me just talk about the fovea, because this is a term that I'll keep on using.
00:21:05 -- If we go and look in the -- we go and look at the back of the eye.
00:21:12 -- So this is the retina.
00:21:14 -- This is the back of the eye, the very center here is where that peak density of cones are.
00:21:20 -- There's actually a pit.
00:21:22 -- It's literally a little pit here, and that's called the fovea.
00:21:26 -- Fovea is actually the Latin word for pit.
00:21:27 -- It's a physical depression there.
00:21:29 -- And it's at that point where there's that very high concentration of cones.
00:21:34 -- So when you look at one point when you're reading, what you're doing is moving the position
00:21:39 -- of your eye so that the part of the world you're trying to get information from falls
00:21:43 -- at that position on your retina, and that's where you get the highest resolution.
00:21:50 -- So saying that we've got this huge concentration all
00:21:53 -- at one point is half of the solution to our problem.
00:21:57 -- Because I can't have that density across the entire retina,
00:22:02 -- I can only have it at one point, so what I need is some way to move that around.
00:22:07 -- What I need to be able to do is sample the environment.
00:22:09 -- I need to be able to sample the world with that high density region, okay?
00:22:14 -- And that's what the eye movement system does.
00:22:17 -- And so each eye has six big muscles attached to it.
00:22:20 -- The muscles are attached in three pairs.
00:22:23 -- They are agonist, antagonist pairs, just like the muscles in your arm.
00:22:27 -- And so if I say this is the eye, I've got one pair of muscles attached to the left and right
00:22:33 -- of the eye that allow the eye to rotate left and right.
00:22:36 -- A second pair that's attached top and bottom that lets the eye rotate up and down.
00:22:41 -- And a third pair that's attached to the top and bottom from the side
00:22:45 -- that lets the eye make torsional eye movements.
00:22:47 -- It rotates almost about the optical axis of the eye, okay?
00:22:51 -- The first two everybody's conscious of because when you look at somebody else's eye,
00:22:54 -- you can see them, you can see their eyes move up, down, left, and right.
00:22:58 -- These torsional eye movements are less obvious because when you look at somebody's eye,
00:23:02 -- unless you're close enough to see the little striations
00:23:05 -- in their iris you typically can't see the eye rotate.
00:23:09 -- But these rotational eye movements are actually very important,
00:23:13 -- because every time you tip your head a little bit your eyes counter rotate
00:23:17 -- so that the world is stabilized on your retina.
00:23:21 -- Okay? They serve one other purpose and that is that --
00:23:25 -- one kind of eye movement that everybody's conscious of because you played with them
00:23:28 -- when you were a kid are called vergence eye movements.
00:23:32 -- That's adjusting the angle between your eyes so when you look
00:23:34 -- at something up close you go cross-eyed.
00:23:36 -- If you're looking at something that has -- if you look at a vertical rod up close,
00:23:43 -- if I move it closer it's obvious I should get more cross-eyed.
00:23:47 -- If I move it farther away I get less cross-eyed.
00:23:50 -- But here's a problem.
00:23:50 -- What should I do if I'm looking at it, and it tips forward and down.
00:23:54 -- Should I get more cross-eyed or less cross-eyed?
00:23:58 -- The answer is both.
00:24:00 -- Because to keep the top of this correctly aligned I should get less cross-eyed.
00:24:06 -- To keep the bottom correctly aligned I should get more cross-eyed.
00:24:10 -- And so what your eye does is when you're looking at a vertical rod like this
00:24:14 -- that tips your eyes rotate in opposite directions.
00:24:18 -- So if you're interested in this, get a friend, hold something close, look at their eyes,
00:24:24 -- tip it back and forth, and you'll see their eyes do this.
00:24:31 -- Let's see.
00:24:31 -- An interesting number here.
00:24:33 -- You're typically not conscious of these eye movements, but you make about 150,000
00:24:39 -- of these rapid eye movements every day, okay?
00:24:42 -- You move your eyes several times every second when you're doing everything from reading
00:24:46 -- to having a conversation to watching an interpreter, to watching television,
00:24:51 -- to do almost anything, even when you're day dreaming, it turns out,
00:24:56 -- you're moving your eyes several times every second.
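That 150,000-per-day figure is consistent with the rates he quotes. A quick back-of-envelope check (the "3 per second" and "16 waking hours" values are my assumptions for the estimate, not numbers from the talk):

```python
saccades_per_second = 3   # "several times every second" (assumed value)
waking_hours = 16         # assumed waking day
per_day = saccades_per_second * waking_hours * 3600
print(per_day)            # same order of magnitude as the quoted ~150,000
```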
00:24:59 -- I said those out of order.
00:25:10 -- This is a clip of somebody reading.
00:25:19 -- Now this is shown at 1/4 speed.
00:25:23 -- So yeah -- I see somebody asking, doesn't your eye scan smoothly?
00:25:29 -- Your eye doesn't scan smoothly.
00:25:30 -- Your eye moves to a position, locks in place, moves to the next word, locks in place,
00:25:37 -- moves to the next word, locks in place.
00:25:39 -- Now I'm showing this at 1/4 speed, so at this speed it looks like the eye is locked
00:25:44 -- in position for half a second at a time.
00:25:46 -- I'm going to go to the next one now and let this play.
00:25:49 -- In this one it's going to go -- the video will go back and forth between the eye
00:25:53 -- at normal speed and then a video that shows a cursor.
00:25:57 -- The cursor on the screen will show where the person is reading.
00:26:01 -- So you'll see both the eye moving and you'll see a cursor
00:26:04 -- on the screen that shows what they're reading.
00:26:32 -- So those rapid eye movements are called saccades.
00:26:37 -- And the stationary periods in between are fixations.
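A common way to separate saccades from fixations in a gaze record is a simple velocity threshold (the classic I-VT idea). This is a generic sketch of that technique, not the analysis pipeline used in the experiments described here; the threshold value and sample data are illustrative:

```python
def classify_ivt(xs, ys, ts, threshold=100.0):
    """Label each inter-sample interval as 'saccade' or 'fixation'.

    Velocity-threshold identification (I-VT): samples whose
    point-to-point velocity (deg/s) exceeds the threshold belong to
    saccades; the rest belong to fixations. Positions are in degrees,
    timestamps in seconds. A real pipeline would also smooth the
    signal and merge fixations that are too short.
    """
    labels = []
    for i in range(1, len(xs)):
        dt = ts[i] - ts[i - 1]
        v = ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5 / dt
        labels.append("saccade" if v > threshold else "fixation")
    return labels

# Toy record: a steady fixation, one fast 10-degree jump, another fixation.
xs = [0.0, 0.1, 0.05, 10.0, 10.1, 10.05]
ys = [0.0] * 6
ts = [i * 0.01 for i in range(6)]  # 100 Hz sampling
print(classify_ivt(xs, ys, ts))
```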
00:26:44 -- Until about the 1950s, people treated
00:26:54 -- eye movements as involuntary reflexes.
00:26:59 -- Since the '50s we've really understood them to be much more tightly tied
00:27:05 -- to cognition: volitional,
00:27:09 -- voluntary eye movements.
00:27:12 -- One example is in reading.
00:27:15 -- As people are reading, people will essentially make one fixation, one of these --
00:27:20 -- their eye will become stationary on almost every single word.
00:27:25 -- As you're reading text you will occasionally jump over short words like "a," "of," "the."
00:27:30 -- But basically you look at almost every single word.
00:27:34 -- The timing, how long you spend fixating on each word is dependent word by word,
00:27:41 -- for example, on the frequency of the word.
00:27:44 -- So not only will you read a physics textbook more slowly than you'll read the newspaper,
00:27:49 -- but as you're reading the newspaper if you come to a word that is slightly less frequent
00:27:56 -- than other words you'll spend instead of a quarter of a second there,
00:28:01 -- you'll spend an extra 50 milliseconds.
00:28:03 -- You'll spend a little bit longer on each individual word.
00:28:06 -- So we're constantly in realtime modulating the amount of time spent on individual regions.
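Those numbers suggest a toy model of per-word fixation time. The baseline (~250 ms) and the ~50 ms penalty for a less frequent word are the values quoted in the talk; the function and the crude word-length stand-in for frequency are my illustration, not a fitted model:

```python
def fixation_ms(is_low_frequency, base=250.0, penalty=50.0):
    """Toy per-word fixation duration: about a quarter second,
    plus roughly 50 ms extra for a lower-frequency word."""
    return base + (penalty if is_low_frequency else 0.0)

words = ["the", "cat", "sat", "on", "the", "sesquipedalian", "mat"]
low_freq = [len(w) > 10 for w in words]  # crude stand-in for word frequency
total = sum(fixation_ms(lf) for lf in low_freq)
print(total)  # total gaze time in ms for this toy sentence
```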
00:28:12 -- Here's another example that shows eye movements with saccades and fixations.
00:28:19 -- In this case, the person isn't reading, they're looking for something in a vending machine.
00:28:23 -- So again, you'll see saccades and fixations.
00:28:25 -- But now the person is just searching for something in a vending machine.
00:28:28 -- And it -- it feels when you're looking at this that it's sort of a hectic movement,
00:28:43 -- but of course when you're making these eye movements you're not even conscious
00:28:47 -- of doing this.
00:28:48 -- It just feels that you're doing a task but you're making these movements.
00:28:51 -- I saw somebody noticing -- the other thing that happens is the pupil also dilates
00:28:56 -- and constricts as you're doing this.
00:28:59 -- That happens partly because of changes in the brightness in the scene you're looking at.
00:29:04 -- But also it's clear that there's a correlation between the pupil diameter and interest
00:29:10 -- in the scene, an interest in what's happening in the scene.
00:29:14 -- The next kind of eye movement I'll show you is called smooth pursuit.
00:29:18 -- If there's something moving smoothly in the scene there's the ability for your eye
00:29:22 -- to track it very -- very accurately in the scene
00:29:26 -- so that the object that's moving will stay stuck, will be stationary, on your retina, okay?
00:29:34 -- In exactly the same way that a photographer trying to capture a sharp picture
00:29:39 -- of a racing car going by will track it with a camera.
00:29:42 -- The reason is to -- so that it's fixed in position on the retina.
00:29:54 -- So that's just a person walking by.
00:29:58 -- There are several other kinds of eye movements,
00:30:00 -- but we could spend much too long talking about all the different kinds.
00:30:04 -- So that's all I'll talk about here.
00:30:06 -- I'm going to mention one more thing about smooth pursuit, though.
00:30:09 -- And that is that if I asked you to use your eyes and just smoothly track around this oval
00:30:19 -- or just smoothly track along one of these lines, okay.
00:30:24 -- If I asked you to do that while I used my laser pointer you could do it
00:30:28 -- because you had a target to track.
00:30:30 -- But if I asked you to do it without a target and tracked your eyes,
00:30:35 -- this is what the tracks would look like, okay?
00:30:38 -- It turns out that for at least 99-and-a-half percent of the population the only way
00:30:44 -- to make a smooth eye movement is if you have a target to track.
00:30:48 -- There are some people who can make smooth eye movements without a target.
00:30:52 -- But at least in our experience, and we've poked around quite a bit, the only people we found
00:30:59 -- who can do it still can't make smooth eye movements along a line like that.
00:31:04 -- What they can do is sort of blur their vision,
00:31:06 -- look up towards a corner, and just move their eyes.
00:31:09 -- But they can't find a line and actually smooth track -- track smoothly across it.
00:31:15 -- Okay, how do we track eye movements?
00:31:18 -- So this is where I just added a few more slides so that you can start to think
00:31:24 -- about how you might be able to use this in an experiment.
00:31:29 -- This is a picture from the mock classroom.
00:31:33 -- Not sure -- is it easy to turn -- I think these lights are just making --
00:31:42 -- shining on the screen enough.
00:31:44 -- If it's possible to turn one more of those down it might help a little bit.
00:31:52 -- What we do -- oh yes, that helps a lot.
00:31:57 -- Thank you.
00:31:58 -- So each observer here wears a pair of these glasses.
00:32:02 -- These -- we built these at RIT.
00:32:04 -- Is it okay?
00:32:06 -- [ Inaudible audience comment ]
00:32:06 -- Is it a problem?
00:32:08 -- We can put it back up if you need to.
00:32:19 -- Thumbs up, okay?
00:32:20 -- All right.
00:32:23 -- These -- these glasses -- I'll talk in a little bit more detail in another slide.
00:32:27 -- But basically what we do is we have one video camera pointing at the eye,
00:32:31 -- another video camera pointing at the scene so we record what the observer is looking at.
00:32:36 -- We see the scene that they're looking at.
00:32:38 -- We also capture an image of their eye.
00:32:41 -- And after processing what we get -- so the --
00:32:44 -- the picture-in-picture down here shows the observer's eye.
00:32:48 -- So when I show you videos of what the person is doing you'll see their eye move down here.
00:32:54 -- And this cross hair shows where the person is looking in the scene.
00:32:57 -- So this is a video from the live classroom with the instructor, the interpreter,
00:33:02 -- and the Power Point display over here.
00:33:32 -- So let me point out a couple of things.
00:33:36 -- One is we always use this picture-in-picture and always include this image of the eye here.
00:33:43 -- Because for example every time the person blinks there's a little artifact.
00:33:49 -- The cross hair moves down a little bit.
00:33:51 -- If it weren't for this picture-in-picture here, we might be reporting that every once
00:33:57 -- in a while the student looks down from the interpreter's face to his hands, right?
00:34:02 -- That's not the case.
00:34:03 -- What's happening is every once in a while the student is blinking, okay?
00:34:07 -- So this is really critical any time people are working with eye movement records.
00:34:12 -- So far, no eye tracker manufacturers have built this into their systems, and they all should.
00:34:17 -- It's a critical part of this.
00:34:22 -- Let's see.
00:34:24 -- So just describe a little bit about the way the system works.
00:34:28 -- So this is just a close up of the headgear the person wears.
00:34:31 -- We took a pair of safety goggles and removed the polycarbonate shield at the front.
00:34:37 -- There's a small camera here.
00:34:39 -- This is just a little miniature video camera.
00:34:41 -- This points out at the scene.
00:34:43 -- So when you saw the whole scene move before that was
00:34:46 -- because the student was just moving her head a little bit.
00:34:49 -- And so when she moves her head you see the whole scene move.
00:34:52 -- There's a little infrared illuminator here, points up towards the eye to illuminate the eye.
00:34:58 -- And another miniature camera here pointing up towards the eye.
00:35:01 -- So that's where we capture the image of the eye.
00:35:05 -- And so this is very light-weight.
00:35:07 -- There is a little strap that attaches behind the person's head to hold it on.
00:35:15 -- There's a little device.
00:35:17 -- We call this the 007 box in honor of James Bond,
00:35:22 -- and all it does is it has a little electronic device here
00:35:26 -- that captures these two video images, the image of the eye and the image of the scene,
00:35:30 -- and combines them into a single video image and saves them onto a small camcorder.
00:35:37 -- So there's no eye tracking happening in this box.
00:35:39 -- All we do is we capture those two images and report them together.
00:35:43 -- So if there were an experiment that you're running here and you wanted to add eye tracking
00:35:49 -- to it, you wanted to see where people are looking while they're doing a task,
00:35:54 -- the only piece we'd have to have here for that is this piece,
00:35:59 -- because all you'd have to do is capture those images.
00:36:01 -- What we do then is we take that video, put it back into the system,
00:36:07 -- and pull the two images back apart.
00:36:10 -- Pull the eye image back, spread it back out again.
00:36:13 -- Take the scene image, spread it back out again.
00:36:15 -- And so the eye tracking happens after the fact.
00:36:19 -- There are several advantages to doing this that I won't go into now,
00:36:24 -- but we could talk about it afterwards.
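[Editor's note: the combine-then-split scheme he describes can be sketched in a few lines. NumPy arrays stand in for video frames, and the half-height stacking layout is an assumption for illustration, not the actual 007-box format.]

```python
import numpy as np

def mux(scene, eye):
    """Squeeze each camera's frame to half height and stack the two
    into one frame of the original size, so a single recorder can
    carry both images (this layout is an assumption)."""
    return np.vstack([scene[::2], eye[::2]])  # crude 2x vertical downsample

def demux(frame):
    """Pull the combined frame back apart and spread each half back
    out to full height, so eye tracking can run after the fact."""
    h = frame.shape[0] // 2
    return (np.repeat(frame[:h], 2, axis=0),
            np.repeat(frame[h:], 2, axis=0))
```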
00:36:27 -- Do you want to put the lights back up now or --
00:36:35 -- If -- sure.
00:36:41 -- If it helps.
00:36:43 -- An important issue is how you get data out of the eye tracker.
00:36:50 -- Because with this kind of eye tracker, which you mount on the person's head,
00:36:55 -- the output is a video tape that shows where the person is looking.
00:37:01 -- You don't get a data stream out that has a little record that says looking
00:37:07 -- at the interpreter, looking at the instructor, looking at the display.
00:37:10 -- You get a video tape.
00:37:11 -- And so we've actually written a program that takes in the video and allows you,
00:37:19 -- using the keyboard, to step through the video,
00:37:21 -- and the function keys give you two choices.
00:37:26 -- One, using the function keys here you can enter a code at any point in the tape,
00:37:33 -- and it will automatically grab the time code from the tape and record a value,
00:37:38 -- or you can use the mouse and just click somewhere within the image.
00:37:41 -- And what it does is it automatically records the position in the image and the time code.
00:37:46 -- And so the result of that for this experiment is we just assigned meanings
00:37:53 -- to the different keys here so that as the students were coding these tapes,
00:37:59 -- they just went through the tape
00:38:03 -- and identified whether the person was looking at the display, the interpreter, or the instructor.
00:38:08 -- And of course we had other keys assigned to blink, track loss,
00:38:12 -- other things to be able to do this.
00:38:15 -- That made it practical to be able to code these tapes in a way
00:38:19 -- that didn't take years to code a week's worth of data.
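[Editor's note: the coding program he describes, stepping through the tape and pressing a labeled key that records the time code, can be sketched like this. The key-to-region assignments and output layout are made up for illustration; the talk doesn't specify them.]

```python
import csv, io

# Hypothetical key assignments; the actual function-key meanings
# in the RIT coding program are not given in the talk.
KEYS = {'d': 'display', 'i': 'interpreter', 't': 'instructor',
        'b': 'blink', 'x': 'track loss'}

def code_tape(events, out):
    """events: (timecode, key) pairs, one per keypress while stepping
    through the tape. Writes (timecode, region) rows to `out`, a text
    file you can total up in a spreadsheet."""
    w = csv.writer(out)
    w.writerow(['timecode', 'region'])
    for tc, key in events:
        if key in KEYS:
            w.writerow([tc, KEYS[key]])

# Example: two keypresses coded against the tape's time codes.
buf = io.StringIO()
code_tape([('00:32:01;12', 'i'), ('00:32:05;03', 'd')], buf)
```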
00:38:25 -- So to get back to this experiment, what we did is we had the live condition,
00:38:30 -- and we did this with two instructors.
00:38:34 -- One person was giving a lecture on information technology.
00:38:39 -- She was describing the beginnings of the internet.
00:38:42 -- And another instructor was giving a lecture on granular physics.
00:38:47 -- And so we had those two lectures and we recorded them live.
00:38:55 -- What we actually did to try to have as much resolution
00:38:58 -- as possible is we had one video camera on the instructor,
00:39:02 -- another video camera on the interpreter.
00:39:04 -- We turned these cameras sideways so that we had the vertical extent on the instructor
00:39:10 -- and interpreter, and then had projectors turned sideways.
00:39:14 -- So we had as much resolution as we could, and then synchronized these for playback.
00:39:23 -- And so the recorded condition looked like this.
00:39:28 -- And so in many cases these things are crooked here, and that's crooked
00:39:33 -- because the observers were free to behave naturally.
00:39:37 -- And anybody who's taught a class knows once in a while students do this.
00:39:43 -- For anybody who knows Mark Marschark, when we first started talking
00:39:46 -- about doing this, Mark's first question was: can people act naturally when they use an eye tracker?
00:39:53 -- We said sure, and so he -- the first thing he did is he wanted to pretend to be the student.
00:39:58 -- So we put the eye tracker on him and he came and he went -- and tried to break the eye tracker.
00:40:03 -- And he couldn't break the eye tracker.
00:40:04 -- So he went ahead and wrote the proposal.
00:40:10 -- So what we ended up doing was coding these for the live and Memorex conditions.
00:40:16 -- The coding involved using this program that I showed you before.
00:40:20 -- The output of this is essentially a text file that we then just open up in Excel
00:40:28 -- that automatically gives us the amount of time spent in each area and is coded here
00:40:34 -- for which region the person is in.
00:40:37 -- And then from here we could get data and so here what I'm going to do is just show you some
00:40:42 -- of the results from this first experiment where we're looking at the live
00:40:47 -- and Memorex conditions, and I'll show you the results for two of the analyses we considered.
00:40:54 -- One is just the relative time people spent in each area
00:40:57 -- and the other is the transition probabilities.
00:40:59 -- So I'll describe more about transition probabilities.
00:41:02 -- But first let's talk about relative time in each area.
00:41:06 -- So this is for the hearing students.
00:41:10 -- And so in each case here all we did was split up and talk about the fractional time,
00:41:17 -- how much time people spent looking at the instructor, the interpreter, or the display.
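[Editor's note: the relative-time analysis reduces to totaling dwell time per region and dividing by the grand total. A minimal sketch, assuming the coded records have already been collapsed into (region, seconds) entries; the function name is ours.]

```python
from collections import Counter

def dwell_fractions(records):
    """records: (region, seconds) dwell entries from the coded tape.
    Returns each region's share of total viewing time."""
    totals = Counter()
    for region, secs in records:
        totals[region] += secs
    grand = sum(totals.values())
    return {r: t / grand for r, t in totals.items()}
```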
00:41:21 -- And so the hearing students here.
00:41:23 -- And this was -- yeah, this is for the group of hearing students.
00:41:28 -- Just looked at the amount of time they spent looking
00:41:30 -- at the instructor, the interpreter, and the display.
00:41:33 -- And in this case you can see that the hearing students spent the bulk
00:41:36 -- of their time looking to the display.
00:41:38 -- Some time looking back toward the instructor.
00:41:40 -- In this case, actually in this lecture, the instructor did one demonstration
00:41:43 -- where he actually was pouring sand into a cup and doing a demonstration.
00:41:47 -- But the bulk of the time they spent looking
00:41:49 -- at the display while just dividing their attention and listening to the instructor.
00:41:56 -- I have to describe two groups.
00:41:58 -- We divided the deaf and hard of hearing students into two groups.
00:42:01 -- They were the skilled signers.
00:42:03 -- These were the students who came to NTID having more than five years
00:42:07 -- of experience with sign language.
00:42:10 -- So they were the skilled signers.
00:42:12 -- We also had the groups that we -- that we ended up calling the newbies.
00:42:16 -- Those were the students who came to NTID with very little or no experience in sign language.
00:42:22 -- And so they were just learning sign language at that point.
00:42:25 -- But if you look at the skilled signers, the skilled signers spent much more
00:42:31 -- of their time looking at the interpreter, far less time looking at the display,
00:42:35 -- and only a very small amount of time looking at the instructor.
00:42:38 -- Okay? And so with this analysis what we're able to do is simply ask the basic question
00:42:44 -- of how they divided their time and where they spent their time looking.
00:42:49 -- The newbies essentially split their time up.
00:42:54 -- Now I'll talk more about this in a different study, but it's important to say
00:42:59 -- that what we found is there was much more variability within the group
00:43:03 -- for the newbies than there was in the other groups.
00:43:06 -- So the first interpretation of this was that the newbies were just looking all over the place,
00:43:12 -- that they kept on going back and forth.
00:43:14 -- In fact, this is the average between the observers and so part of this was that some
00:43:19 -- of the newbies looked like the hearing students.
00:43:22 -- They just spent a lot of time looking at the display.
00:43:25 -- Some of the newbies looked like the skilled students.
00:43:27 -- They were looking -- and so we were seeing the variation within the groups.
00:43:31 -- But in fact there -- there were some who were spread across.
00:43:35 -- So this just lets us look across these three different groups
00:43:41 -- and see how they divided their attention,
00:43:43 -- where they were deploying this limited attention they have.
00:43:47 -- But it's important to understand
00:43:49 -- that this analysis only asked overall how much time they spent.
00:43:55 -- All right?
00:43:56 -- So the hearing students spent something like 70% of their time looking at the display.
00:44:01 -- But we don't know whether that meant they spent 7 seconds
00:44:05 -- and then 3 seconds, and 7 seconds and 3 seconds.
00:44:08 -- Or whether it meant they spent 5 minutes at a time, and then just looking back very quickly.
00:44:14 -- And so the next thing we considered were the transition probabilities.
00:44:20 -- And so this is almost the opposite question.
00:44:24 -- Without regard to how long they spent in each area we simply asked if you started looking
00:44:31 -- at the instructor, we asked
00:44:34 -- what is the probability that you would go from the instructor to the display,
00:44:38 -- and what is the probability that you would go from the instructor to the interpreter, okay?
00:44:44 -- So dividing the room up into these three regions, instructor, interpreter, display.
00:44:52 -- Looking first at hearing students we said let's take all the cases
00:44:57 -- where they are now starting, looking at the instructor.
00:45:01 -- We said let's look at all the cases where they went from the instructor to the display.
00:45:07 -- And all we did was we counted those up and we said 28% of all the transitions,
00:45:14 -- all the gaze transitions, were from instructor to the display for the hearing students.
00:45:23 -- 12% of all the transitions were from the instructor to the interpreter.
00:45:28 -- We then asked, okay, if you're at the display, 32% were from the display to the instructor.
00:45:40 -- 8% were from the display to the interpreter.
00:45:46 -- If you were at the interpreter, et cetera, and so we filled this in, okay?
00:45:52 -- And this just gives us a map of the transitions.
00:45:56 -- This gives us a map of, given any state in the process,
00:46:03 -- what the likelihood is that you're going to move from one to the next, okay?
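[Editor's note: the counting he describes, each ordered pair's share of all gaze transitions rather than a conditional probability, can be sketched as follows; the function name and labels are ours.]

```python
from collections import Counter

def transition_shares(gaze):
    """gaze: time-ordered region labels, one per fixation or sample.
    Counts each change of region and returns every ordered pair's
    share of all transitions, as in '28% of all transitions were
    instructor to display'."""
    moves = Counter()
    for a, b in zip(gaze, gaze[1:]):
        if a != b:                 # a transition, not a continued dwell
            moves[(a, b)] += 1
    total = sum(moves.values())
    return {pair: n / total for pair, n in moves.items()}
```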
00:46:10 -- And just for presentation here what we did is --
00:46:15 -- all I'm going to do is make the thickness of the line here give you a gauge
00:46:20 -- of how likely these transitions were, okay?
00:46:22 -- So you can see that in this case the bulk
00:46:27 -- of the transitions were the hearing students just going back and forth
00:46:31 -- between the instructor and the display.
00:46:34 -- So let's see.
00:46:38 -- I just put back in this corner now -- remember that the hearing students spent most
00:46:43 -- of their time on the display, they spent about 70% of their time on the display here, okay?
00:46:50 -- But if you look at the transitions here you can see that 60%
00:46:55 -- of the transitions were just back and forth between the display and the instructor,
00:47:00 -- and the remaining 40% were divided fairly evenly among the other paths.
00:47:19 -- Okay? Oops, [Inaudible].
00:47:21 -- Here are the skilled signers.
00:47:23 -- And so the skilled signers now essentially 80% of all the transitions were back and forth
00:47:29 -- between the interpreter and the display.
00:47:33 -- Only 4% of all the transitions were all the way between display and instructor.
00:47:38 -- And about 15% were back and forth between interpreter and instructor.
00:47:43 -- So again this time what we've done is we've completely removed the total amount of time
00:47:48 -- and we're asking only about those transitions.
00:47:52 -- And here are the newbies, where it's almost 15% for every transition.
00:48:00 -- The newbies were just going everywhere.
00:48:03 -- But again, we have to recognize now we just put all of them up here at the same time.
00:48:09 -- Again, I have to recognize that part of this, the newbie case here,
00:48:13 -- was because we're seeing a lot of averaging within that group.
00:48:19 -- For both the skilled and the hearing groups, there was much less variability within the group
00:48:23 -- than we saw for the -- for the newbie group.
00:48:28 -- The next two I'll go through much more quickly
00:48:35 -- because we're done talking about the eye tracker here.
00:48:38 -- The other two, we looked at the difference between lectures presented
00:48:42 -- with a simultaneous communication or a sign language interpreter.
00:48:47 -- And the last one was about lecture speed and position.
00:48:52 -- So in this experiment we looked at SimCom, simultaneous communication, versus interpreter.
00:48:57 -- So -- but again we'll keep on using the same group.
00:49:00 -- So here these were skilled signers.
00:49:03 -- And in the first column here what I'll do is I'll just show the case
00:49:07 -- of simultaneous communication.
00:49:10 -- So these were two skilled -- not just skilled signers,
00:49:15 -- but these were both excellent instructors.
00:49:17 -- They had both received awards for teaching and they talked for a very long time.
00:49:23 -- And so we looked at the skilled signers and saw how they distributed their gaze
00:49:27 -- between the instructor and the display.
00:49:31 -- Here were the newbies, who looked very much like the skilled signers.
00:49:34 -- And the hearing students, who were essentially not that different --
00:49:43 -- they looked very much like the original distribution we had, where they spent more
00:49:47 -- of their time looking at the display.
00:49:51 -- Now in the second case, and just to make clear here --
00:49:55 -- in this case now what we did was we had the same instructor deliver the lecture,
00:50:00 -- voice with no sign.
00:50:02 -- So they delivered the same material, right?
00:50:06 -- But now with an interpreter, and they did not sign for themselves.
00:50:10 -- And what we did was we looked at the gaze again.
00:50:14 -- And here what we did is we just looked at the total amount of time spent.
00:50:18 -- And what we saw -- what I'll do next is I'll come back
00:50:20 -- and I'll sort of compare between these two.
00:50:23 -- But you can see already what happened was the skilled signers followed the sign.
00:50:29 -- They -- the students who had been spending all their time looking at the instructor --
00:50:34 -- sorry, all the time they had spent looking
00:50:36 -- at the instructor simply moved over to the interpreter.
00:50:44 -- The newbies really divided the time, continued to spend quite a bit of time looking
00:50:51 -- at the instructor, even though the instructor was no longer signing.
00:50:56 -- And the hearing students, as we often see, will spend some time looking at the interpreter,
00:51:06 -- but continue to spend the bulk of their time looking at the display.
00:51:13 -- And so really all I've done here is just indicate the change, so the slides,
00:51:21 -- the column on the right, now show the SimCom case in the dark color
00:51:27 -- and, in the lighter color here, the case when there's an interpreter.
00:51:30 -- And just showing how that gaze is redistributed.
00:51:34 -- And so looking at each one you can see here that the skilled signers, as I said before,
00:51:41 -- they just followed where the signs were.
00:51:44 -- And in this case, one of the questions is if what you do is --
00:51:51 -- if all the student does is follow the sign to the interpreter,
00:51:54 -- the question is what information that could have been there with the instructor has been lost.
00:52:02 -- In -- in the cases that we did here there weren't, for example,
00:52:07 -- demonstrations the instructor was doing, but there had been in the first case.
00:52:12 -- But certainly there are cases where there might be information tied directly to the instructor.
00:52:18 -- Here's the case we saw with the newbies, that there was essentially a splitting of that.
00:52:27 -- And with some reduction of information here, it's clear that the amount
00:52:31 -- of time that was spent on the display was reduced,
00:52:36 -- okay, as the division happened there.
00:52:38 -- And then the hearing students, the amount of time went down on the instructor,
00:52:47 -- a little increase in the display.
00:52:49 -- But that was not significant.
00:52:51 -- But mostly some time spent looking at the interpreter.
00:52:58 -- And so finally the one now, what we did here was we had two manipulations.
00:53:02 -- This is one I described a little bit at the beginning of the talk.
00:53:06 -- What we did is we had the instructor and the interpreter right next to each other here.
00:53:11 -- And then moved the position of the Power Point display.
00:53:16 -- So there were three conditions.
00:53:17 -- There was the near condition, the middle where we moved it a little bit farther away,
00:53:23 -- and a far condition where it was all the way.
00:53:29 -- And so in each graph here all I'm -- all I've done is this is for the skilled signers.
00:53:35 -- This is looking at the total amount of time spent looking in each one
00:53:39 -- of these areas for the three conditions.
00:53:42 -- And you can see here that it made no difference at all.
00:53:45 -- And this was not what we predicted.
00:53:49 -- Based on some experiments we had done earlier in a completely different kind of experiment,
00:53:54 -- our prediction was that if we made the cost of looking towards the display more expensive
00:54:02 -- that we would have fewer looks towards the display.
00:54:06 -- And the students were incredibly unaffected by this change.
00:54:17 -- And that was true for the newbies.
00:54:27 -- Now for the hearing students there was a change, and these changes were significant.
00:54:32 -- But nothing that we could make any sense of or had any reasonable hypothesis for.
00:54:37 -- I mean, what we saw was that there was a drop in the middle condition,
00:54:41 -- and then a drop in time towards the instructor with an increase toward the display
00:54:46 -- in the middle condition, and then an increase, again, in the far condition.
00:54:52 -- So while this was statistically a significant difference that showed
00:54:56 -- up in an ANOVA, it was nothing that we could come up with a reasonable story for.
00:55:03 -- Yeah?
00:55:04 -- Have you tried to put the display in the middle?
00:55:10 -- No. We -- we've talked about both changing the order of those, and also changing --
00:55:17 -- because something that actually is --
00:55:18 -- is an important variable in the classroom in our experience is whether the interpreter stays
00:55:25 -- with -- shadows -- the instructor or whether they're separated from them.
00:55:29 -- And we haven't done either of those.
00:55:34 -- The other manipulation we did as part of this experiment was the lecture speed.
00:55:41 -- So I'm going to play these next.
00:55:44 -- They weren't stretched out like this in the real experiment.
00:55:47 -- They were correctly proportioned.
00:55:50 -- But I'm going to play these now and just let you watch this one first.
00:55:55 -- Because like, we get down to it, the Internet really isn't a bunch of hardware,
00:56:01 -- it's not computers, it's not a bunch of wires, it's an agreement.
00:56:05 -- That's all it is.
00:56:05 -- There is no centralized anything, there's nobody in charge.
00:56:09 -- I'm just going to play it again.
00:56:13 -- Because like, we get down to it, the Internet really isn't a bunch of hardware,
00:56:17 -- it's not computers, it's not a bunch of wires, it's an agreement.
00:56:21 -- That's all it is.
00:56:22 -- There is no centralized anything, there's nobody in charge.
00:56:26 -- So this was the -- one of the original lectures that we recorded for Experiment 1.
00:56:33 -- What we did is we manipulated this in a program by increasing the speed.
00:56:38 -- This is 115% of its original speed.
00:56:42 -- So we increased the speed of both the lecture and the interpreter, but we also --
00:56:48 -- when we adjusted this we kept the pitch of the voice the same.
00:56:52 -- So that we didn't -- we didn't get any chipmunking effect.
00:56:56 -- Now I'm going to play the same lecture at 85% speed.
00:57:02 -- -- we get down to it, the Internet really isn't a bunch of hardware, it's not computers,
00:57:10 -- it's not a bunch of wires, it's an agreement.
00:57:13 -- That's all it is.
00:57:14 -- There is no centralized anything, there's nobody in charge.
00:57:21 -- So you can see what we did.
00:57:23 -- We did not just speed up and slow down the instructor
00:57:26 -- and then ask the interpreter to interpret that.
00:57:30 -- Because we wanted the interpreter to simply be sped up and slowed down too.
00:57:36 -- And we picked 115% and 85% after we created a whole range.
00:57:44 -- We picked those as what we felt were the limits of what would still appear natural.
00:57:51 -- We wanted people to be able to look at this and not --
00:57:54 -- and not have it be obvious they were looking at something
00:57:58 -- that was in fast motion or slow motion.
00:58:00 -- And the 115%.
00:58:03 -- Now observers saw both 115% and 85%, but they saw different instructors.
00:58:11 -- So one observer would see Instructor A at 115%, Instructor B at 85%.
00:58:18 -- A different observer would see those in the reverse condition.
00:58:22 -- So nobody ever saw both of them.
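[Editor's note: the counterbalancing he describes can be sketched as a simple assignment. The instructor labels and speed values follow the talk; the observer labels and function name are ours.]

```python
def assign_conditions(observers):
    """Counterbalance lecture speed across observers: odd-numbered
    observers see Instructor A at 115% and B at 85%, even-numbered
    the reverse, so no observer ever sees the same lecture at both
    speeds."""
    plan = {}
    for i, obs in enumerate(observers):
        if i % 2 == 0:
            plan[obs] = {'A': 1.15, 'B': 0.85}
        else:
            plan[obs] = {'A': 0.85, 'B': 1.15}
    return plan
```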
00:58:24 -- When you see these two together -- if I showed you the 85% first I --
00:58:31 -- with a group this size, I don't think anybody would complain that it was too slow.
00:58:35 -- But then if I showed you the 115% it would be really obvious that we sped it up.
00:58:40 -- If we do it the other way, by the time you saw the 85% it seemed very slow.
00:58:44 -- But when we did it between observers with a different lecture it was fine.
00:58:51 -- And again, it didn't make any difference.
00:58:56 -- So now what I've got to the bottom here, again,
00:59:00 -- are the white bars are slow and the gray bars are fast.
00:59:07 -- And so these are the same data above, but each one is divided in two.
00:59:13 -- The one above is all the near data.
00:59:15 -- So that's averaged, the fast and the slow.
00:59:18 -- Now it's broken down individually on the bottom.
00:59:20 -- And you can see that whether the person was presented with the 115%
00:59:26 -- or 85% it made no difference for the skilled, it made no difference for the newbies,
00:59:35 -- and again as we saw before,
00:59:37 -- there were interesting differences showing up for the hearing students.
00:59:41 -- There were again differences in here that showed up,
00:59:44 -- but nothing that made any sense to us.
00:59:51 -- So with those questions that I presented at the beginning,
00:59:56 -- I'll just say that what we saw is that, using these gaze trackers, we
01:00:03 -- were able to map out how people distributed their attention in these conditions.
01:00:10 -- We found that there were no statistical differences
01:00:14 -- between the live and Memorex conditions.
01:00:16 -- That was important to us, because we were able
01:00:18 -- to do all the other experiments using videotaped lectures instead
01:00:22 -- of having to do all of them live.
01:00:25 -- We had to convince ourselves of that, because of course there's a difference between a live interpreter
01:00:32 -- and a live instructor in front of a class and projecting these two-dimensionally on a wall;
01:00:38 -- we weren't sure, up front, that we could do that.
01:00:41 -- But there weren't significant differences either in the gaze performance
01:00:45 -- or in the learning gauged by pretest and posttest, in each case.
01:00:50 -- And I showed the piece about simultaneous communication versus interpreted.
01:00:54 -- And we were shocked that over the range that we tested the students weren't affected
01:01:02 -- by the speed or the position of what was there.
01:01:06 -- So thank you for putting up -- I went a little bit past 5.
01:01:11 -- Happy to take any questions, and the question I've got for you to think about is
01:01:15 -- if you could monitor the attention of students in a classroom what question would you ask?
01:01:23 -- So I'm wondering what kind of movement -- transition -- makes a better learner.
01:01:36 -- Have you tested the grasp of the content in the experiment?
01:01:45 -- It's a critical question and the answer is that nothing that we have seen in the --
01:01:54 -- I mean, the fundamental question is can we --
01:01:56 -- have we found a correlation between a particular gaze performance, a particular strategy in terms
01:02:01 -- of gaze and the performance of learning, and the answer is no,
01:02:04 -- we have not found such a correlation.
01:02:07 -- [ Background noise ]
01:02:26 -- Hi there.
01:02:27 -- I'm not sure if I'm allowed to change this question, but if I could I think that I would --
01:02:34 -- would like to sort of shift it around so that we're tracking the eye gazes of the teacher?
01:02:39 -- Would that be possible?
01:02:40 -- To have the instructor be monitored.
01:02:45 -- And so that they are looking at -- so we're examining what the students --
01:02:49 -- what the teacher is doing when they're looking at the students?
01:02:53 -- Because I've always kind of wondered about skilled signers and their tendency to witness,
01:03:01 -- so to speak, a situation emerge and, you know, be able to divide their attention
01:03:11 -- to the students in a class so that they're facilitating learning by --
01:03:17 -- by, you know, using those resources.
01:03:19 -- Where unskilled signers generally miss what's going
01:03:22 -- on in the periphery with other deaf students.
01:03:26 -- So it's kind of an interesting sort of twist on your question.
01:03:28 -- Would you think there would be a difference between a deaf
01:03:31 -- or skilled signer's cognitive abilities or, you know, eye tracking,
01:03:37 -- when they're teaching a class as opposed to somebody who isn't a skilled signer.
01:03:40 -- That would be very interesting.
01:03:45 -- We have never done that, and we haven't even talked about doing that.
01:03:49 -- One of the interesting issues of course is that when you put an eye tracker on somebody we have
01:04:01 -- to think about whether that affects what they're doing with their gaze,
01:04:07 -- and whether that affects what other people are doing.
01:04:13 -- What we believe is that when you put an eye tracker on somebody they're conscious of it
01:04:19 -- for the first few minutes and every time you ask them to check their calibration.
01:04:26 -- Because when you first put it on you have to tell them to either look at a set of points
01:04:31 -- or follow a laser pointer or something like that.
01:04:36 -- It's very important to us that when we do the calibration we never tell the person
01:04:41 -- to hold their head still because in our experience if you start by saying, okay,
01:04:46 -- we're doing a calibration now, hold your head still,
01:04:49 -- it doesn't matter what you say after that.
01:04:51 -- Doesn't matter if you say okay, the calibration is over, go ahead and act naturally.
01:04:55 -- People for the next hour do this.
01:04:59 -- And they never forget it.
01:05:01 -- So we go to great lengths to come up with calibration schemes
01:05:05 -- where we never tell them what to do with their head.
01:05:09 -- We don't mind telling them to look at a particular point.
01:05:13 -- I'm trying to imagine a classroom where even though this eye tracker is just a pair
01:05:20 -- of glasses now and looks much less crazy than many eye trackers,
01:05:28 -- I'm trying to imagine a classroom where the teacher wears this
01:05:32 -- and you don't have students coming up looking at the teacher.
01:05:39 -- We -- we'd have to -- it'd be --
01:05:46 -- I mean -- I mean, I'm just like, yeah, I was looking at elementary age students
01:05:54 -- who wouldn't be distracted by a Power Point too.
01:05:56 -- I think that would probably create a different atmosphere too,
01:06:00 -- especially considering their age groups.
01:06:02 -- But it would have to be repetitive.
01:06:05 -- You'd have to use it a lot until the students were able to get used
01:06:08 -- to the teacher wearing these silly glasses.
01:06:10 -- But just an idea.
01:06:11 -- But that would probably only take a week, right?
01:06:13 -- If the teacher is willing to wear it for five days,
01:06:17 -- I would bet that by the third day the kids would stop paying attention.
01:06:24 -- Interesting.
01:06:27 -- [ Background noise ]
01:06:36 -- I had to step out so I missed most of the data.
01:06:40 -- But it seems like the demand for the deaf student is to pay most
01:06:46 -- of the attention to the interpreter.
01:06:48 -- So it seemed to me that this work
01:06:51 -- might have a lot to say about how we construct our Power Point presentations,
01:06:56 -- making them visually rich but easily comprehensible, with less need for attention.
01:07:02 -- I mean, from a pedagogical point of view, Power Point has become
01:07:05 -- so prevalent that there are probably experiments that could be done
01:07:10 -- that vary the density of the English words on the slide
01:07:15 -- and the type of visual displays that go onto the slide, to allow this sharing
01:07:21 -- of attention to yield some kind of maximum.
01:07:24 -- Goes back to Sam's question about what are the pedagogical implications of knowing
01:07:29 -- that this kind of shared, distributed attention span is taking place
01:07:33 -- and has to take place in the classroom.
01:07:37 -- We did one experiment
01:07:42 -- where, before a lecture, we gave students either nothing, the slides,
01:07:52 -- or the slides in jumbled order.
01:07:58 -- We were wondering whether just having had access to some
01:08:01 -- of this information beforehand would make a difference.
01:08:06 -- And neither one helped.
01:08:08 -- Now this was a 15-minute introductory lecture.
01:08:15 -- So we haven't tried this in a semester-long class.
01:08:19 -- So I don't know whether giving students the slides ahead of time
01:08:23 -- for a semester-long class would make a difference.
01:08:26 -- I can tell you my -- this is being recorded -- I can tell you my snotty prejudice.
01:08:33 -- My snotty prejudice is my students -- if I gave them all my slides for a semester-long class,
01:08:40 -- they wouldn't look at every single slide before every single class.
01:08:45 -- I know that there's a group at NTID that's working on technology to help
01:08:54 -- with this, and the simplest thing they've done is a little timer
01:08:58 -- that you put on each Power Point slide that's supposed to remind you
01:09:02 -- that when the slide goes up you should wait 10 seconds before you start talking.
01:09:08 -- And it's incredibly frustrating, and I went to a demonstration
01:09:13 -- where the people making the slides couldn't wait the 10 seconds for the little light
01:09:19 -- to turn green before they started talking, even during the demonstration.
01:09:23 -- It's -- it's probably one of the most difficult things to do.
01:09:36 -- I just taught in a class called Frontiers of Science.
01:09:44 -- And what's interesting about the class --
01:09:46 -- what's supposed to be interesting about the class is it's taught
01:09:49 -- by a different professor every week.
01:09:52 -- And so people are supposed to come and just talk about their research, and it's an honors class,
01:09:58 -- and so it's just supposed to expose these honors science students to a bunch of research.
01:10:04 -- What's interesting for me is it was presented in a classroom with four projection screens.
01:10:10 -- And so I had to create all new slides,
01:10:12 -- because it's a single Power Point presentation, but spread across four screens.
01:10:19 -- And it gave me the option, no, it forced me to think about what to put on four screens.
01:10:27 -- And what I ended up doing in almost every single case is,
01:10:33 -- when I had an introduction slide, that introduction slide went up here, and then
01:10:38 -- for the next slide it moved all the way to the right
01:10:41 -- and stayed there while I put up the next few slides.
01:10:45 -- And I think it made a huge difference.
01:10:49 -- I first thought of this as just a huge imposition that I had to figure out what to do
01:10:52 -- with four screens in front of a classroom.
01:10:55 -- But the fact that it gave me a place to put some history I think made a huge difference.
01:11:01 -- And I know talking to deaf students in my classes, what they seem most frustrated
01:11:09 -- by is the slide that has gone away.
01:11:14 -- And not so much that I start talking over something.
01:11:20 -- But that by the time they want to go and look at something, that one's gone.
01:11:25 -- And so I actually think now that even if we could just do something
01:11:31 -- where you have multiple screens, automated to the point
01:11:36 -- where each time I advance a slide, what's on the screen here moves over --
01:11:40 -- remember in the old days when we had chalkboards and the professor walked around the room?
01:11:45 -- I'd come back here and start erasing,
01:11:48 -- and I always had four chalkboards' worth of history.
01:11:52 -- That would be so trivial to do in Power Point: each time I pushed the button, instead
01:11:56 -- of erasing the slide, it would move it over one.
01:11:59 -- I wonder how much that would help?
01:12:05 -- It demands gaze over a wider area, but I think having
01:12:09 -- that history available might help.
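[Editor's note] The "chalkboard history" idea described here -- each slide advance shifts the previous slides across the other screens instead of erasing them -- can be sketched as a simple data structure. This is a minimal illustration, not anything presented in the talk; the class name `SlideHistory`, the `advance` method, and the four-screen count are assumptions for the sketch.

```python
from collections import deque

class SlideHistory:
    """Keep the last few slides visible, like a row of chalkboards:
    advancing a slide shifts the previous ones over instead of erasing them.

    Hypothetical sketch -- names and screen count are illustrative."""

    def __init__(self, n_screens=4):
        # Leftmost entry is the current slide; older slides sit to the right.
        # deque(maxlen=...) silently drops the oldest slide off the far end.
        self.screens = deque(maxlen=n_screens)

    def advance(self, slide):
        """Show a new slide and shift the history over by one screen."""
        self.screens.appendleft(slide)
        return list(self.screens)
```

With four screens, the current slide always appears on the leftmost screen and up to three previous slides stay visible beside it, which is exactly the "four chalkboards' worth of history" behavior.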
01:12:12 -- [ Background noise ]
01:12:22 -- I'm curious how much you took into account outside stimuli in the classroom.
01:12:27 -- Like, I mean, as a student I'll look at the clock, I'll look at my peers,
01:12:30 -- I'll look down if I'm writing on a paper.
01:12:33 -- Were those outside stimuli not included in the mock classroom,
01:12:35 -- or was this something you ignored in your data?
01:12:41 -- Peers weren't included.
01:12:44 -- But I think everything else was.
01:12:47 -- Because it was actually in a real room -- so, the examples you gave:
01:12:51 -- there was a clock there.
01:12:53 -- And in fact, there was other stuff,
01:12:54 -- because there were projectors, and there was
01:12:57 -- plenty of other stuff to look at in the room.
01:13:04 -- So there was an "other" category.
01:13:09 -- But people -- people spent most of their time looking where they were supposed to look.
01:13:16 -- Now, was that because we paid them to be in this study,
01:13:21 -- or was that because they knew they were taking a post-test?
01:13:23 -- Who knows.
01:13:25 -- [ Background noise ]
01:13:34 -- Okay, so this may or may not be related, but if you brought the display closer --
01:13:45 -- I mean, the reason I'd like to ask is if we were to look at --
01:13:49 -- like me as a signer, sometimes I look at my own hands while I'm signing.
01:13:52 -- So I'll be signing, I'm looking at the students,
01:13:55 -- I'm looking at the display, but I'm also looking at my hand.
01:13:57 -- And I'd kind of like to know where they're looking.
01:14:00 -- From further away they may not be making those eye movements
01:14:06 -- to my hand or to my eyes.
01:14:08 -- But I want to know what proximity the eye-tracking
01:14:10 -- equipment that you have can work at.
01:14:13 -- Like, at what proximity can we determine where
01:14:16 -- these people are looking?
01:14:18 -- It will work over all distances.
01:14:20 -- There is a parallax error, because the observer is obviously looking along the line
01:14:30 -- of sight of their eye,
01:14:32 -- and the scene camera is just above their eye.
01:14:35 -- The design of this is -- we've designed it so that that scene camera sits just above your eye.
01:14:42 -- Commercial eye trackers usually put the scene camera in the middle, between the two eyes,
01:14:47 -- to get a sort of compromise view between the two eyes.
01:14:50 -- The problem is, then, if you calibrate at one distance and then track
01:14:55 -- at a different distance -- so for example, if I calibrate at one distance and then look
01:14:58 -- at my own hands, I'll get an error both vertically and horizontally.
01:15:04 -- We put that scene camera over the eye that we track so that
01:15:08 -- as you're changing distance you only get an error vertically and not horizontally.
01:15:14 -- That error is predictable, and I can calculate it.
01:15:21 -- So if I know that the person is looking at their hands, I can correct for that offset.
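[Editor's note] The vertical parallax offset described here -- calibrate at one distance, then fixate at another, with the scene camera a fixed height above the tracked eye -- follows from simple geometry. The sketch below is an illustration of that calculation, not the speaker's actual correction code; the function name and the example distances (3 cm camera offset, 3 m calibration, 40 cm to the hands) are assumptions.

```python
import math

def parallax_error_deg(camera_offset_m, calib_dist_m, gaze_dist_m):
    """Vertical angular error (degrees) when the scene camera sits
    camera_offset_m above the tracked eye, the tracker was calibrated
    at calib_dist_m, and the wearer now fixates at gaze_dist_m."""
    # Angle from the camera down to a point straight ahead of the eye,
    # at the current gaze distance vs. the calibration distance;
    # the residual error is the difference between the two.
    angle_now = math.atan2(camera_offset_m, gaze_dist_m)
    angle_calib = math.atan2(camera_offset_m, calib_dist_m)
    return math.degrees(angle_now - angle_calib)

# Example (assumed values): camera 3 cm above the eye, calibrated
# at 3 m, wearer looks down at their own hands at 40 cm.
error = parallax_error_deg(0.03, 3.0, 0.4)
```

Because the camera sits directly above the tracked eye rather than between the eyes, the offset appears only in the vertical direction, which is exactly why it can be predicted and subtracted once the viewing distance is known.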
01:15:27 -- I didn't talk about it here, but this is an interesting --
01:15:32 -- I mean, you have to look at a word to be able to read it.
01:15:39 -- You don't have to look at the hands to read a sign.
01:15:44 -- I had a high school intern over the summer and I gave her the task of looking at one
01:15:52 -- of these video tapes and I said I want you to tell me
01:15:55 -- when the interpreter starts signing Internet and when she ends signing Internet.
01:16:01 -- And I told her the sign for Internet was this.
01:16:05 -- And -- oh yeah, actually -- that section there was "the Internet is an agreement."
01:16:15 -- And I wanted her to parse that and tell me when each sign started and when it ended.
01:16:21 -- And one of the interesting parts about that -- because what we want to do is then go and look
01:16:25 -- at the pretest and post test and find out when people got a question right
01:16:30 -- and wrong and where they're looking.
01:16:32 -- Well, there are issues, like how much you need to watch -- once the person --
01:16:37 -- if I'm watching the interpreter when the interpreter signs this, and I watch this much
01:16:42 -- of it, are there any other signs the person could be doing?
01:16:45 -- Do you really have to watch this whole thing to get that that's "Internet"?
01:16:49 -- And once the person has done this, is there any other sign it could be?
01:16:53 -- Do I really have to watch all the way to "think-same" before I have "agreement"?
01:16:59 -- A study we haven't done, but need to do, is to understand how far
01:17:09 -- in the periphery I can understand signs clearly.
01:17:16 -- Because, for example, when you watch somebody watching an interpreter, no skilled signers
01:17:26 -- are watching her hands when she is signing.
01:17:31 -- They are uniformly watching her face.
01:17:35 -- Some of the newbies, people who are still learning sign language are staring at the hands.
01:17:42 -- And sometimes -- I mean, if we're standing this close together, for many,
01:17:47 -- many signs a deaf person can be looking at me
01:17:51 -- and still getting these signs.
01:17:58 -- Finger spelling, probably not.
01:18:00 -- Although with context, maybe.
01:18:03 -- And nobody has published a good study that talks about what kind
01:18:10 -- of information can be captured at different positions.
01:18:14 -- Because the visual system is exquisitely tuned in the periphery for motion perception.
01:18:24 -- That's what we're really good at.
01:18:26 -- If I ask you to judge the position of something a little bit behind you
01:18:33 -- and it's standing still, you won't even know that it's there.
01:18:35 -- But if it wiggles just a little bit, you can actually see something behind you.
01:18:40 -- And so what we don't know yet is what kind of motion is perceptible
01:18:47 -- at different positions in the field.
01:18:50 -- We did a pilot study last year trying to look at people with different levels of expertise,
01:18:58 -- where we didn't have people reading signs, we had people try to just tell the difference
01:19:04 -- between different kinds of motions.
01:19:07 -- So they were non-sign motions.
01:19:10 -- And we were hoping to see whether there would be differences between skilled signers
01:19:17 -- and hearing non-signers, because there are published differences
01:19:22 -- in peripheral attention tasks.
01:19:27 -- But I'd really call the tasks
01:19:33 -- that have been published non-tasks.
01:19:36 -- So we're wondering if something would show up.
01:19:37 -- And we didn't see anything significant in the differences there.
01:19:40 -- But that's something we need to look at.
01:19:43 -- And so your question made me think of that, because, you know, the question of how much --
01:19:49 -- when you say sometimes you look at your hands when you're signing,
01:19:52 -- I'd like to watch you do that sometime, because especially
01:20:00 -- when you're signing, you know, how much of your periphery is -- I wonder --
01:20:05 -- I'm sure I watch myself sign sometimes,
01:20:08 -- but that's when I'm asking, what's the sign for -- um -- so.
01:20:18 -- [ Background noise ]
01:20:27 -- My question is actually related to that, and to how the use of gestures by the teacher
01:20:33 -- in the classroom, standing next to the interpreter, affected eye gaze.
01:20:37 -- Because I notice now I watch the interpreter to help myself pick up sign,
01:20:41 -- but when you use gestures to show the placement of the camera on your head,
01:20:44 -- or if you interject some signs that you know as well, my gaze shifts between both,
01:20:50 -- even though I'm hearing. But I'm curious how that would affect the deaf students
01:20:54 -- and their gaze too -- if they see a hand moving, does their vision go to the instructor?
01:20:59 -- And you did mention that one of the men did use a gesture with pouring the sand bag.
01:21:03 -- And I was just wondering if any of that data was analyzed to look at how much the gesturing
01:21:07 -- of the professor influenced the eye gaze of the students?
01:21:11 -- Yes. Large motions often drew the attention of the student and drew eye movement.
01:21:19 -- In the IT lecture, the lecture by the female professor -- again, because
01:21:27 -- in the recorded lectures, the Memorex lectures, we had the opportunity to look
01:21:32 -- at some stereotyped things that happened over and over again --
01:21:35 -- there was one point where I think she started to go toward her laptop,
01:21:41 -- as though she were going to change the slide, and then she changed her mind.
01:21:45 -- And that beginning motion, where she wasn't doing anything other than the beginning
01:21:51 -- of a motion, often drew the attention of a quick gaze.
01:21:55 -- Now, it takes only a fraction of a second
01:22:00 -- to make an eye movement, make a decision
01:22:02 -- that there's nothing useful there, and come back again.
01:22:05 -- And most signs are extended enough that you can move your gaze away from the interpreter,
01:22:12 -- come back again, and probably not miss anything.
01:22:15 -- So it was -- it was quite frequent that that would happen.
01:22:18 -- When I watch these tapes now, I try to think about making the decision
01:22:25 -- about when it's okay to break away from the interpreter, especially to read text on a slide.
01:22:31 -- And we talked about, you know, should we think about how much text could be --
01:22:35 -- should be on an individual slide, how much information should be there.
01:22:38 -- To me that's -- that's the part that I can't wrap my head around.
01:22:43 -- I can't imagine being able to make that kind of decision on the fly.
01:22:48 -- I think I'd constantly be second-guessing myself about whether it's okay to be gone this long.
01:22:53 -- I think I'd be making many more transitions than the experienced signers did.
01:22:59 -- Also, with that, I was curious how much information the students
01:23:06 -- who are focusing on the interpreter might lose
01:23:09 -- that the teacher who was speaking has to offer.
01:23:11 -- If the teacher is representing something visually on their hands
01:23:13 -- that may not be the same representation in ASL -- if the teacher has a better way
01:23:18 -- of showing it that isn't specifically ASL --
01:23:22 -- the students would get the linguistic information
01:23:25 -- from the interpreter, but perhaps the visual representation of how the teacher shows
01:23:30 -- it on their hands would still be useful to the students,
01:23:33 -- and they'd miss that information.
01:23:38 -- I'm sure that's true.
01:23:40 -- Especially with the variation in skill of the interpreter.
01:23:43 -- I mean, just -- I know enough sign language so that I can tell the difference.
01:23:49 -- I know when I have a really good interpreter, and I know when I don't.
01:23:54 -- And it's -- it's frustrating to me when I know how much a student is missing.
01:24:06 -- And I can see the students modulate, you know.
01:24:12 -- If I'm in a class where I normally have the same interpreter who comes each day to that class,
01:24:19 -- if that interpreter is sick and there is a sub, you know, there are students
01:24:23 -- who normally don't spend any time looking at me who will all
01:24:27 -- of a sudden spend this lecture looking at me.
01:24:31 -- And -- because they're modulating and making different decisions
01:24:35 -- about where they think they'll get the most information.
01:24:39 -- [ Background noise ]
01:24:49 -- Okay.
01:24:49 -- Okay, so I think we're going to have to stop here.
01:24:52 -- All right.
01:24:53 -- Thank you.
01:24:55 -- A production of Academic Technology, e-Learning, and Video Services.
01:25:00 -- Copyright 2008, Gallaudet University, all rights reserved.