As designers, we spend a lot of time crafting our images or graphics, but how much do we really know about how people look at them?

Greg Edwards uses eye tracking technology to understand how our eyes move over computer screens.

He helped create the Advanced Eye Interpretation Project at Stanford University, and is the CEO and founder of Eyetools, Inc. in San Francisco. Most of the work that Dr. Edwards currently does at Eyetools is to help clients understand how to make their websites communicate more effectively through a better understanding of viewer behavior.

Eye tracking tools have come a long way since the first pioneering work decades ago (see the GJ post on the 1967 Yarbus eye tracking studies).

A typical hardware setup (this example is from the lab at the University of California, San Diego) includes a non-invasive head-mounted system.


With eye tracking technology, scientists can carefully follow the saccades (jumps) and fixations of a subject’s eyes as they review text and images on a computer screen. This graphic, sometimes called a scanpath or gaze trace, shows the sequence and position of an individual’s center of attention.
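For the technically curious, here is a minimal sketch of how raw gaze samples might be grouped into fixations using a simple dispersion threshold, with everything between fixations treated as a saccade. The sample format, pixel and timing thresholds, and function name are my own illustration, not the actual software used by these labs.

def detect_fixations(samples, max_dispersion=25.0, min_duration_ms=100):
    """Group consecutive gaze samples into fixations.

    `samples` is a time-ordered list of (timestamp_ms, x, y) tuples.
    A window of samples counts as a fixation when the points stay within
    `max_dispersion` pixels (bounding-box width + height) for at least
    `min_duration_ms`; the gaps between fixations are the saccades.
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while the bounding box stays small enough.
        while j + 1 < len(samples):
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration_ms and j > i:
            window = samples[i:j + 1]
            cx = sum(s[1] for s in window) / len(window)
            cy = sum(s[2] for s in window) / len(window)
            fixations.append({"x": cx, "y": cy, "duration_ms": duration})
            i = j + 1
        else:
            i += 1
    return fixations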


Scientists can also record the gaze behavior of a large group of people to find out what part of the design attracts the eyes the most, creating what’s known as a heatmap. The areas receiving the most attention are indicated in red and yellow. Areas receiving less attention are mapped in blue or left dark.
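Conceptually, a heatmap is just the accumulation of many viewers’ fixations, each spread into a soft blob and weighted by how long the viewer dwelled there. Here is a rough sketch of that idea in Python; the data format and smoothing value are assumptions for illustration, not any particular vendor’s method.

import numpy as np

def gaze_heatmap(fixations, width, height, sigma=40.0):
    """Aggregate fixations from many viewers into one attention map.

    Each fixation contributes a Gaussian blob weighted by its dwell time.
    After normalizing, high values correspond to the hot (red/yellow)
    areas and low values to the cold (blue/dark) areas of the heatmap.
    """
    yy, xx = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=float)
    for f in fixations:  # f = {"x": pixels, "y": pixels, "duration_ms": ms}
        d2 = (xx - f["x"]) ** 2 + (yy - f["y"]) ** 2
        heat += f["duration_ms"] * np.exp(-d2 / (2.0 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat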

The technology can also record the activity of the hands on the keyboard and mouse and correlate it with the gaze data.

Dr. Edwards and his team at Stanford were able to use this information to infer the mental state of the computer user. They called their technology “the eye interpretation engine.”

You can make basic inferences about mental states from this data. There’s a clear difference between “reading,” “scanning,” and “searching,” for example. Another discovery is that people look at banner ads even though they don't click on them.

As Dr. Edwards puts it, “the eye interpretation engine parses eye-position data into higher-level patterns that can then be used to infer a user's mental state or behavior.”
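The eye interpretation engine itself is proprietary and patented, so the following is only a toy illustration of the general idea: reading tends to produce short, orderly, mostly left-to-right jumps with steady dwell times, while scanning and searching produce longer, more scattered jumps. The thresholds and labels here are my own guesses for the sake of the sketch, not Dr. Edwards’ actual rules.

def classify_gaze_behavior(fixations):
    """Toy heuristic for parsing gaze into a higher-level pattern.

    `fixations` is a time-ordered list of dicts with "x", "y", and
    "duration_ms" (e.g. the output of detect_fixations above).
    Returns a rough label: "reading", "scanning", or "searching".
    """
    if len(fixations) < 3:
        return "searching"
    dxs, dys = [], []
    for a, b in zip(fixations, fixations[1:]):
        dxs.append(b["x"] - a["x"])
        dys.append(b["y"] - a["y"])
    rightward = sum(1 for dx in dxs if 0 < dx < 150) / len(dxs)
    mean_jump = sum((dx ** 2 + dy ** 2) ** 0.5
                    for dx, dy in zip(dxs, dys)) / len(dxs)
    avg_dwell = sum(f["duration_ms"] for f in fixations) / len(fixations)

    if rightward > 0.6 and mean_jump < 150 and avg_dwell > 150:
        return "reading"    # small, orderly, left-to-right steps
    if mean_jump > 300 and avg_dwell < 150:
        return "searching"  # long jumps, brief stops
    return "scanning"       # something in between

Feeding the fixation list from the first sketch into this function would produce one of the three labels for a viewing session; a real engine would, of course, use far richer patterns than these few thresholds.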

I asked Dr. Edwards whether we can tell from scanpath data if a subject is just looking at the style of a type font rather than reading the text. He replied:

“We can tell if a graphic designer is looking at the style of a type font or reading because the behavior changes -- looking at the font keeps the eyes localized in areas longer than would be natural as they examine the font, or the eye movement wouldn't be consistent since they would be looking at features of the font as the driving factor rather than the text itself. Now, could someone purposefully fool this to behave as if they were reading while they were actually examining the font? Yes, if one consciously did that. Would it occur naturally? No.”


I also wondered if it is possible to develop higher levels of inference about the cognition behind eye behavior, to know not merely where someone is looking, but what they’re thinking when they’re looking at it.

For example, you might look at this woman’s red jacket and think that it doesn’t fit her right, and I might look at the same red jacket and wonder where she bought it.

At the present time, Dr. Edwards told me, we cannot make such conclusions from the data. The purpose of his original patent work was not to determine what people were thinking, but to determine their mental state and current behavior—are they searching, examining, spacing out—which is different from thinking.

“You can see someone initially checking out the lay of the land of an unfamiliar scene, and you can see when they narrow in to focus on particular areas -- these are behavioral shifts that often happen very quickly and unconsciously -- people are not often able to accurate self report these. You can tell these with the scanpath data. You can't tell how they feel without some other means.”

It seems to me that this would be a very interesting area for future research, especially if eye tracking and keyboard/mouse data were combined with functional MRI (fMRI) data, which shows where activity is localized within the brain in real time.
------------
For more on fMRI data, check out the previous GurneyJourney post on Neuroaesthetics.
Eye Interpretation Project, link.
Wikipedia article on eye tracking, link.
Eyetools blog, link.
Thanks to Dr. Edwards.