Designing User Experiences with Eye Tracking

In recent years, we have witnessed a revolution in eye tracking technologies. Eye trackers that used to cost tens of thousands of dollars and required awkward head-mounts and convoluted calibration procedures now cost less than a few hundred dollars and are simple to set up and easy to use. As the technology decreases in size and cost, it is easy to envision a world in which eye trackers ship by default with interactive appliances, just as any phone or laptop now comes with an integrated webcam.

This context creates a need for principles to guide the design of interactive systems that take input from the eyes. This talk will explore the properties of gaze that make it such a compelling way of interacting with computers and mobile devices.

Eduardo Velloso – Designing User Experiences with Eye Tracking

Modern eye trackers are relatively cheap, light, and easy to use. But in popular culture, eye tracking is generally portrayed as something used only for evil.

Eduardo is not talking about heatmaps or eye tracking as assistive technology, but rather about how eye tracking can enhance the experience when you are also using your hands (manual input).

The main message is that the eyes work in a substantially different way to our hands; designing for gaze-based interaction is a different game. While gaze is very fast and inextricably linked to attention, it is also inaccurate and traditionally requires a lot of calibration.

Human vision relies heavily on foveal vision, the central area of our field of view. At best, eye tracking can control an area roughly the size of your thumb at arm's length.
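As a back-of-the-envelope check (my illustration, not from the talk): a thumb at arm's length subtends roughly two degrees of visual angle, and consumer eye trackers typically have around a degree of error. A quick sketch of what that means for on-screen target sizes, assuming an illustrative 60 cm viewing distance:

```python
import math

def size_subtended_cm(angle_deg: float, distance_cm: float) -> float:
    """On-screen size covered by a given visual angle at a viewing distance."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# ~1 degree of tracker error at a 60 cm viewing distance means gaze
# targets should be at least a centimetre or so across.
print(f"{size_subtended_cm(1.0, 60):.2f} cm")   # -> 1.05 cm

# A thumb at arm's length (~70 cm) subtends roughly 2 degrees:
print(f"{size_subtended_cm(2.0, 70):.2f} cm")   # -> 2.44 cm
```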

Challenge 1: inaccuracy

You can work with gaze as additional input: distinguishing gaze from touch allows different things to happen depending on whether you are looking at your finger (or a stylus) or looking elsewhere. Looking at the toolbar while touching selects a new colour; looking at the touch point draws. The gaze point does not need to be as fine as the touch point.
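A minimal sketch of that gaze-plus-touch split. The routing logic is my illustration of the idea, not code from the talk; `Point`, `toolbar_item_at`, and the 150 px threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def distance(a: Point, b: Point) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

# Gaze only needs to be coarse: "near the finger" vs "somewhere else".
GAZE_NEAR_TOUCH_PX = 150  # illustrative threshold, not from the talk

def on_touch(touch: Point, gaze: Point, toolbar_item_at) -> str:
    """Route a touch event based on where the user is looking."""
    if distance(gaze, touch) < GAZE_NEAR_TOUCH_PX:
        return "draw"                    # eyes on the finger: direct manipulation
    item = toolbar_item_at(gaze)         # eyes elsewhere: gaze picks the target...
    if item is not None:
        return f"select {item}"          # ...e.g. a new colour from the toolbar
    return "ignore"
```

Note that the touch point drives the precise action while gaze only switches modes, which is exactly why its inaccuracy stops mattering.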

Challenge 2: calibration

Calibration is boring, slow, and not realistic for many scenarios. Using relative movement negates the need for calibration: our eyes move smoothly when following a moving object (smooth pursuit). Example: a display with moving items that can detect which one you are looking at.
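One common way to implement this kind of pursuit-based selection is to correlate how the raw (even uncalibrated) gaze signal moves with how each on-screen item moves, and select the best match. A sketch under that assumption, with an illustrative correlation threshold:

```python
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.corrcoef(a, b)[0, 1])

def match_target(gaze_xy: np.ndarray,
                 targets_xy: dict[str, np.ndarray],
                 threshold: float = 0.8) -> str | None:
    """gaze_xy and each target trajectory are (N, 2) arrays over the same time window."""
    best_name, best_score = None, threshold
    for name, traj in targets_xy.items():
        # Correlate x against x and y against y; the gaze must follow
        # the target on both axes to count as a match.
        score = min(pearson(gaze_xy[:, 0], traj[:, 0]),
                    pearson(gaze_xy[:, 1], traj[:, 1]))
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None if nothing is being followed closely enough
```

Because only the correlation of movements matters, the absolute gaze coordinates can be arbitrarily offset, which is what makes the technique calibration-free.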

Challenge 3: multiple roles

Your eyes are already in use, looking at things! So you need to know when people actually intend to do something, as opposed to just looking around (the classic "Midas touch" problem of gaze input).
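One simple way to separate intent from casual looking is dwell time: only act when gaze rests on a target for long enough. A minimal sketch; the 600 ms threshold is an illustrative assumption, not from the talk:

```python
import time

DWELL_SECONDS = 0.6  # illustrative; real systems tune this per task

class DwellDetector:
    def __init__(self):
        self.target = None
        self.since = 0.0

    def update(self, target_under_gaze: str | None,
               now: float | None = None) -> str | None:
        """Feed the currently gazed-at target each frame; returns it once dwelled on."""
        now = time.monotonic() if now is None else now
        if target_under_gaze != self.target:
            # Gaze moved to a new target (or away): restart the timer.
            self.target, self.since = target_under_gaze, now
            return None
        if self.target is not None and now - self.since >= DWELL_SECONDS:
            self.since = float("inf")  # fire once per dwell, not every frame
            return self.target
        return None
```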

Our gaze is intensely powerful in human relationships. Compare the ways new couples glance shyly at each other, versus the deep gaze of a couple in love, or the distant and separate gazes after a fight.

Gaze awareness adds a whole new element to interactions like games – when you can see what your opponent is looking at, it can reveal their strategy.

Takeaways:

  • designing for gaze requires new ways of thinking about input
  • use gaze in a supporting role
  • use movement to avoid calibration
  • embrace the nature of visual attention

@ed_velloso | eduardovelloso.com