The past two days have been incredible, in no small part thanks to a terrific writeup by the folks at TechCrunch. It’s been great to put our product out in front of people and start getting feedback from the community. One theme that comes up a lot is the accuracy of our eye-tracking software, so I thought I’d take some time to address that here.
Where am I looking?
There are a number of ways of measuring accuracy in eye tracking, but ultimately they’re concerned with the same metric: how far is the calculated focus of the user’s gaze from where they were actually looking? You can delve into this even further by measuring the drift – the amount by which the accuracy degrades over time.
A good hardware eye tracker like the Tobii T60 reports around 0.5 degrees of accuracy and 0.3 degrees of drift, for a combined error of roughly 0.8 degrees. If you’re sitting 24″ from the screen, a little trigonometry (tan(0.8°) × 24″) shows that you’ll be off by about a third of an inch. The T60 has a pixel density of about 100 pixels per inch, so you end up with about 35 pixels of error.
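If you want to redo this trigonometry for your own setup, it fits in a few lines. This is just an illustrative sketch (the helper name and signature are mine, not anything from our codebase):

```python
import math

def gaze_error_pixels(error_degrees, viewing_distance_in, pixels_per_inch):
    """Convert an angular gaze error into an on-screen pixel error.

    error_degrees        -- total angular error (accuracy plus drift)
    viewing_distance_in  -- eye-to-screen distance, in inches
    pixels_per_inch      -- display pixel density
    """
    # tan(angle) * distance gives the offset on the screen plane, in inches
    error_inches = math.tan(math.radians(error_degrees)) * viewing_distance_in
    return error_inches * pixels_per_inch

# Tobii T60 figures from above: 0.5° accuracy + 0.3° drift, 24″ away, ~100 ppi
print(round(gaze_error_pixels(0.5 + 0.3, 24, 100)))  # prints 34, i.e. about 35 pixels
```

Plugging in a different viewing distance or pixel density lets you translate any vendor’s degrees-of-accuracy spec into pixels on your own display.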
Testing on a MacBook Pro, we found that our software currently achieves an error of less than 70 pixels. The MBP has a higher pixel density than a standard display – 128 pixels per inch – so that error works out to a little more than half an inch, and on typical screens we can often do much better. As you might expect, error in an eye-tracking study improves under good lighting conditions and worsens with poor lighting or excessive head movement (which we take some steps to mitigate, see below), but these figures are for a typical study.
We have a couple of projects underway at the moment to bring this error down even further using a few machine learning techniques (look for more posts on this subject!). If you’re doing a web usability study, though, knowing where someone looked plus or minus 70 pixels is generally enough for you to tell what component they’re inspecting. And when you aggregate data from a larger number of users, you get predictably better results…
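To see why aggregation helps, here’s a toy simulation – emphatically not our production pipeline. It assumes each participant’s gaze error is independent noise with a standard deviation of around 70 pixels, in which case the averaged gaze point’s error shrinks roughly as 1/√n:

```python
import random
import statistics

random.seed(0)  # make the toy simulation repeatable

def mean_error(n_users, trials=2000, sigma=70):
    """Average absolute error of the mean gaze position across n_users,
    assuming each user's error is independent Gaussian noise (sigma px)."""
    errors = []
    for _ in range(trials):
        averaged = statistics.fmean(random.gauss(0, sigma) for _ in range(n_users))
        errors.append(abs(averaged))
    return statistics.fmean(errors)

# Quadrupling the number of users roughly halves the aggregate error
for n in (1, 4, 16):
    print(n, round(mean_error(n)))
```

The independence assumption is the optimistic part – real participants share systematic biases – but it captures why heatmaps built from many users look far tighter than any single session.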
Who can you track?
A dark secret of the eye-tracking industry is that not everyone tracks well. Even with custom hardware in a controlled environment, 10% or more of participants simply can’t be tracked accurately. That number goes up for us, since our whole goal is to let people run eye-tracking studies in their own homes with off-the-shelf hardware.
GazeHawk will never charge our customers for anyone who doesn’t track well. As a result, we often send out more invitations to our participants to take part in studies than the customer actually purchased. Then we review each result to make sure the accuracy is good enough to include in the customer’s report. We also pay a bonus to the users whose data we use, as an incentive to improve the lighting conditions and help us give our customers the best results possible.
This is really just an introduction – there’s a lot more to be said about accuracy in eye tracking, especially when it comes to getting useful results out of a study.
Coming soon: I weigh in on the debate about how many users you should have in an eye-tracking study, and some of the difficulties we’ve faced in providing valuable feedback to our customers.