Eye tracking companies, including GazeHawk, frequently use heatmaps to present the results of a study to customers. The reason for this is simple: heatmaps instantly communicate where study participants looked and for how long. They tell you the hot spots of the site and the places that might be getting overlooked.
That said, heatmaps have some major drawbacks. In particular, heatmaps usually:
- Eliminate the element of time from eye tracking. Heatmaps do not communicate when a user looked at something, only that he or she did so at some point. Moreover, heatmaps invite the reader to forget that this is a problem, since there is often no indication that this axis is getting left out.
- Do not distinguish between a single person looking at a spot for a long time and a group of people each looking at it for a moment. This ambiguity can cause all sorts of problems when interpreting the heatmap. We find that unless warned about it, readers tend to conflate the results of individual participants with those of the group as a whole and assume that most people’s individual heatmaps look pretty much like the aggregate heatmap. This is usually not true. (The short sketch after this list makes both losses concrete.)
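To see where the information goes missing, here is a minimal sketch of how a typical aggregate heatmap is accumulated. This is not GazeHawk's actual pipeline; the fixation records, page size, and grid resolution are made up for illustration. The key point is that the participant ID and the timestamp are simply dropped during accumulation, which is exactly where the time axis and the individual-versus-group distinction disappear.

```python
import numpy as np

# Hypothetical fixation records: (participant_id, x_px, y_px, start_time_s, duration_s)
fixations = [
    ("p1", 125, 82, 0.2, 0.9),   # one long look by a single viewer...
    ("p2", 130, 85, 1.1, 0.2),   # ...versus brief glances at the same spot
    ("p3", 135, 88, 0.4, 0.3),   #    by two other viewers
    ("p1", 400, 300, 1.4, 0.6),
    ("p3", 640, 90, 2.7, 1.1),
]

WIDTH, HEIGHT, CELL = 800, 600, 40                     # assumed page size and grid resolution
aggregate = np.zeros((HEIGHT // CELL, WIDTH // CELL))  # rows = y cells, cols = x cells

for participant, x, y, start, duration in fixations:
    # Only position and duration survive; the participant ID and the start time
    # are discarded, so the final map cannot tell "one person, a long time"
    # apart from "many people, a moment each", nor early looks from late ones.
    aggregate[y // CELL, x // CELL] += duration

print(aggregate.max())  # the hottest cell, with no record of who looked or when
```

In this toy grid, the hottest cell mixes one sustained fixation with two quick glances, and nothing in the output preserves that difference.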
When you combine these two problems, you can get large differences between what a study’s “first-glance” aggregate heatmap suggests and what the data actually support. Here’s an example: a few months back, we conducted a study tracking the eye movement of people on NBC.com. The heatmap below shows the combined views of all participants in the study:
Suppose that we’re interested in making sure users understand the layout of the site and what content is available to them. At first glance, everything appears to be working as intended. The screen-spanning Miss USA images up top get a lot of attention, as do the “Spotlight on NBC” spot and the top-right ad. Users followed the center column of large pictures down, occasionally flicking to the right or left columns as particular bits of content caught their eye.
Almost everything on the page was viewed at some point, and we can see a few clear but unsurprising trends in the distribution of views, such as a preference for large images and human faces. Mission accomplished?
Not at all. Check out these heatmaps from a few individual participants in the study:
While all participants looked down the green central column of the site, there doesn’t appear to be any pattern beyond that (this is true of all participants, not just these four). Some looked to the left, some looked to the right, and others didn’t look to either side at all, quite unlike what the combined heatmap suggests. If we looked only at the combined heatmap, we might think that the site is fine as-is. After reviewing the individual heatmaps, however, we get the impression that the site might have a bit too much going on; once past the middle column, people did not know where to look.
While we’re aware of their flaws, at GazeHawk we think that heatmaps are not beyond redemption. Keeping their weaknesses in mind when using them can prevent misinterpretation and ensure that your conclusions are meaningful. And, if all else fails, you can always review individual participant data and watch the eye tracking as a video.
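For that "review individual participant data" step, a similarly rough sketch is below. It reuses the same made-up fixation format as the earlier example (again, this is illustrative, not our production code): build one grid per participant and compare each viewer's hottest region with the aggregate's, rather than trusting the combined map alone.

```python
from collections import defaultdict
import numpy as np

# Hypothetical fixation records: (participant_id, x_px, y_px, start_time_s, duration_s)
fixations = [
    ("p1", 125, 82, 0.2, 0.9),
    ("p2", 130, 85, 1.1, 0.2),
    ("p3", 135, 88, 0.4, 0.3),
    ("p1", 400, 300, 1.4, 0.6),
    ("p3", 640, 90, 2.7, 1.1),
]
WIDTH, HEIGHT, CELL = 800, 600, 40  # assumed page size and grid resolution

def to_grid(records):
    """Accumulate fixation durations into a coarse heatmap grid."""
    grid = np.zeros((HEIGHT // CELL, WIDTH // CELL))
    for _, x, y, _, duration in records:
        grid[y // CELL, x // CELL] += duration
    return grid

aggregate = to_grid(fixations)
agg_hot = np.unravel_index(aggregate.argmax(), aggregate.shape)

by_participant = defaultdict(list)
for record in fixations:
    by_participant[record[0]].append(record)

for participant, records in sorted(by_participant.items()):
    individual = to_grid(records)
    ind_hot = np.unravel_index(individual.argmax(), individual.shape)
    # If many participants' hottest cells disagree with the aggregate's,
    # the combined heatmap is papering over real differences between viewers.
    print(participant, "hottest cell:", ind_hot, "| aggregate hottest cell:", agg_hot)
```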
Next week we’ll look at ways to cluster eye tracking data so that the heatmaps don’t have these problems. Until then, don’t get burned.