Why Eye-Tracking?

Eye-tracking is a powerful tool for evaluating the user experience, whether that experience involves answering a survey, navigating a website, or driving a car! Eye-tracking provides objective physiological data to help you develop materials and systems that are optimized for the task at hand. By monitoring eye movements, researchers can objectively see how study participants interact with a stimulus.

This can be helpful for answering questions like...


Do users notice the Contact Us feature on my survey?


Where do users look first when they view my website? Where do they look next?


Is design A processed differently than design B?


Westat's Approach

Data on the user experience often comes from self-reported feedback from study participants. Eye-tracking allows researchers to observe for themselves what participants pay attention to.

Eye-tracking can reveal where people look, but not what they perceive. Therefore, Westat’s approach combines quantitative eye-tracking data and qualitative feedback to provide a complete picture of how people interact with study materials.

Collecting eye-tracking data is not as difficult as it might seem!

With a little bit of training and practice, eye-tracking hardware and software can be set up easily.

There are two common types of eye-trackers – glasses-based equipment and remote equipment.

Eye-Tracking glasses

  • Worn by the user.
  • Good for eye-tracking in real-world environments where the user may be mobile or looking at a stimulus that is not on a screen, such as a paper document.

Remote eye-trackers

  • Rest in front of the user and track their eye movements from a distance.
  • The user does not have to wear anything but must remain largely stationary.
  • Optimized for eye-tracking on screens.

Westat currently owns three sets of eye-tracking equipment – two types of glasses and one remote tracker.

Tobii Pro Glasses 2

Tobii Pro X2-30 Eye Tracker

Dikablis Glasses 3 + Vehicle Testing Kit (VTK)

Analyzing eye-tracking data

Eye-tracking equipment typically captures and analyzes two types of eye-movements: fixations and saccades.

Most eye-tracking data analysis focuses on fixations, which are instances of the eye dwelling in one location for a set amount of time (usually a minimum of 1/10 of a second). Fixation location, count, and duration are all useful indicators of attention; more and longer fixations are associated with deeper cognitive processing. Saccades are the eye movements in between fixations. Longer saccades indicate that the eye is moving widely from one place to the next (like watching a tennis match); shorter saccades indicate that attention is staying more narrowly focused (like reading).
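To make the fixation definition above concrete, here is a minimal sketch of a dispersion-threshold fixation detector: it groups consecutive gaze samples into a fixation whenever they stay within a small spatial window for at least the ~1/10-second minimum. The sample format, sample rate, and pixel thresholds are illustrative assumptions, not tied to any particular eye-tracker.

```python
# Illustrative dispersion-threshold (I-DT style) fixation detector.
# Assumes gaze samples are (x, y) pixel tuples at a known sample rate.

def detect_fixations(samples, sample_rate_hz=60, max_dispersion=25,
                     min_duration_s=0.1):
    """Group consecutive gaze samples into fixations.

    A fixation is a run of samples whose spread stays within
    `max_dispersion` pixels and that lasts at least `min_duration_s`
    (the ~1/10 second minimum mentioned above).
    """
    min_samples = int(min_duration_s * sample_rate_hz)
    fixations = []
    start = 0
    while start < len(samples):
        # Grow the window until the samples spread too far apart.
        end = start + 1
        while end <= len(samples):
            window = samples[start:end]
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                break
            end += 1
        end -= 1  # last window that stayed within the threshold
        if end - start >= min_samples:
            window = samples[start:end]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append({"x": cx, "y": cy,
                              "duration_s": (end - start) / sample_rate_hz})
            start = end
        else:
            start += 1  # too short to be a fixation; part of a saccade
    return fixations
```

Samples that never settle long enough to form a fixation correspond to the saccades between fixations.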

Eye-tracking data can be analyzed both visually and quantitatively.

Heat Maps

Heat maps are images that summarize where people spend most of their time fixating while looking at a stimulus. 'Hotter' colors represent areas that received more attention; 'colder' colors, or no color, mark areas that received less. Heat maps can summarize the behavior of one individual or many. The example below shows a heat map of relative fixation duration across about 50 participants who viewed a paper copy of a fictitious drug ad.
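The idea behind a heat map can be sketched in a few lines: each fixation deposits a Gaussian "blob" of intensity weighted by its duration, so regions with more and longer fixations come out hotter. The grid size, the fixation-dict fields, and the sigma value are illustrative assumptions.

```python
# Illustrative heat map accumulation from fixation data.
# Each fixation is assumed to be a dict with 'x', 'y', 'duration_s'.
import math

def heat_map(fixations, width, height, sigma=30):
    """Return a 2D grid of attention intensity for rendering as a heat map."""
    grid = [[0.0] * width for _ in range(height)]
    for f in fixations:
        fx, fy, weight = f["x"], f["y"], f["duration_s"]
        # Only update cells within 3 sigma of the fixation, for efficiency.
        for y in range(max(0, int(fy - 3 * sigma)),
                       min(height, int(fy + 3 * sigma) + 1)):
            for x in range(max(0, int(fx - 3 * sigma)),
                           min(width, int(fx + 3 * sigma) + 1)):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += weight * math.exp(-d2 / (2 * sigma ** 2))
    return grid
```

The resulting grid can be overlaid on the stimulus image with any plotting library to produce the familiar hot-and-cold visualization.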

Gaze Plots

Gaze plots are images that show the sequence of fixations and saccades across a stimulus. This example is for one respondent who read a drug ad very carefully. Each circle in the gaze plot represents a fixation. The numbers show the order of the fixations (i.e., what they looked at first, second, etc.), and the size of each circle represents the length of the fixation (larger circles mean longer fixations). The lines between the circles represent the saccades.


Quantifying eye-tracking data

In addition to creating compelling visualizations, eye-tracking data can be analyzed quantitatively for more precise evaluations and comparisons between groups. Fixation data can be used to measure and compare numerous outcomes related to attention: for example, the total number of fixations, the total fixation duration, the time to first fixation, and the number of 'visits' to an Area Of Interest (AOI). With the appropriate experimental design, this information can be very powerful for comparing user attention to different visual designs.
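The AOI-based outcomes listed above can be computed directly from a fixation list. Below is a minimal sketch: the rectangular AOI format and the fixation-dict fields ('x', 'y', 'onset_s', 'duration_s') are illustrative assumptions, not any vendor's schema.

```python
# Illustrative AOI (Area Of Interest) metrics computed from fixation data.

def aoi_metrics(fixations, aoi):
    """Summarize attention to a rectangular AOI given as (x, y, width, height).

    Returns fixation count, total fixation duration, time to first
    fixation, and number of visits (entries into the AOI after
    having looked elsewhere).
    """
    x0, y0, w, h = aoi
    inside = [x0 <= f["x"] <= x0 + w and y0 <= f["y"] <= y0 + h
              for f in fixations]
    count = sum(inside)
    total_duration = sum(f["duration_s"]
                         for f, hit in zip(fixations, inside) if hit)
    first = next((f["onset_s"]
                  for f, hit in zip(fixations, inside) if hit), None)
    # A 'visit' starts whenever an in-AOI fixation follows an out-of-AOI one.
    visits = sum(1 for i, hit in enumerate(inside)
                 if hit and (i == 0 or not inside[i - 1]))
    return {"fixation_count": count,
            "total_fixation_duration_s": round(total_duration, 3),
            "time_to_first_fixation_s": first,
            "visits": visits}
```

Running this per participant and per AOI yields the group-level comparisons described above (e.g., did design A's headline attract earlier or longer attention than design B's?).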

While eye-tracking allows researchers to observe what people pay attention to, it does not inherently tell you why they behave the way they do, or what their thoughts are. Therefore, it is common to follow eye-tracking data collection with a debriefing interview to collect qualitative information that helps explain the behaviors.

Our Experience & Capabilities


Re-examining the middle means typical and the left and top means first heuristics using eye-tracking methodology

Höhne, J.K., Lenzner, T., Neuert, C., and Yan, T. (2021). Journal of Survey Statistics and Methodology, 9, 25-50. doi: 10.1093/jssam/smz028.

Web surveys are a common self-administered mode of data collection using written language to convey information. This language is usually accompanied by visual design elements, such as numbers, symbols, and graphics. As shown by previous research, such elements of survey questions can affect response behavior because respondents sometimes use interpretive heuristics, such as the “middle means typical” and the “left and top means first” heuristics when answering survey questions. In this study, we adopted the designs and survey questions of two experiments reported in Tourangeau, Couper, and Conrad (2004). One experiment varied the position of nonsubstantive response options in relation to other substantive response options and the second experiment varied the order of the response options. We implemented both experiments in an eye-tracking study. By recording respondents’ eye movements, we are able to observe how they read question stems and response options and we are able to draw conclusions about the survey response process the questions initiate. This enables us to investigate the mechanisms underlying the two interpretive heuristics and to test the assumptions of Tourangeau et al. (2004) about the ways in which interpretive heuristics influence survey responding. The eye-tracking data reveal mixed results for the two interpretive heuristics. For the middle means typical heuristic, it remains somewhat unclear whether respondents seize on the conceptual or visual midpoint of a response scale when answering survey questions. For the left and top means first heuristic, we found that violations of the heuristic increase response effort in terms of eye fixations. These results are discussed in the context of the findings of the original studies.

Investigating the impact of violations of the “left and top means first” heuristic on response behavior and data quality

Höhne, J.K., and Yan, T. (2020). International Journal of Social Research Methodology, 23(3), 347-353. doi: 10.1080/13645579.2019.1696087.

Web surveys are an established data collection mode that use written language to provide information. The written language is accompanied by visual elements, such as presentation formats and shapes. However, research has shown that visual elements influence response behavior because respondents sometimes use interpretive heuristics to make sense of the visual elements. One such heuristic is the ‘left and top means first’ (LTMF) heuristic, which suggests that respondents tend to believe that a response scale consistently runs from left to right or from top to bottom. We conducted a web survey experiment to investigate how violations of the LTMF heuristic affect response behavior and data quality. For this purpose, a random half of respondents received response options that followed a consistent order and the other half received response options that followed an inconsistent order. The results reveal significantly different response distributions between the two groups. We also found that inconsistently ordered response options significantly increase response times and decrease data quality in terms of criterion validity. We, therefore, recommend using options that follow the design strategies of the LTMF heuristic.

Eye-tracking data: New insights on response order effects and other cognitive shortcuts in survey responding

Galesic, M., and Yan, T. (2010). In Social Research and the Internet: Advances in Applied Methods and New Research Strategies, edited by M. Das, P. Ester, L. Kaczmirek, and P. Mohler, Chapter 14, Taylor & Francis Publishing Group (pp 349-370).

Survey researchers since Cannell have worried that respondents may take various shortcuts to reduce the effort needed to complete a survey. The evidence for such shortcuts is often indirect. For instance, preferences for earlier versus later response options have been interpreted as evidence that respondents do not read beyond the first few options. This is really only a hypothesis, however, that is not supported by direct evidence regarding the allocation of respondent attention. In the current study, we used a new method to more directly observe what respondents do and do not look at by recording their eye movements while they answered questions in a Web survey. The eye-tracking data indicate that respondents do in fact spend more time looking at the first few options in a list of response options than those at the end of the list; this helps explain their tendency to select the options presented first regardless of their content. In addition, the eye-tracking data reveal that respondents are reluctant to invest effort in reading definitions of survey concepts that are only a mouse click away or paying attention to initially hidden response options. It is clear from the eye-tracking data that some respondents are more prone to these and other cognitive shortcuts than others, providing relatively direct evidence for what had been suspected based on more conventional measures.

Applying Real World Eye-Tracking to Paper-Based Direct-to-Consumer (DTC) Advertisements

Andrew Caporaso, Douglas Williams, Victoria Hoverman, and Jennifer Crafts (Westat); Kathryn Aikin and Helen Sullivan (FDA). Poster presented at the Federal Computer Assisted Survey Information Collection (FedCASIC) Conference, Washington DC. (2019).

Eye-tracking is frequently used as an empirical method to assess attention to defined areas or features of visual fields. These range from survey forms to informational and marketing products. A common practice is to use a fixed integrated eye-tracking device with a monitor to display the visual stimulus. This allows the researcher to control presentation of the stimulus, but diminishes real-world application for paper-based documents - especially those spanning multiple pages and requiring respondent interaction with the paper document. This presentation will report on the methodology and data quality for a pretest Westat conducted on behalf of the Food and Drug Administration (FDA). The objective of the pretest was to refine procedures for collecting eye-tracking measures of attention for comparison with responses to self-reported measures of attention as well as information recall, recognition, and comprehension. Forty-one participants who had one of two medical conditions wore eye-tracking glasses while viewing a paper DTC advertisement related to their medical condition. Following the ad reading, participants answered a web questionnaire related to the advertisement and responded to debriefing probes about their experience with the eye-tracking equipment and the questionnaire. We report on lessons learned with respect to achieving successful calibration and accurate eye-tracking data. The findings will be informative for researchers interested in collecting eye-tracking data for real-world stimuli.

Evaluating Grid Questions for 4th Graders.

Aaron Maitland. Paper presented at the American Association of Public Opinion Research Annual Conference, Austin, TX. (2015).

Eye-tracking has been used to better understand the survey response process. For instance, eye-tracking has been used to identify questions that are difficult to comprehend, how to present long lists of response options, and to measure the length of fixation on definitions in Web surveys. Grid questions have been commonly used in Web surveys as well as in other types of surveys. The literature shows respondents took less time to answer questions when they were presented in a grid than when they were presented individually across separate pages or screens. The use of grid questions, however, may also be associated with several undesirable outcomes, including higher breakoff rates, higher missing data rates, and straightlining. Relatively little is known about how children answer grid questions. This paper demonstrates how eye-tracking was used to determine the feasibility of using such questions to measure the background characteristics of students in the National Assessment of Educational Progress (NAEP) questionnaire. Fourth grade students answered both grid and discrete (single-item per screen) versions of questions on tablet computers while wearing real-world eye tracking glasses. This study addresses four research questions related to the use of eye-tracking to test survey questions. First, we examine whether grid items require more effort to answer than discrete items for fourth grade students. Second, we investigate how the processing of sub items changes within a grid. Third, we examine how the processing of questions changes over time. In order to address these research questions, we examine differences in the mean number of fixations per word and the mean duration per word for grid and discrete questions. Overall, the study finds support for the use of grid questions with fourth grade students in the NAEP. Implications for the use of eye-tracking equipment to evaluate survey questions are also discussed.

Use of Eye-tracking to Measure Response Burden.

Ting Yan, Douglas Williams. Paper presented at the American Association of Public Opinion Research Annual Conference, Austin, TX. (2015)

Concerns about the burden that surveys place on respondents have a long history in the survey field. A review of the existing literature shows that the term “burden” is defined loosely and that researchers measure response burden in many different ways. Some measure response burden through properties of surveys/tasks that are believed to impose response burden, such as the length of an interview. Some measure response burden through respondents’ attitudes and beliefs toward surveys, such as interest in and perceived importance of the survey. Others measure response burden through respondent behaviors (e.g., willingness to be re-interviewed) or direct respondent measures (e.g., feelings of burden). All three types of measurement are based on self-reports to survey questions, which are subject to the usual sources of reporting error due to misunderstanding of the survey questions, partial retrieval of information, biased or inaccurate judgment strategy, problems in mapping to the given response options, and more or less deliberate misreporting. In this paper, we will examine the use of eye-tracking equipment to measure burden. In eye-tracking research, task-evoked pupillary responses have been shown to be a consistent index of cognitive load and difficulty. For instance, dilated pupils are found to be indicative of higher levels of cognitive load and difficulty, in essence burden. We will create three measures—mean pupil dilation, peak pupil dilation, and latency to peak—as indicators of burden. These alternative measurements of response burden are free from errors in self-reports and are potentially stronger indicators of burden. We will evaluate the feasibility of using task-evoked pupillary responses to measure burden in the survey context by comparing the three indicators to respondents’ self-reports about burden.

The Effects of Pictorial vs. Verbal Examples on Survey Responses.

Hanyu Sun (Westat), Jonas Bertling, Debby Almonte (ETS). Paper presented at the American Association of Public Opinion Research Annual Conference, Austin, TX. (2015)

Web surveys make it easier to present images to the respondents than other modes of data collection. A few studies have examined the use of pictorial examples in Web surveys and found that the characteristics of the exemplars (e.g., their frequency or typicality) have an impact on the responses that are collected (e.g., Couper, Tourangeau, and Kenyon, 2004; Tourangeau, Conrad, Couper, and Ye, 2014). Tourangeau et al. (2014) compared verbal examples with pictorial examples and found that respondents tended to report more food consumption when they got verbal examples than when they got pictorial examples. The finding suggests that the pictures may narrow the interpretation of the category of interest. However, the findings also suggested that respondents are more likely to attend to the pictorial examples than to verbal examples. However, no direct evidence of respondent attention was collected to support either argument. Using eye-tracking, the current study compared verbal examples with pictorial examples in a lab setting to examine whether respondents attend to pictures more than words, whether items with verbal examples require more effort to answer than those with pictorial examples, and how the processing of items changes over time. To address these research questions, we will examine differences in the mean number of fixations and the mean duration for items with pictorial examples and verbal examples. The number of fixations is related to the amount of information that a respondent is processing, while the duration of fixations is related to the amount of difficulty that the respondent is having (Ares et al., 2014). The same food consumption questions with examples used in Tourangeau et al. (2014) will be used in the current study.

Respondent Processing of Rating Scales and the Scale Direction Effect.

Andrew Caporaso. Paper presented at the American Association of Public Opinion Research Annual Conference, Austin, TX. (2015)

Holding constant other scale features, the direction in which a scale is presented has been found to affect the resulting survey answers; respondents are more likely to select a scale point closer to the start of the scale regardless of its direction, producing primacy effects (Yan, 2015). What remains understudied is the mechanism underlying this scale direction effect. Two common response processing models are offered as possible explanations for these effects: satisficing, and anchoring and adjusting. The satisficing model treats the impact of scale direction as a special case of response order effect and argues that satisficers sequentially process the rating scale and select the first option that seems reasonable. The anchoring-and-adjustment heuristic assumes that respondents start with an initial anchor (the beginning of a scale) and make adjustments to the anchor until a plausible point is reached. Since both notions predict a primacy effect, it is hard to know which notion offers a better account of the scale direction effect. To learn more about what’s behind scale direction effects, we will collect eye tracking data from respondents as they respond to a web survey. As the eye movement data (e.g., fixation counts and fixation duration) show directly the amount of attention paid to question components, we will first characterize how respondents process a rating scale and how the processing differs by respondent characteristics. Then we will explore which of the two notions accounts for the scale direction effect. This paper demonstrates how eye-tracking can be used to address theoretical issues related to respondents’ use of rating scales.

Use of Eye-Tracking for Studying Survey Response Process.

Galesic, M., and Yan, T. (2010). In Social Research and the Internet: Advances in Applied Methods and New Research Strategies, edited by M. Das, P. Ester, L. Kaczmirek, and P. Mohler, Chapter 14, Taylor & Francis Publishing Group (pp 349-370).

Issues & Considerations


Equipment & Environment

Eye-tracking equipment is delicate and can be somewhat time-consuming to get up and running. The environment in which eye-tracking takes place needs to have sufficient lighting and be generally consistent across all the participants in your study. Plan to include ample time for designing, practicing, and pilot testing your data collection protocol. Plan to minimize how much the user touches the eye-tracking equipment.

Individual Factors

Some individuals are more difficult to collect eye-tracking data from than others. Factors that impact the quality of eye-tracking data include user fatigue and heavy makeup, both of which can make the eye harder to track. Older individuals and those with vision problems can also be harder to track.

Natural Behavior

Using eye-tracking equipment may affect study participants' natural behavior, which can limit the external validity of eye-tracking studies. Researchers should think carefully about how best to mitigate this issue when designing their data collection protocol.