In this paper, we present BubbleView, an alternative methodology to eye tracking that uses discrete mouse clicks to measure which information people consciously choose to examine. BubbleView is a mouse-contingent, moving-window interface in which participants are presented with a series of blurred images and click to reveal "bubbles": small, circular areas of the image at original resolution, analogous to the confined area of high-resolution focus provided by the eye's fovea. Across 10 experiments with 28 different parameter combinations, we evaluated BubbleView on a variety of image types: information visualizations, natural images, static webpages, and graphic designs, and compared the clicks to eye fixations collected with eye trackers in controlled lab settings. We found that BubbleView clicks can both (i) successfully approximate eye fixations on different images, and (ii) be used to rank image and design elements by importance. BubbleView is designed to collect clicks on static images, and works best for well-defined tasks such as describing the content of an information visualization or measuring image importance. BubbleView data is cleaner and more consistent than that of related methodologies based on continuous mouse movements. Our analyses validate the use of mouse-contingent, moving-window methodologies for approximating eye fixations across different image and task types.
*Equal contribution
This work has been made possible through support from Google, Xerox, the NSF Graduate Research Fellowship Program, the Natural Sciences and Engineering Research Council of Canada, and the Kwanjeong Educational Foundation. We also acknowledge the support of the Toyota Research Institute / MIT CSAIL Joint Research Center.
1. Choose an image and a parameter setting (bubble radius and blur radius).
2. Click and describe the image in the BubbleView interface.
3. Click 'Next' to see the bubbles generated in the monitoring interface.
The source code used in this demo is available at github.com/namwkim/bubbleview.
It mainly consists of two components: 1) setting up a BubbleView interface on an HTML canvas and 2) displaying the collected bubbles in temporal order.
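The two components can be sketched as follows. This is a minimal, hypothetical sketch, not the repo's actual API: the function and variable names (`setupBubbleView`, `temporalOrder`, etc.) are illustrative, and the real implementation handles image loading, blurring, and logging.

```javascript
// Minimal sketch of a BubbleView-style interface (illustrative names only;
// see github.com/namwkim/bubbleview for the actual implementation).

// Component 1: show a blurred image on a canvas and reveal a sharp,
// circular "bubble" at each click, logging clicks with timestamps.
function setupBubbleView(canvas, sharpImg, blurredImg, bubbleRadius) {
  const ctx = canvas.getContext('2d');
  const clicks = []; // {x, y, time} records, one per click

  ctx.drawImage(blurredImg, 0, 0, canvas.width, canvas.height);

  canvas.addEventListener('click', (e) => {
    const rect = canvas.getBoundingClientRect();
    const x = e.clientX - rect.left;
    const y = e.clientY - rect.top;
    clicks.push({ x, y, time: performance.now() });

    // Redraw the blurred image so only the latest bubble is revealed,
    // then clip to a circle and draw the sharp image inside it.
    ctx.drawImage(blurredImg, 0, 0, canvas.width, canvas.height);
    ctx.save();
    ctx.beginPath();
    ctx.arc(x, y, bubbleRadius, 0, 2 * Math.PI);
    ctx.clip(); // confine drawing to the bubble region
    ctx.drawImage(sharpImg, 0, 0, canvas.width, canvas.height);
    ctx.restore();
  });

  return clicks;
}

// Component 2: order collected bubbles by click time so the
// monitoring interface can replay them in temporal order.
function temporalOrder(clicks) {
  return clicks.slice().sort((a, b) => a.time - b.time);
}
```

The key design point is that revealing only one bubble at a time mirrors a moving-window paradigm: information outside the current "fovea" stays blurred, so each click is a deliberate choice about where to look next.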
A use case demonstrating how to use the code is shown on this webpage; please take a look at the page's source code.
The code for running our experiments on Amazon Mechanical Turk is outdated; we recommend consulting the current MTurk documentation.
If you are interested in programmatically launching experiments, please refer to Amazon's SDKs (e.g., Python, JavaScript).
All the data and analysis code used in the BubbleView experiments are available at github.com/cvzoya/bubbleview, where you will find detailed information.