Despite the importance of human vision for perceiving and understanding information visualizations, dominant approaches to quantifying users’ visual attention require special-purpose eye tracking equipment. However, such equipment is not always available, has to be calibrated for each user, and restricts attention analysis to post-hoc optimization of information visualizations.
This project aims to integrate automatic quantification of spatio-temporal visual attention directly into the visualization design process, without the need for any eye tracking equipment. To achieve this goal, the project takes inspiration from computational models of visual attention (saliency models) that mimic basic perceptual processing to reproduce attentive behavior. Originally introduced in computational neuroscience, saliency models have been tremendously successful in several research fields, particularly in computer vision. In contrast, few works have investigated their use in information visualization. We will develop new methods for data-driven attention prediction on information visualizations, as well as for joint modeling of bottom-up and top-down visual attention, and use them to investigate attention-driven optimization and real-time adaptation of static and dynamic information visualizations.
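To make the idea of a bottom-up saliency model concrete, the sketch below computes a simple center-surround contrast map (in the spirit of classic Itti-style feature maps) using a difference of Gaussian-blurred intensity images. This is an illustrative minimal example only, not the project's actual method; the function name, parameters, and normalization are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bottom_up_saliency(image, sigmas=(2, 8)):
    """Crude bottom-up saliency: center-surround contrast computed as
    the absolute difference of two Gaussian-blurred intensity maps.
    `sigmas` are illustrative (center scale, surround scale)."""
    # Collapse color images to a single intensity channel.
    intensity = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    center = gaussian_filter(intensity, sigmas[0])
    surround = gaussian_filter(intensity, sigmas[1])
    saliency = np.abs(center - surround)
    # Normalize to [0, 1] so maps from different images are comparable.
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else saliency

# A bright patch on a dark background stands out from its surround.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
smap = bottom_up_saliency(img)
```

Full saliency models combine many such feature maps (color, orientation, motion) across scales; learned, data-driven models replace these hand-crafted features entirely, which is the direction this project pursues for visualizations.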