A07 | Visual Attention Modeling for Optimization of Information Visualizations

Prof. Andreas Bulling, University of Stuttgart

Dr. Lewis L. Chuang, LMU Munich


Despite the importance of human vision for perceiving and understanding information visualizations, dominant approaches to quantifying users' visual attention require special-purpose eye tracking equipment. Such equipment is not always available, must be calibrated for each user, and only supports post-hoc optimization of information visualizations.

This project aims to integrate automatic quantification of spatio-temporal visual attention directly into the visualization design process, without the need for any eye tracking equipment. To this end, the project takes inspiration from computational models of visual attention (saliency models), which mimic basic perceptual processing to reproduce attentive behavior. Originally introduced in computational neuroscience, saliency models have been highly successful in several research fields, particularly computer vision. In contrast, few works have investigated their use in information visualization. We will develop new methods for data-driven attention prediction on information visualizations, as well as for joint modeling of bottom-up and top-down visual attention, and use them to investigate attention-driven optimization and real-time adaptation of static and dynamic information visualizations.
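To illustrate the kind of model the project builds on, the sketch below computes a toy bottom-up saliency map via center-surround contrast (a difference of Gaussians, a strong simplification of classic saliency models such as Itti-Koch). The function name and parameters are illustrative, not part of the project's actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bottom_up_saliency(image, center_sigma=2.0, surround_sigma=8.0):
    """Toy bottom-up saliency: center-surround intensity contrast
    via a difference of Gaussians, normalized to [0, 1].
    (Illustrative sketch, not the project's model.)"""
    # Reduce a color image to a single intensity channel.
    intensity = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    # Fine-scale ("center") and coarse-scale ("surround") blurs.
    center = gaussian_filter(intensity, center_sigma)
    surround = gaussian_filter(intensity, surround_sigma)
    # Locations where the scales disagree stand out perceptually.
    saliency = np.abs(center - surround)
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else np.zeros_like(saliency)
```

Real saliency models add further feature channels (color opponency, orientation) and, in data-driven variants, learn the features from recorded gaze data; this sketch only shows the center-surround principle they share.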

Research Questions

How can we predict attention on a wide range of real-world information visualizations?

Which optimization goals and mechanisms are required to optimize static information visualizations?

How can we jointly model bottom-up and top-down attention on dynamic information visualizations?

Which mechanisms are required to adapt visual analytics interfaces to users' attention?

Sample Attention Maps

Sample attention maps from different models compared to the ground truth obtained using an eye tracker.

Our maps closely match the ground truth, while existing approaches fail to capture important user interface areas.
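Agreement between a predicted attention map and eye-tracking ground truth is typically quantified with standard saliency-evaluation metrics. The sketch below shows two common ones, Pearson's correlation coefficient (CC) and KL divergence; these are illustrative and not necessarily the metrics used in this project.

```python
import numpy as np

def pearson_cc(pred, gt):
    """Linear correlation coefficient (CC) between two attention maps;
    1.0 means a perfect linear match."""
    p, g = pred.ravel().astype(float), gt.ravel().astype(float)
    p = (p - p.mean()) / p.std()
    g = (g - g.mean()) / g.std()
    return float((p * g).mean())

def kl_divergence(pred, gt, eps=1e-7):
    """KL divergence of the ground-truth distribution from the
    prediction (lower is better); both maps are normalized to sum to 1."""
    p = pred.ravel() / (pred.sum() + eps)
    g = gt.ravel() / (gt.sum() + eps)
    return float((g * np.log(g / (p + eps) + eps)).sum())
```

CC treats both maps symmetrically, while KL divergence penalizes predictions that assign near-zero probability to regions the viewers actually fixated, which is why saliency benchmarks usually report several such metrics together.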