A08 | A Learning-Based Research Methodology for Visualization

Jun.-Prof. Michael Sedlmair, University of Stuttgart

Prof. Daniel A. Keim, University of Konstanz

Cristina Morariu, University of Stuttgart

In recent years, machine learning has gained much attention for its ability to model complex human tasks, such as driving cars or composing music. In visualization research, there is currently a large effort to investigate how visualization can support machine learning research and practice.

In this project, we will take the reverse perspective and investigate how machine learning can support visualization research and practice. In particular, we will leverage machine learning to build and evaluate a new generation of models for visual perception and design.

Visualizing data is a process that involves many delicate design choices: How should the data be aggregated? Which visual encoding should be used? And how should it be parametrized?

In order to make good design choices, many alternative ways of aggregating and representing the data need to be evaluated. To make this work with data more effective and easier, the project pursues several goals.

Goals

Novel models for visual perception and design decisions.

A new user-oriented research methodology.

Evaluating and characterizing the methodology.

Fig. 1: Illustration of the proposed learning-based methodology, using class separation as an example. This novel user-oriented testing methodology will help us bridge quantitative and qualitative methods.

Fig. 2: A typical perceptual task that could be modeled using our methodology is class separation in scatterplots.
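To make the class-separation task concrete, the following is a minimal sketch of one classic separation measure for labeled scatterplots: distance consistency (DSC), the fraction of points that lie closer to their own class centroid than to any other class centroid. It illustrates the kind of perceptual proxy such measures provide; it is not necessarily one of the measures developed or evaluated in this project.

```python
import numpy as np

def distance_consistency(points, labels):
    """Distance consistency (DSC) for a labeled 2D scatterplot.

    points: (n, 2) array-like of 2D positions; labels: (n,) class labels.
    Returns the fraction of points whose nearest class centroid is the
    centroid of their own class (1.0 = perfectly separated classes).
    """
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # One centroid per class, stacked into a (k, 2) array.
    centroids = np.stack([points[labels == c].mean(axis=0) for c in classes])
    # Distances from every point to every centroid: shape (n, k).
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    nearest = classes[np.argmin(dists, axis=1)]
    return float(np.mean(nearest == labels))

# Two well-separated clusters yield a DSC of 1.0.
pts = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
lab = [0, 0, 0, 1, 1, 1]
print(distance_consistency(pts, lab))  # → 1.0
```

A measure like this gives a single quantitative score per plot, which is exactly the kind of output that can be compared against, or learned from, human judgments of separability.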

Publications

  1. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 546:1–546:14, doi: 10.1145/3313831.3376675.
  2. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems – Extended Abstracts (CHI-EA), 2020, pp. LBW087:1–LBW087:7, doi: 10.1145/3334480.3383017.
  3. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12, doi: 10.1145/3313831.3376266.
  4. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 50:1–50:5, doi: 10.1145/3379156.3391829.
  5. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications – Short Paper (ETRA-SP), 2020, pp. 49:1–49:5, doi: 10.1145/3379156.3391835.
  6. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2019.2934796.
  7. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 141–145, doi: 10.1109/VISUAL.2019.8933620.
  8. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), 2018, pp. 119–123, doi: 10.2312/eurovisshort.20181089.
  9. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, Art. no. 5, 2018, doi: 10.1109/TVCG.2017.2701829.
  10. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744138.
  11. M. Aupetit and M. Sedlmair, “SepMe: 2002 New Visual Separation Measures,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2016, pp. 1–8, doi: 10.1109/PACIFICVIS.2016.7465244.
  12. M. Sedlmair and M. Aupetit, “Data-driven Evaluation of Visual Quality Measures,” Computer Graphics Forum, vol. 34, no. 3, Art. no. 3, 2015, doi: 10.1111/cgf.12632.