A08 | A Learning-Based Research Methodology for Visualization

Prof. Michael Sedlmair, University of Stuttgart

Prof. Daniel A. Keim, University of Konstanz

René Cutura, University of Stuttgart

Dr. Quynh Quang Ngo, University of Stuttgart

Katrin Angerbauer, University of Stuttgart

In recent years, machine learning has gained much attention for its ability to model complex human tasks, such as driving cars or composing music. In visualization research, there is currently a large effort to investigate how visualization can support machine learning research and practice.

In this project, we will take the reverse perspective and investigate how machine learning can support visualization research and practice. In particular, we will leverage machine learning to build and evaluate a new generation of models for visual perception and design.

Visualizing data is a process that involves many delicate design choices: How should the data be aggregated? Which visual encoding should be used? And how should it be parametrized?

In order to make good design choices, many alternative ways of aggregating and representing the data need to be evaluated. To make working with data easier and more effective, the project pursues several goals.
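Evaluating many design alternatives by hand does not scale, which is one motivation for modeling perceptual qualities computationally. As a minimal, illustrative sketch (the specific measure is our choice here, not necessarily the project's), the following computes distance consistency, a simple hand-crafted class-separation measure for labeled 2D scatterplots: a plot scores 1.0 when every point lies closest to its own class centroid.

```python
import math
from collections import defaultdict

def distance_consistency(points, labels):
    """Fraction of points whose nearest class centroid belongs to
    their own class (1.0 = perfectly separated clusters)."""
    # accumulate per-class coordinate sums and counts
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (x, y), c in zip(points, labels):
        s = sums[c]
        s[0] += x
        s[1] += y
        s[2] += 1
    centroids = {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items()}
    # count points whose nearest centroid is their own class centroid
    hits = 0
    for (x, y), c in zip(points, labels):
        nearest = min(centroids, key=lambda k: math.dist((x, y), centroids[k]))
        if nearest == c:
            hits += 1
    return hits / len(points)

# Two well-separated diagonal clusters score a perfect 1.0
cluster_a = [(0.1 * i, 0.1 * i) for i in range(10)]          # near the origin
cluster_b = [(5 + 0.1 * i, 5 + 0.1 * i) for i in range(10)]  # far away
points = cluster_a + cluster_b
labels = ["a"] * 10 + ["b"] * 10
print(distance_consistency(points, labels))  # prints 1.0
```

A learning-based methodology would go a step further: instead of committing to one hand-crafted formula like this, it would learn from human judgments which of many candidate configurations people actually perceive as well separated.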

Goals

Novel models for visual perception and design decisions.

A new user-oriented research methodology.

Evaluating and characterizing the methodology.

Fig. 1: Illustration of the proposed learning-based methodology, using class separation as an example. This novel user-oriented testing methodology will help us bridge quantitative and qualitative methods.

Fig. 2: A typical perceptual task that could be modeled using our methodology is class separation in scatterplots.

Publications

  1. P. Fleck, A. Sousa Calepso, S. Hubenschmid, M. Sedlmair, and D. Schmalstieg, “RagRug: A Toolkit for Situated Analytics,” IEEE Transactions on Visualization and Computer Graphics, 2022, doi: 10.1109/TVCG.2022.3157058.
  2. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “ProSeCo: Visual analysis of class separation measures and dataset characteristics,” Computers & Graphics, vol. 96, pp. 48–60, 2021, doi: 10.1016/j.cag.2021.03.004.
  3. C. Krauter, J. Vogelsang, A. S. Calepso, K. Angerbauer, and M. Sedlmair, “Don’t Catch It: An Interactive Virtual-Reality Environment to Learn About COVID-19 Measures Using Gamification Elements,” in Mensch und Computer, 2021, pp. 593–596. doi: 10.1145/3473856.3474031.
  4. M. Kraus et al., “Immersive Analytics with Abstract 3D Visualizations: A Survey,” Computer Graphics Forum, 2021, doi: 10.1111/cgf.14430.
  5. N. Grossmann, J. Bernard, M. Sedlmair, and M. Waldner, “Does the Layout Really Matter? A Study on Visual Model Accuracy Estimation,” in IEEE Visualization Conference (VIS, Short Paper), 2021, pp. 61–65. doi: 10.1109/VIS49827.2021.9623326.
  6. G. J. Rijken et al., “Illegible Semantics: Exploring the Design Space of Metal Logos,” 2021. [Online]. Available: https://arxiv.org/abs/2109.01688
  7. M. Kraus, K. Klein, J. Fuchs, D. A. Keim, F. Schreiber, and M. Sedlmair, “The Value of Immersive Visualization,” IEEE Computer Graphics and Applications (CG&A), vol. 41, no. 4, Art. no. 4, 2021, doi: 10.1109/MCG.2021.3075258.
  8. J. Bernard, M. Hutter, M. Sedlmair, M. Zeppelzauer, and T. Munzner, “A Taxonomy of Property Measures to Unify Active Learning and Human-centered Approaches to Data Labeling,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 11, no. 3–4, Art. no. 3–4, 2021, doi: 10.1145/3439333.
  9. C. Morariu, A. Bibal, R. Cutura, B. Frenay, and M. Sedlmair, “DumbleDR: Predicting User Preferences of Dimensionality Reduction Projection Quality,” arXiv preprint, Technical Report arXiv:2105.09275, 2021. [Online]. Available: https://arxiv.org/abs/2105.09275
  10. K. Lu et al., “Palettailor: Discriminable Colorization for Categorical Data,” IEEE Transactions on Visualization & Computer Graphics, vol. 27, no. 02, Art. no. 02, 2021, doi: 10.1109/TVCG.2020.3030406.
  11. C. Bu et al., “SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030404.
  12. R. Cutura, K. Angerbauer, F. Heyen, N. Hube, and M. Sedlmair, “DaRt: Generative Art using Dimensionality Reduction Algorithms,” in 2021 IEEE VIS Arts Program (VISAP), 2021, pp. 59–72. doi: 10.1109/VISAP52981.2021.00013.
  13. R. Cutura, C. Morariu, Z. Cheng, Y. Wang, D. Weiskopf, and M. Sedlmair, “Hagrid — Gridify Scatterplots with Hilbert and Gosper Curves,” in The 14th International Symposium on Visual Information Communication and Interaction, Potsdam, Germany, 2021, pp. 1:1–1:8. doi: 10.1145/3481549.3481569.
  14. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 546:1–546:14. doi: 10.1145/3313831.3376675.
  15. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2020, p. LBW087:1–LBW087:7. doi: 10.1145/3334480.3383017.
  16. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces, 2020, pp. 9:1–9:9. doi: 10.1145/3399715.3399814.
  17. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “SepEx: Visual Analysis of Class Separation Measures,” in Proceedings of the International Workshop on Visual Analytics (EuroVA), 2020, pp. 1–5. doi: 10.2312/eurova.20201079.
  18. A. Streichert, K. Angerbauer, M. Schwarzl, and M. Sedlmair, “Comparing Input Modalities for Shape Drawing Tasks,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Papers (ETRA-SP), 2020, no. 51, pp. 1–5. doi: 10.1145/3379156.3391830.
  19. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12. doi: 10.1145/3313831.3376266.
  20. L. Merino, M. Schwarzl, M. Kraus, M. Sedlmair, D. Schmalstieg, and D. Weiskopf, “Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019),” 2020. doi: 10.1109/ISMAR50242.2020.00069.
  21. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 50:1–50:5. doi: 10.1145/3379156.3391829.
  22. M. Kraus et al., “A Comparative Study of Orientation Support Tools in Virtual Reality Environments with Virtual Teleportation,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020, pp. 227–238. doi: 10.1109/ISMAR50242.2020.00046.
  23. P. Balestrucci et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in 2020 IEEE Workshop on Evaluation and Beyond-Methodological Approaches to Visualization (BELIV), 2020, pp. 11–18. doi: 10.1109/BELIV51497.2020.00009.
  24. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Paper (ETRA-SP), 2020, pp. 49:1–49:5. doi: 10.1145/3379156.3391835.
  25. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2019.2934796.
  26. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 141–145. doi: 10.1109/VISUAL.2019.8933620.
  27. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), 2018, pp. 119–123. doi: 10.2312/eurovisshort.20181089.
  28. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, Art. no. 5, 2018, doi: 10.1109/TVCG.2017.2701829.
  29. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744138.
  30. M. Aupetit and M. Sedlmair, “SepMe: 2002 New Visual Separation Measures,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2016, pp. 1–8. doi: 10.1109/PACIFICVIS.2016.7465244.
  31. M. Sedlmair and M. Aupetit, “Data-driven Evaluation of Visual Quality Measures,” Computer Graphics Forum, vol. 34, no. 3, Art. no. 3, 2015, doi: 10.1111/cgf.12632.