A08 | A Learning-Based Research Methodology for Visualization

Prof. Michael Sedlmair, University of Stuttgart

Prof. Daniel A. Keim, University of Konstanz

René Cutura, University of Stuttgart

Dr. Quynh Quang Ngo, University of Stuttgart

Katrin Angerbauer, University of Stuttgart

In recent years, machine learning has gained much attention for its ability to model complex human tasks, such as driving cars or composing music. In visualization research, there is currently a large effort to investigate how visualization can support machine learning research and practice.

In this project, we will take the reverse perspective and investigate how machine learning can support visualization research and practice. In particular, we will leverage machine learning to build and evaluate a new generation of models for visual perception and design.

Visualizing data is a process that involves many delicate design choices: How should the data be aggregated? Which visual encoding should be used? And how should it be parametrized?

In order to make good design choices, many alternatives for aggregating and representing the data need to be evaluated. To make working with the data easier and more effective, the project pursues several goals.
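As a rough illustration of how a learned quality model could support this search over alternatives, the following minimal sketch enumerates a few candidate scatterplot parameterizations and ranks them by a predicted readability score. The `predicted_quality` function is a hypothetical stand-in for a model trained on human judgments; it is not part of the project's actual tooling.

```python
# Hypothetical sketch: rank candidate scatterplot designs by a learned
# quality score. `predicted_quality` is a placeholder for a perceptual
# model trained on human judgments; it is not a real project API.
from itertools import product

point_sizes = [2, 4, 8]        # candidate point sizes (px)
opacities = [0.3, 0.6, 1.0]    # candidate point opacities


def predicted_quality(size: float, opacity: float) -> float:
    # Stand-in for model.predict(features(size, opacity)).
    return 1.0 - 0.1 * abs(size - 4) - abs(opacity - 0.6)


candidates = [(s, o, predicted_quality(s, o))
              for s, o in product(point_sizes, opacities)]
best_size, best_opacity, best_score = max(candidates, key=lambda c: c[2])
print(f"best design: size={best_size}, opacity={best_opacity} "
      f"(predicted quality {best_score:.2f})")
```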

Goals

Novel models for visual perception and design decisions.

A new user-oriented research methodology.

Evaluating and characterizing the methodology.

Fig. 1: Illustration of the proposed learning-based methodology, using class separation as an example. This novel user-oriented testing methodology will help us bridge quantitative and qualitative methods.

Fig. 2: A typical perceptual task that could be modeled using our methodology is class separation in scatterplots.
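As a rough illustration of this idea (not the project's actual pipeline), the sketch below trains a regressor to predict the perceived class separation of two-class scatterplots from simple geometric features. Both the features and the "human ratings", which are synthesized here from the known class distance, are placeholders for data that would come from a real perception study.

```python
# Minimal sketch under simplifying assumptions: learn to predict human-judged
# class separation of a two-class scatterplot from simple geometric features.
# Features and ratings are synthetic placeholders for real study data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score


def separation_features(points, labels):
    """Per-plot features: centroid distance, mean class spread, and their ratio."""
    a, b = points[labels == 0], points[labels == 1]
    centroid_dist = np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
    spread = (a.std() + b.std()) / 2.0
    return [centroid_dist, spread, centroid_dist / (spread + 1e-9)]


rng = np.random.default_rng(42)
X, y = [], []
for _ in range(300):
    # Two Gaussian classes at a random distance; the "rating" is a noisy
    # function of that distance (in practice it would come from participants).
    dist = rng.uniform(0.0, 5.0)
    points = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
                        rng.normal([dist, 0.0], 1.0, size=(50, 2))])
    labels = np.repeat([0, 1], 50)
    X.append(separation_features(points, labels))
    y.append(np.clip(dist / 5.0 + rng.normal(0.0, 0.05), 0.0, 1.0))

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, np.array(X), np.array(y), cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```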

Publications

  1. K. Angerbauer and M. Sedlmair, “Toward Inclusion and Accessibility in Visualization Research: Speculations on Challenges, Solution Strategies, and Calls for Action (Position Paper),” in 2022 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), Oct. 2022, pp. 20–27. doi: 10.1109/BELIV57783.2022.00007.
  2. S. Dosdall, K. Angerbauer, L. Merino, M. Sedlmair, and D. Weiskopf, “Toward In-Situ Authoring of Situated Visualization with Chorded Keyboards,” in 15th International Symposium on Visual Information Communication and Interaction (VINCI 2022), Chur, Switzerland, August 16-18, 2022, M. Burch, G. Wallner, and D. Limberger, Eds. ACM, 2022, pp. 1–5. doi: 10.1145/3554944.3554970.
  3. Q. Q. Ngo, F. L. Dennig, D. A. Keim, and M. Sedlmair, “Machine Learning Meets Visualization – Experiences and Lessons Learned,” it - Information Technology, vol. 64, no. 4–5, 2022, doi: 10.1515/itit-2022-0034.
  4. K. Klein, M. Sedlmair, and F. Schreiber, “Immersive Analytics: An Overview,” it - Information Technology, vol. 64, no. 4–5, 2022, doi: 10.1515/itit-2022-0037.
  5. P. Fleck, A. Sousa Calepso, S. Hubenschmid, M. Sedlmair, and D. Schmalstieg, “RagRug: A Toolkit for Situated Analytics,” IEEE Transactions on Visualization and Computer Graphics, 2022, doi: 10.1109/TVCG.2022.3157058.
  6. K. Angerbauer et al., “Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA: Association for Computing Machinery, 2022. doi: 10.1145/3491102.3502133.
  7. K. Lu et al., “Palettailor: Discriminable Colorization for Categorical Data,” IEEE Transactions on Visualization & Computer Graphics, vol. 27, no. 2, 2021, doi: 10.1109/TVCG.2020.3030406.
  8. N. Grossmann, J. Bernard, M. Sedlmair, and M. Waldner, “Does the Layout Really Matter? A Study on Visual Model Accuracy Estimation,” in IEEE Visualization Conference (VIS, Short Paper), 2021, pp. 61–65. doi: 10.1109/VIS49827.2021.9623326.
  9. C. Morariu, A. Bibal, R. Cutura, B. Frenay, and M. Sedlmair, “DumbleDR: Predicting User Preferences of Dimensionality Reduction Projection Quality,” arXiv preprint, Technical Report arXiv:2105.09275, 2021. [Online]. Available: https://arxiv.org/abs/2105.09275
  10. G. J. Rijken et al., “Illegible Semantics: Exploring the Design Space of Metal Logos,” in IEEE VIS alt.VIS Workshop, 2021. [Online]. Available: https://arxiv.org/abs/2109.01688
  11. J. Bernard, M. Hutter, M. Sedlmair, M. Zeppelzauer, and T. Munzner, “A Taxonomy of Property Measures to Unify Active Learning and Human-centered Approaches to Data Labeling,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 11, no. 3–4, 2021, doi: 10.1145/3439333.
  12. R. Cutura, C. Morariu, Z. Cheng, Y. Wang, D. Weiskopf, and M. Sedlmair, “Hagrid — Gridify Scatterplots with Hilbert and Gosper Curves,” in The 14th International Symposium on Visual Information Communication and Interaction, Potsdam, Germany: Association for Computing Machinery, 2021, pp. 1:1–1:8. doi: 10.1145/3481549.3481569.
  13. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “ProSeCo: Visual analysis of class separation measures and dataset characteristics,” Computers & Graphics, vol. 96, pp. 48–60, 2021, doi: 10.1016/j.cag.2021.03.004.
  14. M. Kraus et al., “Immersive Analytics with Abstract 3D Visualizations: A Survey,” Computer Graphics Forum, 2021, doi: 10.1111/cgf.14430.
  15. M. Kraus, K. Klein, J. Fuchs, D. A. Keim, F. Schreiber, and M. Sedlmair, “The Value of Immersive Visualization,” IEEE Computer Graphics and Applications (CG&A), vol. 41, no. 4, 2021, doi: 10.1109/MCG.2021.3075258.
  16. C. Bu et al., “SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, 2021, doi: 10.1109/TVCG.2020.3030404.
  17. R. Cutura, K. Angerbauer, F. Heyen, N. Hube, and M. Sedlmair, “DaRt: Generative Art using Dimensionality Reduction Algorithms,” in 2021 IEEE VIS Arts Program (VISAP), IEEE, 2021, pp. 59–72. doi: 10.1109/VISAP52981.2021.00013.
  18. C. Krauter, J. Vogelsang, A. S. Calepso, K. Angerbauer, and M. Sedlmair, “Don’t Catch It: An Interactive Virtual-Reality Environment to Learn About COVID-19 Measures Using Gamification Elements,” in Mensch und Computer, ACM, 2021, pp. 593–596. doi: 10.1145/3473856.3474031.
  19. A. Streichert, K. Angerbauer, M. Schwarzl, and M. Sedlmair, “Comparing Input Modalities for Shape Drawing Tasks,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Papers (ETRA-SP), ACM, 2020, pp. 1–5. doi: 10.1145/3379156.3391830.
  20. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 546:1–546:14. doi: 10.1145/3313831.3376675.
  21. L. Merino, M. Schwarzl, M. Kraus, M. Sedlmair, D. Schmalstieg, and D. Weiskopf, “Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019),” in IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020. doi: 10.1109/ISMAR50242.2020.00069.
  22. P. Balestrucci et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in 2020 IEEE Workshop on Evaluation and Beyond-Methodological Approaches to Visualization (BELIV), IEEE, 2020, pp. 11–18. doi: 10.1109/BELIV51497.2020.00009.
  23. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Paper (ETRA-SP), ACM, 2020, pp. 49:1–49:5. doi: 10.1145/3379156.3391835.
  24. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces, ACM, 2020, pp. 9:1–9:9. doi: 10.1145/3399715.3399814.
  25. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany: ACM, 2020, pp. 50:1–50:5. doi: 10.1145/3379156.3391829.
  26. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12. doi: 10.1145/3313831.3376266.
  27. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2020, pp. LBW087:1–LBW087:7. doi: 10.1145/3334480.3383017.
  28. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “SepEx: Visual Analysis of Class Separation Measures,” in Proceedings of the International Workshop on Visual Analytics (EuroVA), C. Turkay and K. Vrotsou, Eds. The Eurographics Association, 2020, pp. 1–5. doi: 10.2312/eurova.20201079.
  29. M. Kraus et al., “A Comparative Study of Orientation Support Tools in Virtual Reality Environments with Virtual Teleportation,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020, pp. 227–238. doi: 10.1109/ISMAR50242.2020.00046.
  30. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, 2019, doi: 10.1109/TVCG.2019.2934796.
  31. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS), IEEE, 2019, pp. 141–145. doi: 10.1109/VISUAL.2019.8933620.
  32. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, 2018, doi: 10.1109/TVCG.2017.2701829.
  33. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), J. Johansson, F. Sadlo, and T. Schreck, Eds. Eurographics Association, 2018, pp. 119–123. doi: 10.2312/eurovisshort.20181089.
  34. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, 2018, doi: 10.1109/TVCG.2017.2744138.
  35. M. Aupetit and M. Sedlmair, “SepMe: 2002 New Visual Separation Measures,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), C. Hansen, I. Viola, and X. Yuan, Eds. IEEE, 2016, pp. 1–8. doi: 10.1109/PACIFICVIS.2016.7465244.
  36. M. Sedlmair and M. Aupetit, “Data-driven Evaluation of Visual Quality Measures,” Computer Graphics Forum, vol. 34, no. 3, 2015, doi: 10.1111/cgf.12632.