C06 | User-Adaptive Mixed Reality

Dr. Lewis L. Chuang, LMU Munich

Prof. Dr. Albrecht Schmidt, LMU Munich

Prof. Dr. Harald Reiterer, University of Konstanz

Francesco Chiossi, LMU Munich

Jesse Grootjen, LMU Munich

Mixed reality (MR) systems span the entire spectrum between physical reality and virtual reality (VR). This spectrum includes instances that overlay virtual content on the physical environment, i.e., Augmented Reality (AR), as well as instances that incorporate physical content to increase the realism of virtual environments, i.e., Augmented Virtuality (AV). In such instances, the blend of physical and virtual content tends to be pre-defined.

This project will investigate whether this blend can instead adapt to user states inferred from physiological measurements: gaze behavior, peripheral physiology (e.g., electrodermal activity (EDA) and electrocardiography (ECG)), and cortical activity (i.e., electroencephalography (EEG)). In other words, we will investigate the viability and usefulness of MR scenarios that vary their blend of virtual and physical content according to the user's physiology.
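To make this concrete, a minimal Python sketch of such an inference is given below: an arousal estimate in [0, 1], computed from a short window of EDA samples and z-scored against a per-user resting baseline. The window length, baseline statistics, and logistic squashing are illustrative assumptions for this sketch, not the project's actual pipeline.

    import numpy as np

    def arousal_index(eda_window: np.ndarray,
                      baseline_mean: float,
                      baseline_std: float) -> float:
        """Estimate arousal in [0, 1] from a window of skin-conductance
        samples (microsiemens), z-scored against a resting baseline
        recorded for this user before the MR session."""
        z = (eda_window.mean() - baseline_mean) / max(baseline_std, 1e-6)
        # Logistic squashing bounds the estimate so it can directly
        # drive a rendering parameter later on.
        return float(1.0 / (1.0 + np.exp(-z)))

    # Example: a 5-second window sampled at 32 Hz (simulated data).
    rng = np.random.default_rng(seed=1)
    window = 2.0 + 0.3 * rng.standard_normal(160)
    print(arousal_index(window, baseline_mean=1.8, baseline_std=0.25))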

In particular, we intend to investigate how inferred states of user arousal and attention can be leveraged to create MR scenarios that benefit the user's ability to process information.
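How such an inferred state could then drive the blend is sketched below, again under assumed parameters: one plausible policy keeps the environment fully virtual while arousal stays within a target band, fades towards physical reality above it, and smooths the blend factor per frame so the environment never switches abruptly. The threshold and smoothing constant are hypothetical.

    def target_blend(arousal: float, high: float = 0.75) -> float:
        """Map arousal to the share of virtual content shown
        (0 = fully physical, 1 = fully virtual): full virtuality
        up to the band's upper bound, a linear fade-out above it."""
        if arousal <= high:
            return 1.0
        return max(0.0, 1.0 - (arousal - high) / (1.0 - high))

    def smoothed_blend(previous: float, target: float,
                       alpha: float = 0.05) -> float:
        """Exponential smoothing, applied once per rendered frame."""
        return (1.0 - alpha) * previous + alpha * target

The gradual update matters as much as the mapping itself: abrupt transitions between physical and virtual content would perturb the very user state being measured.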

This work will build on the expertise and experience acquired in Projects C02 and C03.

The areas of application for MR scenarios are diverse. Possible applications include haptic assembly, automated vehicle cockpits, and team-based analyses of neuroscience and biochemistry datasets.

Research Questions

To what extent can MR systems rely on physiological inputs to infer user states and expectations and, in doing so, adapt their visualization in response?

How much information can we provide to users of MR systems, across the various sensory modalities, without resulting in ‘information overload’?

How can users transition between physical and virtual reality and what means should be employed to facilitate this process?

How can computer-supported cooperative work be implemented in a single MR environment that is informed by the physiological inputs of multiple users?

Fig. 1: Virtual graphical rendering allows us to create instances that vary between physical and virtual reality.

Fig. 2: Example of an MR workspace enabling gradual blending between a physical and a virtual environment.

Publications

  1. C. Schulz, M. Burch, and D. Weiskopf, Visual Data Cleansing of Eye Tracking Data, 2015.
  2. K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2016, pp. 11–18.
  3. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D scalar fields,” IEEE Transactions on Visualization and Computer Graphics, 2019.
  4. C. Schulz et al., “Generative Data Models for Validation and Evaluation of Visualization Techniques,” in BELIV Workshop 2016, 2016, pp. 112–124.
  5. D. Weiskopf, M. Burch, L. L. Chuang, B. Fischer, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016.
  6. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual Analysis and Coding of Data-Rich User Behavior,” 2016.
  7. J. Görtler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble Treemaps for Uncertainty Visualization,” IEEE Transactions on Visualization and Computer Graphics, 2018.
  8. J. Görtler, R. Kehlbeck, and O. Deussen, “A visual exploration of Gaussian processes,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), 2018.
  9. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Extended Papers of ETVIS 2015, 2016.
  10. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-based aspect ratio selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, 2019.
  11. C. Schulz, A. Nocaj, J. Görtler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, 2017.
  12. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an interpretable latent space,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), 2018.
  13. K. Srulijes et al., “Visualization of eye-head coordination while walking in healthy subjects and patients with neurodegenerative diseases,” 2017.
  14. K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,” 2015.
  15. P. Gralka, C. Schulz, G. Reina, D. Weiskopf, and T. Ertl, “Visual Exploration of Memory Traces and Call Stacks,” in 2017 IEEE Working Conference on Software Visualization (VISSOFT), 2017.
  16. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in 2018 IEEE Working Conference on Software Visualization (VISSOFT), 2018, pp. 87–95.
  17. C. Schulz, N. Rodrigues, K. Damarla, A. Henicke, and D. Weiskopf, “Visual Exploration of Mainframe Workloads,” in SA ’17 Symposium on Visualization, 2017.