C06 | User-Adaptive Mixed Reality

Jun.-Prof. Sven Mayer, LMU Munich
Email | Website

Prof. Albrecht Schmidt, LMU Munich
Email | Website


Prof. Harald Reiterer, University of Konstanz
Email | Website

Jun.-Prof. Tiare Feuchtner, University of Konstanz
Email | Website


Francesco Chiossi, LMU Munich – Email | Website

Kathrin Schnizer, LMU Munich – Email | Website

Mixed reality (MR) systems span the entire spectrum from physical to virtual reality (VR). This includes instances that overlay virtual content on the physical environment, i.e., Augmented Reality (AR), as well as those that incorporate physical content to increase the realism of virtual environments, i.e., Augmented Virtuality (AV). In such instances, the blend of physical and virtual content tends to be pre-defined.

This project will investigate whether this blend can adapt to user states inferred from physiological measurements derived from gaze behavior, peripheral physiology (e.g., electrodermal activity (EDA) and electrocardiography (ECG)), and cortical activity (i.e., electroencephalography (EEG)). In other words, we will investigate the viability and usefulness of MR use scenarios that vary their blend of virtual and physical content according to the user's physiology.

In particular, we intend to investigate how inferred states of user arousal and attention can be leveraged for creating MR scenarios that benefit the user’s ability to process information.
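As an illustration of this idea, the following sketch shows how inferred arousal and attention could drive the virtuality blend. All names (`UserState`, `adapt_blend`) and thresholds are hypothetical placeholders, not part of the project's actual method; the policy shown (fade the virtual content out under high arousal, allow more of it under focused attention) is only one possible mapping.

```python
from dataclasses import dataclass


@dataclass
class UserState:
    """Hypothetical user state inferred from physiological signals."""
    arousal: float    # normalized 0..1, e.g. derived from EDA
    attention: float  # normalized 0..1, e.g. derived from EEG


def adapt_blend(state: UserState, current_blend: float, step: float = 0.1) -> float:
    """Nudge the virtuality blend (0 = fully physical, 1 = fully virtual).

    Illustrative policy: high arousal suggests possible overload, so fade
    the physical surroundings back in; high attention with moderate arousal
    affords more virtual content; otherwise hold the blend steady.
    """
    if state.arousal > 0.7:
        target = current_blend - step   # reduce virtual content
    elif state.attention > 0.6:
        target = current_blend + step   # allow more virtual content
    else:
        target = current_blend          # hold steady
    return min(1.0, max(0.0, target))   # clamp to the valid range
```

In a running system such a controller would be called periodically with freshly inferred state estimates, and the returned blend value would parameterize the rendering, e.g. the opacity of the virtual layer.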

This will build on the acquired expertise and experience of Projects C02 and C03.

The areas of application for MR scenarios are diverse. Possible applications include haptic assembly, automated vehicle cockpits, and teamwork analyses of neuroscience and biochemistry datasets.

Research Questions

To what extent can MR systems rely on physiological inputs to infer user state and expectations and, in doing so, adapt their visualization in response?

How much information can we provide to users of MR systems, across the various sensory modalities, without resulting in ‘information overload’?

How can users transition between physical and virtual reality and what means should be employed to facilitate this process?

How can computer-supported cooperative work be implemented in a single MR environment that is informed by the physiological inputs of multiple users?

Fig. 1: Virtual graphical rendering allows us to create instances that vary between physical and virtual reality.

Fig. 2: Example of an MR workspace enabling gradual blending between a physical and a virtual environment.

Project Group A

Models and Measures

Project Group B

Adaptive Algorithms

Project Group C

Interaction

Project Group D

Applications