State-of-the-art multimedia systems communicate with the user predominantly via the audiovisual channels, which allow for remote interaction. To truly get in touch with and physically interact with multimedia content, however, such multisensory systems must also involve the haptic modality. Physical interaction capabilities for multimedia systems are gradually conquering the consumer market as well, as can be seen, for example, with the recently released Oculus™ Touch controller. Another emerging trend is that multisensory systems and interactive displays are becoming increasingly adaptable. This adaptability can affect the content, the interface, or the interaction capabilities of the system. All of this requires the human user who interacts with these systems and displays to adapt as well, which can lead to interesting dynamics of dyadic adaptation between the human user and the “intelligent” multisensory system or display. The goal of this proposal is therefore to investigate human adaptation to multisensory systems in such mutual learning situations, in order to gain a better understanding of human sensorimotor learning processes and to provide guidelines, evaluation criteria, and recommendations for the design of adaptable multimedia systems.
What are the determinants of the human adaptation rate in these mutual learning situations?
How does learning depend on statistical parameters such as precision, accuracy, reliability, and predictability?
How does learning depend on the active vs. passive nature of different sensory modalities?
How do humans handle instabilities in this learning context?
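The role of statistical parameters such as precision and reliability can be made concrete with a standard model from the multisensory perception literature: reliability-weighted (maximum-likelihood) cue integration, in which each sensory estimate is weighted by its inverse variance. The sketch below is purely illustrative and is not taken from the proposal; the function name, the Gaussian-cue assumption, and the example numbers are our own.

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Combine independent Gaussian cue estimates by reliability weighting.

    Reliability is the inverse variance (precision) of each cue; the
    combined estimate is the reliability-weighted mean, and the combined
    variance is always lower than that of any single cue.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined_estimate = float(np.dot(weights, estimates))
    combined_variance = float(1.0 / reliabilities.sum())
    return combined_estimate, combined_variance

# Hypothetical example: a precise haptic cue (variance 1.0) dominates a
# noisy visual cue (variance 4.0).
est, var = integrate_cues([10.0, 14.0], [1.0, 4.0])
print(est, var)  # -> 10.8 0.8
```

In such models, a change in the reliability of one modality shifts the weights the observer should assign, which is one concrete sense in which adaptation can be said to depend on precision and reliability.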
Models and Measures