F. Chiossi et al., “Adapting visualizations and interfaces to the user,” it - Information Technology, vol. 64, no. 4–5, 2022, doi: 10.1515/itit-2022-0035.
Abstract
Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.
P. Balestrucci, D. Wiebusch, and M. O. Ernst, “ReActLab: A Custom Framework for Sensorimotor Experiments ‘in-the-wild,’” Frontiers in Psychology, vol. 13, Jun. 2022, doi: 10.3389/fpsyg.2022.906643.
Abstract
Over the last few years online platforms for running psychology experiments beyond simple questionnaires and surveys have become increasingly popular. This trend has especially increased after many laboratory facilities had to temporarily avoid in-person data collection following COVID-19-related lockdown regulations. Yet, while offering a valid alternative to in-person experiments in many cases, platforms for online experiments are still not a viable solution for a large part of human-based behavioral research. Two situations in particular pose challenges: First, when the research question requires design features or participant interaction which exceed the customization capability provided by the online platform; and second, when variation among hardware characteristics between participants results in an inadmissible confounding factor. To mitigate the effects of these limitations, we developed ReActLab (Remote Action Laboratory), a framework for programming remote, browser-based experiments using freely available and open-source JavaScript libraries. Since the experiment is run entirely within the browser, our framework allows for portability to any operating system and many devices. In our case, we tested our approach by running experiments using only a specific model of Android tablet. Using ReActLab with this standardized hardware allowed us to optimize our experimental design for our research questions, as well as collect data outside of laboratory facilities without introducing setup variation among participants. In this paper, we describe our framework and show examples of two different experiments carried out with it: one consisting of a visuomotor adaptation task, the other of a visual localization task. Through comparison with results obtained from similar tasks in in-person laboratory settings, we discuss the advantages and limitations for developing browser-based experiments using our framework.
J. Zagermann et al., “Complementary Interfaces for Visual Computing,” it - Information Technology, vol. 64, no. 4–5, 2022, doi: 10.1515/itit-2022-0031.
Abstract
With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user’s workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term complementary interfaces for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from within our own research.
P. Balestrucci, V. Maffei, F. Lacquaniti, and A. Moscatelli, “The Effects of Visual Parabolic Motion on the Subjective Vertical and on Interception,” Neuroscience, vol. 453, pp. 124–137, Jan. 2021, doi: 10.1016/j.neuroscience.2020.09.052.
Abstract
Observers typically present a strong bias in estimating the orientation of a visual bar when their body is tilted >60° in the roll plane and in the absence of visual background information. Known as the A-effect, this phenomenon likely results from the under-compensation of body tilt. Static visual cues can reduce such bias in the perceived vertical. Yet, it is unknown whether dynamic visual cues would also be effective. Here we presented projectile motions of a visual target along parabolic trajectories with different orientations relative to physical gravity. The aim of the experiment was twofold: First, we assessed whether the projectile motions could bias the estimation of the perceived orientation of a visual bar, measured with a classical subjective visual vertical (SVV) task. Second, we evaluated whether the ability to estimate time-to-contact of the visual target in an interception task was influenced by the orientation of these parabolic trajectories. Two groups of participants performed the experiment, either with their head and body tilted 90° along the roll plane or in an upright position. We found that the perceived orientation of the visual bar in the SVV task was affected by the orientation of the parabolic trajectories. This result was present in the tilted but not in the upright participants. In the interception task, the timing error increased linearly as a function of the orientation of the parabola. These results support the hypothesis that a gravity vector estimated from dynamic visual stimuli contributes to the subjective visual vertical.
P. Balestrucci et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in 2020 IEEE Workshop on Evaluation and Beyond-Methodological Approaches to Visualization (BELIV). IEEE, 2020, pp. 11–18. doi: 10.1109/BELIV51497.2020.00009.
Abstract
Among the many changes brought about by the COVID-19 pandemic, one of the most pressing for scientific research concerns user testing. For the researchers who conduct studies with human participants, the requirements for social distancing have created a need for reflecting on methodologies that previously seemed relatively straightforward. It has become clear from the emerging literature on the topic and from first-hand experiences of researchers that the restrictions due to the pandemic affect every aspect of the research pipeline. The current paper offers an initial reflection on user-based research, drawing on the authors' own experiences and on the results of a survey that was conducted among researchers in different disciplines, primarily psychology, human-computer interaction (HCI), and visualization communities. While this sampling of researchers is by no means comprehensive, the multi-disciplinary approach and the consideration of different aspects of the research pipeline allow us to examine current and future challenges for user-based research. Through an exploration of these issues, this paper also invites others in the VIS community, as well as in the wider research community, to reflect on and discuss the ways in which the current crisis might also present new and previously unexplored opportunities.
P. Balestrucci and M. Ernst, “Visuo-motor adaptation during interaction with a user-adaptive system,” Journal of Vision, vol. 19, p. 187a, Sep. 2019, doi: 10.1167/19.10.187a.
Abstract
User-adaptive systems are a current trend in technological development. Such systems are designed to sense the user’s status based on ongoing interaction and automatically change certain features (e.g. content, interface, or interaction capabilities) in order to provide a targeted, personalized experience. In this scenario, users are likely to adapt to the evolving characteristics of the interaction (Burge et al., 2008), changing their own behavior to correctly interact with such systems and thereby leading to dynamics of mutual adaptation between human and machine. We investigated such mutual adaptation dynamics within a visuo-motor adaptation paradigm. Participants were instructed to perform fast pointing movements on a graphic tablet as accurately as possible, while also seeking to minimize the error between target and feedback location on a screen in front of them. The feedback location reflected the pointing performance of the user according to the underlying tablet-to-screen mapping, which changed systematically over time due to the introduction of a step offset. Concurrently, an adaptive algorithm corrected the feedback location according to an estimation of the participant’s error, thus contributing to the reduction of the displayed error over trials. In different experimental conditions, the extent of such contributions varied systematically, and we measured the adaptive performance of the human-machine system as a whole, as well as the underlying motor performance of participants. The greater the correction introduced by the adaptive algorithm, the more effective was the joint system in reducing visual error after the introduction of the step offset. On the other hand, when considering the human’s motor behavior alone, the pointing error did not decrease, but tended to increase over time with higher contributions from the algorithm. Our findings indicate that, in order to obtain desired outcomes from interactions with user-adaptive technology, the sensorimotor mechanisms underlying such interactions must be considered.
T. Machulla, L. Chuang, F. Kiss, M. O. Ernst, and A. Schmidt, “Sensory Amplification Through Crossmodal Stimulation,” in Proceedings of the CHI Workshop on Amplification and Augmentation of Human Perception, 2017.
T. Waltemate et al., “The Impact of Latency on Perceptual Judgments and Motor Performance in Closed-loop Interaction in Virtual Reality,” in Proceedings of the ACM Conference on Virtual Reality Software and Technology (VRST), D. Kranzlmüller and G. Klinker, Eds. ACM, 2016, pp. 27–35. doi: 10.1145/2993369.2993381.
Abstract
Latency between a user's movement and visual feedback is inevitable in every Virtual Reality application, as signal transmission and processing take time. Unfortunately, a high end-to-end latency impairs perception and motor performance. While it is possible to reduce feedback delay to tens of milliseconds, these delays will never completely vanish. Currently, there is a gap in literature regarding the impact of feedback delays on perception and motor performance as well as on their interplay in virtual environments employing full-body avatars. With the present study at hand, we address this gap by performing a systematic investigation of different levels of delay across a variety of perceptual and motor tasks during full-body action inside a Cave Automatic Virtual Environment. We presented participants with their virtual mirror image, which responded to their actions with feedback delays ranging from 45 to 350 ms. We measured the impact of these delays on motor performance, sense of agency, sense of body ownership and simultaneity perception by means of psychophysical procedures. Furthermore, we looked at interaction effects between these aspects to identify possible dependencies. The results show that motor performance and simultaneity perception are affected by latencies above 75 ms. Although sense of agency and body ownership only decline at a latency higher than 125 ms, and deteriorate for a latency greater than 300 ms, they do not break down completely even at the highest tested delay. Interestingly, participants perceptually infer the presence of delays more from their motor error in the task than from the actual level of delay. Whether or not participants notice a delay in a virtual environment might therefore depend on the motor task and their performance rather than on the actual delay.