T. Kosch, J. Karolus, J. Zagermann, H. Reiterer, A. Schmidt, and P. W. Woźniak, “A Survey on Measuring Cognitive Workload in Human-Computer Interaction,”
ACM Computing Surveys, Jan. 2023, doi:
10.1145/3582272.
Abstract
The ever-increasing number of computing devices around us results in more and more systems competing for our attention, making cognitive workload a crucial factor for the user experience of human-computer interfaces. Research in Human-Computer Interaction (HCI) has used various metrics to determine users’ mental demands. However, there is no systematic way to choose an appropriate and effective measure of cognitive workload for experimental setups, which poses a challenge to reproducibility. To address this challenge, we present a literature survey of past and current metrics for cognitive workload used throughout the HCI literature. By first exploring what cognitive workload constitutes in the HCI context, we derive a categorization that supports researchers and practitioners in selecting cognitive workload metrics for system design and evaluation. We conclude with the following three research gaps: (1) defining and interpreting cognitive workload in HCI, (2) the hidden cost of the NASA-TLX, and (3) HCI research as a catalyst for workload-aware systems, highlighting that HCI research has to deepen and conceptualize the understanding of cognitive workload in the context of interactive computing systems.
C. Schneegass, V. Füseschi, V. Konevych, and F. Draxler, “Investigating the Use of Task Resumption Cues to Support Learning in Interruption-Prone Environments,”
Multimodal Technologies and Interaction, vol. 6, no. 1, Art. no. 1, 2022, doi:
10.3390/mti6010002.
Abstract
The ubiquity of mobile devices in people’s everyday life makes them a feasible tool for language learning. Learning anytime and anywhere creates great flexibility but comes with the inherent risk of infrequent learning and learning in interruption-prone environments. No matter the length of the learning break, it can negatively affect knowledge consolidation and recall. This work presents the design and implementation of memory cues to support task resumption in mobile language learning applications and two evaluations to assess their impact on user experience. An initial laboratory experiment (N=15) revealed that while the presentation of the cues had no significant effect on objective performance measures (task completion time and error rate), the users still perceived the cues as helpful and would appreciate them in a mobile learning app. A follow-up study (N=16) investigated revised cue designs in a real-world field setting and found that users particularly appreciated our interactive test cue design. We discuss the strengths and limitations of our concept and implications for the application of task resumption cues beyond the scope of mobile language learning.
F. Chiossi
et al., “Adapting visualizations and interfaces to the user,”
it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi:
10.1515/itit-2022-0035.
Abstract
Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.
F. Chiossi, R. Welsch, S. Villa, L. Chuang, and S. Mayer, “Virtual Reality Adaptation Using Electrodermal Activity to Support the User Experience,”
Big Data and Cognitive Computing, vol. 6, no. 2, Art. no. 2, 2022, doi:
10.3390/bdcc6020055.
Abstract
Virtual reality is increasingly used for tasks such as work and education. Thus, rendering scenarios that neither interfere with such goals nor deplete the user experience is becoming progressively more relevant. We present a physiologically adaptive system that optimizes the virtual environment based on physiological arousal, i.e., electrodermal activity. We investigated the usability of the adaptive system in a simulated social virtual reality scenario. Participants completed an n-back task (primary) and a visual detection task (secondary). Here, we adapted the visual complexity of the secondary task, i.e., the number of non-player characters, to support accomplishing the primary task. We show that an adaptive virtual reality can improve users’ comfort by adjusting the task complexity to physiological arousal. Our findings suggest that physiologically adaptive virtual reality systems can improve users’ experience in a wide range of scenarios.
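A minimal sketch of such an arousal-driven adaptation loop, for illustration only: the thresholds, sampling interface, and non-player-character bounds are assumptions, not values from the paper.

```python
# Illustrative arousal-driven adaptation of scene complexity.
# Assumes a smoothed, baseline-corrected EDA value in [0, 1] is provided externally;
# thresholds and NPC bounds are hypothetical.

def adapt_npc_count(current_npcs: int, eda_normalized: float,
                    high: float = 0.7, low: float = 0.3,
                    min_npcs: int = 0, max_npcs: int = 20) -> int:
    """Lower visual complexity when arousal is high, raise it when arousal is low."""
    if eda_normalized > high:            # over-aroused: simplify the scene
        return max(min_npcs, current_npcs - 1)
    if eda_normalized < low:             # under-stimulated: enrich the scene
        return min(max_npcs, current_npcs + 1)
    return current_npcs                  # comfortable range: keep complexity

npcs = 10
for eda in [0.8, 0.75, 0.5, 0.2]:        # example stream of normalized EDA readings
    npcs = adapt_npc_count(npcs, eda)
    print(npcs)                          # -> 9, 8, 8, 9
```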
D. Dietz
et al., “Walk This Beam: Impact of Different Balance Assistance Strategies and Height Exposure on Performance and Physiological Arousal in VR,” in
28th ACM Symposium on Virtual Reality Software and Technology, 2022, pp. 1–12. doi:
10.1145/3562939.3567818.
Abstract
Dynamic balance is an essential skill for the human upright gait; therefore, regular balance training can improve postural control and reduce the risk of injury. Even slight variations in walking conditions like height or ground conditions can significantly impact walking performance. Virtual reality is used as a helpful tool to simulate such challenging situations. However, there is no agreement on design strategies for balance training in virtual reality under stressful environmental conditions such as height exposure. We investigate how two different training strategies, imitation learning and gamified learning, can support dynamic balance control performance across different stress conditions. Moreover, we evaluate the stress response as indexed by peripheral physiological measures of stress, perceived workload, and user experience. Both approaches were tested against a baseline of no instructions and against each other. Thereby, we show that a learning-by-imitation approach immediately helps dynamic balance control, decreases stress, improves attention focus, and diminishes perceived workload. A gamified approach can lead to users being overwhelmed by the additional task. Finally, we discuss how our approaches could be adapted for balance training and applied to injury rehabilitation and prevention.
T. Kosch, R. Welsch, L. Chuang, and A. Schmidt, “The Placebo Effect of Artificial Intelligence in Human-Computer Interaction,”
ACM Transactions on Computer-Human Interaction, 2022, doi:
10.1145/3529225.
Abstract
In medicine, patients can obtain real benefits from a sham treatment. These benefits are known as the placebo effect. We report two experiments (Experiment I: N=369; Experiment II: N=100) demonstrating a placebo effect in adaptive interfaces. Participants were asked to solve word puzzles while being supported by no system or an adaptive AI interface. All participants experienced the same word puzzle difficulty and had no support from an AI throughout the experiments. Our results showed that the belief of receiving adaptive AI support increases expectations regarding the participants’ own task performance, and that these expectations are sustained after the interaction. These expectations were positively correlated with performance, as indicated by the number of solved word puzzles. We integrate our findings into technology acceptance theories and discuss implications for the future assessment of AI-based user interfaces and novel technologies. We argue that system descriptions can elicit placebo effects through user expectations, biasing the results of user-centered studies.
A. Huang, P. Knierim, F. Chiossi, L. L. Chuang, and R. Welsch, “Proxemics for Human-Agent Interaction in Augmented Reality,” in
CHI Conference on Human Factors in Computing Systems, 2022, pp. 1–13. doi:
10.1145/3491102.3517593.
Abstract
Augmented Reality (AR) embeds virtual content in physical spaces, including virtual agents that are known to exert a social presence on users. Existing design guidelines for AR rarely consider the social implications of an agent’s personal space (PS) and that it can impact user behavior and arousal. We report an experiment (N=54) where participants interacted with agents in an AR art gallery scenario. When participants approached six virtual agents (i.e., two males, two females, a humanoid robot, and a pillar) to ask for directions, we found that participants respected the agents’ PS and modulated interpersonal distances according to the human-like agents’ perceived gender. When participants were instructed to walk through the agents, we observed heightened skin-conductance levels that indicate physiological arousal. These results are discussed in terms of proxemic theory, resulting in design recommendations for implementing pervasive AR experiences with virtual agents.
J. Zagermann
et al., “Complementary Interfaces for Visual Computing,”
it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi:
10.1515/itit-2022-0031.
Abstract
With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user’s workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term complementary interfaces for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from within our own research.
F. Draxler, C. Schneegass, J. Safranek, and H. Hussmann, “Why Did You Stop? - Investigating Origins and Effects of Interruptions during Mobile Language Learning,” in
Mensch und Computer 2021, Ingolstadt, Germany: Association for Computing Machinery, 2021, pp. 21–33. doi:
10.1145/3473856.3473881.
Abstract
The technological advances of smartphones facilitate the transformation of learning from the classroom to an activity that can happen anywhere and anytime. While micro-learning fosters ubiquitous learning, this flexibility comes at the cost of having an uncontrolled learning environment. To this point, we know little about the usage of mobile learning applications, particularly the occurrence of interruptions and the harm they cause. By diverting users’ attention away from the learning task, interruptions can potentially compromise learning performance. We present a four-week in-the-wild study (N = 12) where we investigate learning behavior and the occurrence of interruptions based on device logging and experience sampling questionnaires. We recorded 276 interruptions in 327 learning sessions and found that interruption type as well as users’ context influence learning sessions and the severity of the interruption (i.e., session termination likeliness). We discuss challenges and opportunities for the design of automated mechanisms to detect and mitigate interruptions in mobile learning.
D. Bethge
et al., “VEmotion: Using Driving Context for Indirect Emotion Prediction in Real-Time,” in
The 34th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA: Association for Computing Machinery, 2021, pp. 638–651. doi:
10.1145/3472749.3474775.
Abstract
Detecting emotions while driving remains a challenge in Human-Computer Interaction. Current methods to estimate the driver’s experienced emotions use physiological sensing (e.g., skin-conductance, electroencephalography), speech, or facial expressions. However, drivers need to use wearable devices, perform explicit voice interaction, or require robust facial expressiveness. We present VEmotion (Virtual Emotion Sensor), a novel method to predict driver emotions in an unobtrusive way using contextual smartphone data. VEmotion analyzes information including traffic dynamics, environmental factors, in-vehicle context, and road characteristics to implicitly classify driver emotions. We demonstrate the applicability in a real-world driving study (N = 12) to evaluate the emotion prediction performance. Our results show that VEmotion outperforms facial expressions by 29% in a person-dependent classification and by 8.5% in a person-independent classification. We discuss how VEmotion enables empathic car interfaces to sense the driver’s emotions and will provide in-situ interface adaptations on-the-go.
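The general idea of inferring an emotion label from contextual features can be sketched as follows; the feature set, labels, and random-forest classifier are illustrative assumptions, not the VEmotion pipeline itself.

```python
# Illustrative contextual emotion classification on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(0, 4, n),      # traffic density level (hypothetical feature)
    rng.uniform(0, 120, n),     # vehicle speed in km/h
    rng.integers(0, 2, n),      # passenger present (0/1)
    rng.integers(0, 3, n),      # road type: city / rural / highway
])
y = rng.choice(["neutral", "happy", "angry"], size=n)   # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # near chance on random labels
```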
T. Kosch, A. Schmidt, S. Thanheiser, and L. L. Chuang, “One Does Not Simply RSVP: Mental Workload to Select Speed Reading Parameters Using Electroencephalography,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, ACM, 2020, pp. 637:1-637:13. doi:
10.1145/3313831.3376766.
Abstract
Rapid Serial Visual Presentation (RSVP) has gained popularity as a method for presenting text on wearable devices with limited screen space. Nonetheless, it remains unclear how to calibrate RSVP display parameters, such as spatial alignments or presentation rates, to suit the reader’s information processing ability at high presentation speeds. Existing methods rely on comprehension and subjective workload scores, which are influenced by the user’s knowledge base and subjective perception. Here, we use electroencephalography (EEG) to directly determine how individual information processing varies with changes in RSVP display parameters. Eighteen participants read text excerpts with RSVP in a repeated-measures design that manipulated the Text Alignment and Presentation Speed of text representation. We evaluated how predictive EEG metrics were of gains in reading speed, subjective workload, and text comprehension. We found significant correlations between EEG and increasing Presentation Speeds and propose how EEG can be used for dynamic selection of RSVP parameters.
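For readers unfamiliar with RSVP, the presentation-rate parameter translates directly into a per-word display duration, as in this small sketch; the rate shown is an arbitrary example, not a condition from the study.

```python
# Minimal RSVP scheduling sketch: one word at a time at a fixed presentation rate.

def rsvp_schedule(text: str, words_per_minute: float):
    """Yield (word, display_duration_in_seconds) pairs for an RSVP stream."""
    duration = 60.0 / words_per_minute
    for word in text.split():
        yield word, duration

for word, dur in rsvp_schedule("Rapid serial visual presentation", 300):
    print(f"{word:<12} shown for {dur * 1000:.0f} ms")   # 200 ms per word at 300 wpm
```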
P. Balestrucci
et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in
2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV), IEEE, 2020, pp. 11–18. doi:
10.1109/BELIV51497.2020.00009.
Abstract
Among the many changes brought about by the COVID-19 pandemic, one of the most pressing for scientific research concerns user testing. For the researchers who conduct studies with human participants, the requirements for social distancing have created a need for reflecting on methodologies that previously seemed relatively straightforward. It has become clear from the emerging literature on the topic and from first-hand experiences of researchers that the restrictions due to the pandemic affect every aspect of the research pipeline. The current paper offers an initial reflection on user-based research, drawing on the authors’ own experiences and on the results of a survey that was conducted among researchers in different disciplines, primarily the psychology, human-computer interaction (HCI), and visualization communities. While this sampling of researchers is by no means comprehensive, the multi-disciplinary approach and the consideration of different aspects of the research pipeline allow us to examine current and future challenges for user-based research. Through an exploration of these issues, this paper also invites others in the VIS community, as well as in the wider research community, to reflect on and discuss the ways in which the current crisis might also present new and previously unexplored opportunities.
F. Draxler, A. Labrie, A. Schmidt, and L. L. Chuang, “Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, ACM, 2020, pp. 410:1-410:12. doi:
10.1145/3313831.3376537.
Abstract
Augmented Reality (AR) provides a unique opportunity to situate learning content in one’s environment. In this work, we investigated how AR could be developed to provide an interactive context-based language learning experience. Specifically, we developed a novel handheld-AR app for learning case grammar by dynamically creating quizzes, based on real-life objects in the learner’s surroundings. We compared this to the experience of learning with a non-contextual app that presented the same quizzes with static photographic images. Participants found AR suitable for use in their everyday lives and enjoyed the interactive experience of exploring grammatical relationships in their surroundings. Nonetheless, Bayesian tests provide substantial evidence that the interactive and context-embedded AR app did not improve case grammar skills, vocabulary retention, or usability over the experience with equivalent static images. Based on this, we propose how language learning apps could be designed to combine the benefits of contextual AR and traditional approaches.
U. Ju, L. L. Chuang, and C. Wallraven, “Acoustic Cues Increase Situational Awareness in Accident Situations: A VR Car-Driving Study,”
IEEE Transactions on Intelligent Transportation Systems, pp. 1–11, 2020, doi:
10.1109/TITS.2020.3035374.
Abstract
Our work for the first time evaluates the effectiveness of visual and acoustic warning systems in an accident situation using a realistic, immersive driving simulation. In a first experiment, 70 participants were trained to complete a course at high speed. The course contained several forks where a wrong turn would lead to the car falling off a cliff and crashing - these forks were indicated either with a visual warning sign for a first, no-sound group or with a visual and auditory warning cue for a second, sound group. In a testing phase, right after the warning signals were given, trees suddenly fell on the road, leaving the (fatal) turn open. Importantly, in the no-sound group, 18 out of 35 people still chose this turn, whereas in the sound group only 5 out of 35 people did so - the added sound therefore led to a large and significant increase in situational awareness. We found no other differences between the groups concerning age, physiological responses, or driving experience. In a second replication experiment, the setup was repeated with another 70 participants without emphasis on driving speed. Results fully confirmed the previous findings with 17 out of 35 people in the no-sound group versus only 6 out of 35 in the sound group choosing the turn to the cliff. With these two experiments using a one-shot design to avoid pre-meditation and testing naïve, rapid decision-making, we provide clear evidence for the advantage of visual-auditory in-vehicle warning systems for promoting situational awareness.
T. Munz, L. L. Chuang, S. Pannasch, and D. Weiskopf, “VisME: Visual microsaccades explorer,”
Journal of Eye Movement Research, vol. 12, no. 6, Art. no. 6, Dec. 2019, doi:
10.16910/jemr.12.6.5.
Abstract
This work presents a visual analytics approach to explore microsaccade distributions in high-frequency eye tracking data. Research studies often apply filter algorithms and parameter values for microsaccade detection. Even when the same algorithms are employed, different parameter values might be adopted across different studies. In this paper, we present a visual analytics system (VisME) to promote reproducibility in the data analysis of microsaccades. It allows users to interactively vary the parametric values for microsaccade filters and evaluate the resulting influence on microsaccade behavior across individuals and on a group level. In particular, we exploit brushing-and-linking techniques that allow the microsaccadic properties of space, time, and movement direction to be extracted, visualized, and compared across multiple views. We demonstrate in a case study the use of our visual analytics system on data sets collected from natural scene viewing and show in a qualitative usability study the usefulness of this approach for eye tracking researchers. We believe that interactive tools such as VisME will promote greater transparency in eye movement research by providing researchers with the ability to easily understand complex eye tracking data sets; such tools can also serve as teaching systems. VisME is provided as open source software.
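As a rough illustration of the kind of filter parameters VisME exposes, the following is a simplified velocity-threshold microsaccade detector in the spirit of common algorithms (e.g., Engbert and Kliegl); it is a generic sketch and not VisME's implementation.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_dur_ms=6.0):
    """Return (start, end) sample indices of candidate microsaccades.

    lam (velocity-threshold multiplier) and min_dur_ms are the kinds of
    parameters whose variation VisME lets analysts explore interactively.
    """
    vx = np.gradient(x) * fs                            # horizontal velocity
    vy = np.gradient(y) * fs                            # vertical velocity
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)   # robust velocity spread
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    min_len = int(min_dur_ms / 1000 * fs)
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    return events
```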
T. M. Benz, B. Riedl, and L. L. Chuang, “Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator,” in
Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), C. P. Janssen, S. F. Donker, L. L. Chuang, and W. Ju, Eds., ACM, 2019, pp. 379–387. doi:
10.1145/3342197.3344515.
Abstract
Driving simulators are necessary for evaluating automotive technology for human users. While they can vary in terms of their fidelity, it is essential that users experience minimal simulator sickness and high presence in them. In this paper, we present two experiments that investigate how a virtual driving simulation system could be visually presented within a real vehicle, which moves on a test track but displays a virtual environment. Specifically, we contrasted display presentation of the simulation using either head-mounted displays (HMDs) or fixed displays in the vehicle itself. Overall, we find that fixed displays induced less simulator sickness than HMDs. Neither HMDs nor fixed displays induced a stronger presence in our implementation, even when the field of view of the fixed display was extended. We discuss the implications of this, particularly in the context of scenarios that could induce considerable motion sickness, such as testing non-driving activities in automated vehicles.
C. Glatz, S. S. Krupenia, H. H. Bülthoff, and L. L. Chuang, “Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., ACM, 2018, pp. 472:1-472:13. doi:
10.1145/3173574.3174046.
Abstract
Design recommendations for notifications are typically based on user performance and subjective feedback. In comparison, there has been surprisingly little research on how designed notifications might be processed by the brain for the information they convey. The current study uses EEG/ERP methods to evaluate auditory notifications that were designed to cue long-distance truck drivers for task-management and driving conditions, particularly for automated driving scenarios. Two experiments separately evaluated naive students and professional truck drivers for their behavioral and brain responses to auditory notifications, which were either auditory icons or verbal commands. Our EEG/ERP results suggest that verbal commands were more readily recognized by the brain as relevant targets, but that auditory icons were more likely to update contextual working memory. The two classes of notifications did not differ on behavioral measures. This suggests that auditory icons ought to be employed for communicating contextual information, and verbal commands for urgent requests.
M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory Task Irrelevance: A Basis for Inattentional Deafness,”
Human Factors, vol. 60, no. 3, Art. no. 3, 2018, doi:
10.1177/0018720818760919.
Abstract
This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality.
Humans can fail to respond to auditory alarms in high-workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings.
K. Hänsel, R. Poguntke, H. Haddadi, A. Alomainy, and A. Schmidt, “What to Put on the User: Sensing Technologies for Studies and Physiology Aware Systems,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., ACM, 2018, pp. 145:1-145:14. doi:
10.1145/3173574.3173719.
Abstract
Thanks to affordable sensing technologies, fitness trackers not only provide an easy means to acquire physiological data in real-world environments, they also offer opportunities for physiology-aware applications and studies in HCI; however, their performance is not well understood. In this paper, we report findings on the quality of three sensing technologies: PPG-based wrist trackers (Apple Watch, Microsoft Band 2), an ECG belt (Polar H7), and a reference device with stick-on ECG electrodes (Nexus 10). We collected physiological (heart rate, electrodermal activity, skin temperature) and subjective data from 21 participants performing combinations of physical activity and stressful tasks. Our empirical research indicates that wrist devices provide good sensing performance in stationary settings. However, they lack accuracy when participants are mobile or if tasks require physical activity. Based on our findings, we suggest a Design Space for Wearables in Research Settings and reflect on the appropriateness of the investigated technologies in research contexts.
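A small sketch of how such sensing quality can be quantified against the ECG reference; the numbers are synthetic, and only the metrics (mean absolute error, Bland-Altman bias and limits of agreement) are standard.

```python
import numpy as np

ecg_hr   = np.array([72, 75, 88, 95, 110, 124, 98, 80], dtype=float)   # reference
wrist_hr = np.array([71, 77, 85, 99, 102, 115, 96, 82], dtype=float)   # wrist tracker

mae  = np.mean(np.abs(wrist_hr - ecg_hr))      # mean absolute error
diff = wrist_hr - ecg_hr
bias = diff.mean()                             # systematic over/underestimation
loa  = 1.96 * diff.std(ddof=1)                 # 95% limits of agreement

print(f"MAE: {mae:.1f} bpm, bias: {bias:.1f} bpm, LoA: ±{loa:.1f} bpm")
```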
T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly,”
Proceedings of the ACM on Human-Computer Interaction (ACMHCI), vol. 2, pp. 11:1-11:20, 2018, doi:
10.1145/3229093.
Abstract
Manual assembly in production is a mentally demanding task. With rapid prototyping and smaller production lot sizes, this results in frequent changes of assembly instructions that have to be memorized by workers. Assistive systems compensate for this increase in mental workload by providing "just-in-time" assembly instructions through in-situ projections. The implementation of such systems and their benefits for reducing mental workload have previously been justified with self-perceived ratings. However, there is no objective evidence of whether in-situ assistance reduces mental workload. In our work, we showcase electroencephalography (EEG) as a complementary evaluation tool to assess the cognitive workload placed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual EEG bandwidth that varied with changes in working memory load. We show that changes in this EEG bandwidth are found between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. Our work contributes by demonstrating how design claims of cognitive demand can be validated. Moreover, it directly evaluates the use of assistive systems for delivering context-aware information. We analyze the characteristics of EEG as a real-time assessment of cognitive workload to provide insights regarding the mental demand placed by assistive systems.
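The kind of frequency-band analysis behind such workload-sensitive EEG measures can be sketched as follows; the band limits and synthetic signal are illustrative, not the paper's individually calibrated bandwidth.

```python
import numpy as np
from scipy.signal import welch

fs = 250                                         # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)   # synthetic channel

def band_power(signal, fs, low, high):
    """Approximate band power from the Welch power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

theta = band_power(eeg, fs, 4, 8)                # bands often linked to workload
alpha = band_power(eeg, fs, 8, 13)
print(f"theta power: {theta:.3f}, alpha power: {alpha:.3f}")
```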
Y. Abdelrahman, P. Knierim, P. W. Wozniak, N. Henze, and A. Schmidt, “See Through the Fire: Evaluating the Augmentation of Visual Perception of Firefighters Using Depth and Thermal Cameras,” in
Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and Symposium on Wearable Computers (UbiComp/ISWC), S. C. Lee, L. Takayama, and K. N. Truong, Eds., ACM, 2017, pp. 693–696. doi:
10.1145/3123024.3129269.
Abstract
Our visual perception is limited to the abilities of our eyes: we only perceive visible light. This limitation influences how we perceive and react to our surroundings and might endanger us in certain scenarios, e.g., firefighting. In this paper, we explore the potential of augmenting the visual sensing of firefighters using depth and thermal imaging to increase their awareness of the environment. Additionally, we built and evaluated two form factors: a handheld device and a head-mounted display. To evaluate our prototypes, we conducted two user studies in a simulated fire environment with real firefighters. In this workshop paper, we present our findings from the evaluation of the concept and prototypes with real firefighters.
L. L. Chuang, C. Glatz, and S. S. Krupenia, “Using EEG to Understand why Behavior to Auditory In-vehicle Notifications Differs Across Test Environments,” in
Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), S. Boll, B. Pfleging, B. Donmez, I. Politis, and D. R. Large, Eds., ACM, 2017, pp. 123–133. doi:
10.1145/3122986.3123017.
Abstract
In this study, we employ EEG methods to clarify why auditory notifications, which were designed for task management in highly automated trucks, resulted in different performance behavior when deployed in two different test settings: (a) student volunteers in a lab environment, and (b) professional truck drivers in a realistic vehicle simulator. Behavioral data showed that professional drivers were slower and less sensitive in identifying notifications compared to their counterparts. Such differences can be difficult to interpret and frustrate the deployment of implementations from the laboratory to more realistic settings. Our EEG recordings of brain activity reveal that these differences were not due to differences in the detection and recognition of the notifications. Instead, they were due to differences in EEG activity associated with response generation. Thus, we show how measuring brain activity can deliver insights into how notifications are processed, at a finer granularity than can be afforded by behavior alone.
J. Karolus, P. W. Wozniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds., ACM, 2017, pp. 2998–3010. doi:
10.1145/3025453.3025601.
Abstract
We are often confronted with information interfaces designed in an unfamiliar language, especially in an increasingly globalized world, where the language barrier inhibits interaction with the system. In our work, we explore the design space for building interfaces that can detect the user's language proficiency. Specifically, we look at how a user's gaze properties can be used to detect whether the interface is presented in a language they understand. We report a study (N=21) where participants were presented with questions in multiple languages, whilst being recorded for gaze behavior. We identified fixation and blink durations to be effective indicators of the participants' language proficiencies. Based on these findings, we propose a classification scheme and technical guidelines for enabling language proficiency awareness on information displays using gaze data.
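As a toy illustration of using the identified gaze features for classification, the following fits a classifier on synthetic fixation and blink durations; the values, labels, and model choice are assumptions, not the study's classification scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [mean fixation duration (ms), mean blink duration (ms)]
X = np.array([[210, 110], [230, 120], [220, 105], [205, 115],     # familiar language
              [340, 180], [360, 190], [330, 175], [355, 185]])    # unfamiliar language
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = proficient, 0 = not proficient

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[225, 112], [345, 182]]))   # expected [1 0] on this synthetic data
```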
T. Dingler, A. Schmidt, and T. Machulla, “Building Cognition-Aware Systems: A Mobile Toolkit for Extracting Time-of-Day Fluctuations of Cognitive Performance,”
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1, no. 3, Art. no. 3, 2017, doi:
10.1145/3132025.
Abstract
People’s alertness fluctuates across the day: at some times we are highly focused while at others we feel unable to concentrate. So far, extracting fluctuation patterns has been time and cost-intensive. Using an in-the-wild approach with 12 participants, we evaluated three cognitive tasks regarding their adequacy as a mobile and economical assessment tool of diurnal changes in mental performance. Participants completed the five-minute test battery on their smartphones multiple times a day for a period of 1-2 weeks. Our results show that people’s circadian rhythm can be obtained under unregulated non-laboratory conditions. Along with this validation study, we release our test battery as an open source library for future work towards cognition-aware systems as well as a tool for psychological and medical research. We discuss ways of integrating the toolkit and possibilities for implicitly measuring performance variations in common applications. The ability to detect systematic patterns in alertness levels will allow cognition-aware systems to provide in-situ assistance in accordance with users’ current cognitive capabilities and limitations.
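A minimal sketch of how time-of-day fluctuations could be extracted from logged test results; the column names and sample values are assumptions, not the toolkit's actual schema.

```python
import pandas as pd

log = pd.DataFrame({
    "timestamp": pd.to_datetime(["2017-05-02 08:10", "2017-05-02 13:30",
                                 "2017-05-02 20:45", "2017-05-03 08:05",
                                 "2017-05-03 13:40", "2017-05-03 21:00"]),
    "reaction_time_ms": [412, 385, 430, 405, 378, 441],
})

log["hour"] = log["timestamp"].dt.hour
diurnal = log.groupby("hour")["reaction_time_ms"].mean()
print(diurnal)          # mean reaction time per hour of day
```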
J. Allsop, R. Gray, H. Bülthoff, and L. Chuang, “Eye Movement Planning on Single-Sensor-Single-Indicator Displays is Vulnerable to User Anxiety and Cognitive Load,”
Journal of Eye Movement Research, vol. 10, no. 5, Art. no. 5, 2017, doi:
10.16910/jemr.10.5.8.
Abstract
In this study, we demonstrate the effects of anxiety and cognitive load on eye movement planning in an instrument flight task adhering to a single-sensor-single-indicator data visualisation design philosophy. The task was performed in neutral and anxiety conditions, while a low or high cognitive load, auditory n-back task was also performed. Cognitive load led to a reduction in the number of transitions between instruments, and impaired task performance. Changes in self-reported anxiety between the neutral and anxiety conditions positively correlated with changes in the randomness of eye movements between instruments, but only when cognitive load was high. Taken together, the results suggest that both cognitive load and anxiety impact gaze behavior, and that these effects should be explored when designing data visualization displays.
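One common way to quantify the "randomness" of eye movements between instruments is the Shannon entropy of the transition matrix over areas of interest, sketched below; the scanpath is a made-up example, and the paper's exact randomness measure may differ.

```python
import numpy as np

def transition_entropy(aoi_sequence, n_aois):
    """Entropy (bits) of first-order transitions between areas of interest."""
    counts = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1
    probs = counts / counts.sum()
    nonzero = probs[probs > 0]
    return -np.sum(nonzero * np.log2(nonzero))

scanpath = [0, 1, 0, 2, 1, 0, 3, 0, 1, 2]      # sequence of fixated instrument indices
print(f"transition entropy: {transition_entropy(scanpath, 4):.2f} bits")
```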
B. Pfleging, D. K. Fekety, A. Schmidt, and A. L. Kun, “A Model Relating Pupil Diameter to Mental Workload and Lighting Conditions,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds., ACM, 2016, pp. 5776–5788. doi:
10.1145/2858036.2858117.
Abstract
In this paper, we present a proof-of-concept approach to estimating mental workload by measuring the user's pupil diameter under various controlled lighting conditions. Knowing the user's mental workload is desirable for many application scenarios, ranging from driving a car, to adaptive workplace setups. Typically, physiological sensors allow inferring mental workload, but these sensors might be rather uncomfortable to wear. Measuring pupil diameter through remote eye-tracking instead is an unobtrusive method. However, a practical eye-tracking-based system must also account for pupil changes due to variable lighting conditions. Based on the results of a study with tasks of varying mental demand and six different lighting conditions, we built a simple model that is able to infer the workload independently of the lighting condition in 75% of the tested conditions.
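The modelling idea can be sketched as a linear model of pupil diameter in workload and illuminance that is inverted once the lighting level is known; the coefficients and calibration data below are synthetic, and the paper's actual model may differ.

```python
import numpy as np

# Calibration data: columns = [workload level, illuminance (lux)]; target = pupil (mm)
X = np.array([[1, 50], [2, 50], [3, 50], [1, 300], [2, 300], [3, 300]], dtype=float)
d = np.array([4.6, 4.9, 5.2, 3.1, 3.4, 3.7])

# Fit d = b0 + b1 * workload + b2 * lux with ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
b0, b1, b2 = np.linalg.lstsq(A, d, rcond=None)[0]

def estimate_workload(pupil_mm, lux):
    """Invert the linear model for a known lighting condition."""
    return (pupil_mm - b0 - b2 * lux) / b1

print(round(estimate_workload(3.4, 300), 2))   # -> 2.0 on this synthetic data
```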
M. Greis, P. El.Agroudy, H. Schuff, T. Machulla, and A. Schmidt, “Decision-Making under Uncertainty: How the Amount of Presented Uncertainty Influences User Behavior,” in
Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI), ACM, 2016. doi:
10.1145/2971485.2971535.
Abstract
In everyday life, people regularly make decisions based on uncertain data, e.g., when using a navigation device or looking at the weather forecast. In our work, we compare four representations that communicate different amounts of uncertainty information to the user. We compared them in a study by publishing a web-based game on Facebook. In total, 44 users played 991 turns. We analyzed the turns by logging game metrics such as the gain per turn and included a survey element. The results show that an abundance of uncertainty information leads to taking unnecessary risks. However, representations with aggregated detailed uncertainty provide a good trade-off between being understandable by the players and encouraging medium risks with high gains. The absence of uncertainty information reduces risk-taking and leads to more won turns, but with the lowest monetary gain.