A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of Vection Latencies in the Full-Body Illusion,”
PLoS ONE, vol. 13, no. 12, Art. no. 12, 2018, doi:
10.1371/journal.pone.0209189.
Abstract
Current neuroscientific models of bodily self-consciousness (BSC) argue that inaccurate integration of sensory signals leads to altered states of BSC. Indeed, using virtual reality technology, observers viewing a fake or virtual body while being exposed to tactile stimulation of the real body can experience illusory ownership over, and mislocalization towards, the virtual body (Full-Body Illusion, FBI). Among the sensory inputs contributing to BSC, the vestibular system is believed to play a central role due to its importance in estimating self-motion and orientation. This theory is supported by clinical evidence that vestibular loss patients are more prone to altered BSC states, and by recent experimental evidence that visuo-vestibular conflicts can disrupt BSC in healthy individuals. Nevertheless, the contribution of vestibular information and self-motion perception to BSC remains largely unexplored. Here, we investigate the relationship between alterations of BSC and self-motion sensitivity in healthy individuals. Fifteen participants were exposed to visuo-vibrotactile conflicts designed to induce an FBI, and subsequently to visual rotations that evoked illusory self-motion (vection). We found that synchronous visuo-vibrotactile stimulation successfully induced the FBI, and further observed a relationship between the strength of the FBI and the time necessary for complete vection to arise. Specifically, higher self-reported FBI scores across synchronous and asynchronous conditions were associated with shorter vection latencies. Our findings are in agreement with clinical observations that vestibular loss patients have higher FBI susceptibility and lower vection latencies, and argue for increased visual over vestibular dependency during altered states of BSC.
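The link reported above between FBI strength and vection latency is a between-participant correlation. A minimal sketch of that kind of analysis, using entirely hypothetical FBI scores and latencies rather than the study's data, could look like this:

```python
# A minimal sketch (not the authors' analysis code): correlating hypothetical
# per-participant FBI questionnaire scores with vection onset latencies.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: one FBI score and one vection latency (s) per participant.
fbi_scores = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 1.5, 2.6])
vection_latencies = np.array([14.2, 9.8, 16.5, 7.1, 11.0, 8.4, 18.3, 12.7])

r, p = pearsonr(fbi_scores, vection_latencies)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r would mirror the reported pattern
```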
C. Glatz and L. L. Chuang, “The Time Course of Auditory Looming Cues in Redirecting Visuo-Spatial Attention,”
Scientific Reports, vol. 9, pp. 743:1-743:10, 2018, doi:
10.1038/s41598-018-36033-8.
Abstract
By orienting attention, auditory cues can improve the discrimination of spatially congruent visual targets. Looming sounds that increase in intensity are processed preferentially by the brain. Thus, we investigated whether auditory looming cues can orient visuo-spatial attention more effectively than static and receding sounds. Specifically, different auditory cues could redirect attention away from a continuous central visuo-motor tracking task to peripheral visual targets that appeared occasionally. To investigate the time course of crossmodal cuing, Experiment 1 presented visual targets at different time-points across a 500 ms auditory cue’s presentation. No benefits were found for simultaneous audio-visual cue-target presentation. The largest crossmodal benefit occurred at an early cue-target onset asynchrony (CTOA = 250 ms), regardless of auditory cue type, and diminished at CTOA = 500 ms for static and receding cues. However, auditory looming cues showed a late crossmodal cuing benefit at CTOA = 500 ms. Experiment 2 showed that this late auditory looming cue benefit was independent of the cue’s intensity when the visual target appeared. Thus, we conclude that the late crossmodal benefit throughout an auditory looming cue’s presentation is due to its increasing intensity profile. The neural basis for this benefit and its ecological implications are discussed.
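Looming, static, and receding cues of this kind differ only in their intensity profile over the cue's duration. A minimal sketch of how such envelopes might be generated, with an assumed carrier frequency and illustrative amplitude values rather than the paper's stimulus parameters, is shown below:

```python
# A minimal sketch, assuming simple linear amplitude envelopes: synthesizing
# 500 ms looming (rising), static (flat), and receding (falling) tone cues.
# Values are illustrative, not the stimulus parameters used in the paper.
import numpy as np

fs = 44100                      # sampling rate (Hz)
dur = 0.5                       # cue duration (s), as in Experiment 1
t = np.linspace(0, dur, int(fs * dur), endpoint=False)
carrier = np.sin(2 * np.pi * 440 * t)   # hypothetical 440 Hz carrier

envelopes = {
    "looming":  np.linspace(0.1, 1.0, t.size),   # intensity increases over time
    "static":   np.full(t.size, 0.55),           # constant intensity
    "receding": np.linspace(1.0, 0.1, t.size),   # intensity decreases over time
}
cues = {name: env * carrier for name, env in envelopes.items()}
```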
M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory Task Irrelevance: A Basis for Inattentional Deafness,”
Human Factors, vol. 60, no. 3, Art. no. 3, 2018, doi:
10.1177/0018720818760919.
Abstract
This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality.
Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings.
S. S. Borojeni, S. C. J. Boll, W. Heuten, H. H. Bülthoff, and L. L. Chuang, “Feel the Movement: Real Motion Influences Responses to Take-Over Requests in Highly Automated Vehicles,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds. ACM, 2018, pp. 246:1-246:13. doi:
10.1145/3173574.3173820.
Abstract
Take-over requests (TORs) in highly automated vehicles are cues that prompt users to resume control. TORs, however, are often evaluated in non-moving driving simulators. This ignores the role of motion, an important source of information for users who have their eyes off the road while engaged in non-driving related tasks. We ran a user study in a moving-base driving simulator to investigate the effect of motion on TOR responses. We found that with motion, user responses to TORs vary depending on the road context where TORs are issued. While previous work showed that participants are fast to respond to urgent cues, we show that this is true only when TORs are presented on straight roads. Urgent cues issued on curved roads elicit slower responses than non-urgent cues on curved roads. Our findings indicate that TORs should be designed to be aware of road context to accommodate natural user responses.
T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly,”
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), vol. 2, pp. 11:1-11:20, 2018, doi:
10.1145/3229093.
Abstract
Manual assembly at production is a mentally demanding task. With rapid prototyping and smaller production lot sizes, this results in frequent changes of assembly instructions that have to be memorized by workers. Assistive systems compensate for this increase in mental workload by providing "just-in-time" assembly instructions through in-situ projections. The implementation of such systems and their benefits to reducing mental workload have previously been justified with self-perceived ratings. However, there is no evidence from objective measures that mental workload is reduced by in-situ assistance. In our work, we showcase electroencephalography (EEG) as a complementary evaluation tool to assess the cognitive workload placed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual EEG bandwidth that varied with changes in working memory load. We show that changes in the EEG bandwidth are found between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. Our work contributes by demonstrating how design claims of cognitive demand can be validated. Moreover, it directly evaluates the use of assistive systems for delivering context-aware information. We analyze the characteristics of EEG as a real-time assessment of cognitive workload to provide insights regarding the mental demand placed by assistive systems.
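The workload comparison described above rests on estimating spectral power in an individually identified frequency band and contrasting it between instruction conditions. The following is a minimal sketch of such a band-power comparison, assuming Welch's method, simulated data, and an arbitrary band; it is not the authors' pipeline:

```python
# A minimal sketch, not the authors' pipeline: estimating power in an
# individually chosen frequency band with Welch's method and comparing the
# two instruction conditions. Sampling rate, band limits, and data are assumed.
import numpy as np
from scipy.signal import welch

fs = 250                        # EEG sampling rate (Hz), assumed
band = (4.0, 7.0)               # hypothetical individual band (e.g., theta)

def band_power(eeg_segment, fs, band):
    """Mean spectral power of a 1-D EEG segment within a frequency band."""
    freqs, psd = welch(eeg_segment, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Hypothetical segments recorded during assembly with each assistance type.
rng = np.random.default_rng(0)
paper_eeg = rng.standard_normal(fs * 60)        # 60 s of simulated data
projection_eeg = rng.standard_normal(fs * 60)

print("paper:", band_power(paper_eeg, fs, band))
print("in-situ:", band_power(projection_eeg, fs, band))
```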
L. L. Chuang, C. Glatz, and S. S. Krupenia, “Using EEG to Understand why Behavior to Auditory In-vehicle Notifications Differs Across Test Environments,” in
Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), S. Boll, B. Pfleging, B. Donmez, I. Politis, and D. R. Large, Eds. ACM, 2017, pp. 123–133. doi:
10.1145/3122986.3123017.
Abstract
In this study, we employ EEG methods to clarify why auditory notifications, which were designed for task management in highly automated trucks, resulted in different performance behavior when deployed in two different test settings: (a) student volunteers in a lab environment, and (b) professional truck drivers in a realistic vehicle simulator. Behavioral data showed that professional drivers were slower and less sensitive in identifying notifications compared to their counterparts. Such differences can be difficult to interpret and frustrate the deployment of implementations from the laboratory to more realistic settings. Our EEG recordings of brain activity reveal that these differences were not due to differences in the detection and recognition of the notifications. Instead, they were due to differences in EEG activity associated with response generation. Thus, we show how measuring brain activity can deliver insights into how notifications are processed, at a finer granularity than can be afforded by behavior alone.
V. Schwind, P. Knierim, L. L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in
Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY), B. A. M. Schouten, P. Markopoulos, Z. O. Toups, P. A. Cairns, and T. Bekker, Eds. ACM, 2017, pp. 507–515. doi:
10.1145/3116595.3116596.
Abstract
The hands of one's avatar are possibly the most visible aspect when interacting in virtual reality (VR). As video games in VR proliferate, it is important to understand how the appearance of avatar hands influences the user experience. Designers of video games often stylize hands and reduce the number of fingers of game characters. Previous work shows that the appearance of avatar hands has significant effects on the user's presence, the feeling of 'being' and 'acting' in VR. However, little is known about the effects of missing fingers of an avatar in VR. In this paper, we present a study (N=24) that investigated the effect of hand representations by parametrically varying the number of fingers of abstract and realistically rendered hands. We show that decreasing the number of fingers of realistic hands leads to significantly lower levels of presence, which is not the case for abstract hands. Qualitative feedback collected through think-aloud and video revealed potential reasons for the different assessment of realistic and abstract hands with fewer fingers in VR. We contribute design implications and recommend considering human-likeness when a reduction of the number of fingers of avatar hands is desired.
K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural Correlates of Decision Making on Whole Body Yaw Rotation: an fNIRS Study,”
Neuroscience Letters, vol. 654, pp. 56–62, 2017, doi:
10.1016/j.neulet.2017.04.053.
Abstract
Prominent accounts of decision making state that decisions are made on the basis of an accumulation of sensory evidence, orchestrated by networks of prefrontal and parietal neural populations. Here we assess whether these findings generalize to decisions on self-motion. Participants were presented with whole body yaw rotations of different durations in a 2-Interval-Forced-Choice paradigm, and tasked to discriminate motions on the basis of their amplitude. The cortical hemodynamic response was recorded using functional near-infrared spectroscopy (fNIRS) while participants were performing the task. The imaging data was used to predict the specific response on individual experimental trials, and to predict whether the comparison stimulus would be judged larger than the reference. Classifier performance on the former variable was negligible. However, considerable performance was achieved for the latter variable, specifically using parietal imaging data. The findings provide support for the notion that activity in the parietal cortex reflects modality independent decision variables that represent the strength of the neural evidence in favor of a decision. The results are encouraging for the use of fNIRS as a method to perform neuroimaging in moving individuals.
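The classification analysis described above can be approximated, in spirit, by cross-validated decoding of trial-wise fNIRS features. The sketch below assumes simulated features, a linear SVM, and arbitrary data shapes; none of these reflect the published pipeline:

```python
# A minimal sketch under assumed data shapes (not the authors' pipeline):
# cross-validated classification of "comparison judged larger vs. smaller"
# from trial-wise parietal fNIRS features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 16))   # 120 trials x 16 hypothetical channel features
y = rng.integers(0, 2, size=120)     # 1 = comparison judged larger than reference

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```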
A. Nesti, K. de Winkel, and H. Bülthoff, “Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation,”
PloS ONE, vol. 12, no. 1, Art. no. 1, 2017, doi:
10.1371/journal.pone.0170497.
Abstract
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing for completing crucial tasks such as maintaining balance. However, little is known on how the duration of the motion stimuli influences our performances in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performances. Observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
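A leaky accumulator of the kind invoked in the last sentence integrates noisy evidence while continuously losing a fraction of what it has accumulated, so that longer stimuli yield better but asymptotically limited estimates. A minimal simulation sketch, with illustrative parameters rather than the fitted model values, is given below:

```python
# A minimal sketch of leaky integration of sensory evidence, in the spirit of
# the drift diffusion model mentioned above; parameters are illustrative.
import numpy as np

def leaky_accumulation(drift, leak, noise_sd, dt=0.01, duration=2.0, seed=0):
    """Integrate noisy evidence with leak: dx = (drift - leak*x)*dt + noise."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    x = np.zeros(n)
    for i in range(1, n):
        dx = (drift - leak * x[i - 1]) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        x[i] = x[i - 1] + dx
    return x

# Longer stimuli let the accumulator approach its asymptote (drift/leak),
# which is one way to read the plateau in differential thresholds.
trace = leaky_accumulation(drift=1.0, leak=0.5, noise_sd=0.3, duration=5.0)
print(trace[-1])
```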
J. Allsop, R. Gray, H. Bülthoff, and L. Chuang, “Eye Movement Planning on Single-Sensor-Single-Indicator Displays is Vulnerable to User Anxiety and Cognitive Load,”
Journal of Eye Movement Research, vol. 10, no. 5, Art. no. 5, 2017, doi:
10.16910/jemr.10.5.8.
Abstract
In this study, we demonstrate the effects of anxiety and cognitive load on eye movement planning in an instrument flight task adhering to a single-sensor-single-indicator data visualisation design philosophy. The task was performed in neutral and anxiety conditions, while a low or high cognitive load, auditory n-back task was also performed. Cognitive load led to a reduction in the number of transitions between instruments, and impaired task performance. Changes in self-reported anxiety between the neutral and anxiety conditions positively correlated with changes in the randomness of eye movements between instruments, but only when cognitive load was high. Taken together, the results suggest that both cognitive load and anxiety impact gaze behavior, and that these effects should be explored when designing data visualization displays.
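The "randomness of eye movements between instruments" can be operationalized as the entropy of gaze transitions between areas of interest. A minimal sketch of one such measure, computed on a hypothetical AOI sequence rather than the study's recordings, follows:

```python
# A minimal sketch (assumed AOI labels, not the study's data): quantifying the
# randomness of gaze transitions between instruments as the entropy of the
# first-order transition matrix, weighted by how often each AOI is visited.
import numpy as np

def transition_entropy(aoi_sequence, n_aois):
    """Entropy (bits) of AOI-to-AOI transitions, weighted by source-AOI frequency."""
    counts = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1
    probs = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
    stationary = counts.sum(axis=1) / max(counts.sum(), 1)
    row_h = -np.nansum(np.where(probs > 0, probs * np.log2(np.where(probs > 0, probs, 1)), 0), axis=1)
    return float(np.dot(stationary, row_h))

fixations = [0, 1, 0, 2, 1, 3, 0, 1, 2, 0]   # hypothetical instrument indices
print(transition_entropy(fixations, n_aois=4))
```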
D. Weiskopf, M. Burch, L. L. Chuang, B. Fischer, and A. Schmidt,
Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016. doi:
10.1007/978-3-319-47024-5_7.
Abstract
This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative means of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Steering Demands Diminish the Early-P3, Late-P3 and RON Components of the Event-Related Potential of Task-Irrelevant Environmental Sounds,”
Frontiers in Human Neuroscience, vol. 10, pp. 73:1-73:15, 2016, doi:
10.3389/fnhum.2016.00073.
Abstract
The current study investigates the demands that steering places on mental resources. Instead of a conventional dual-task paradigm, participants of this study were only required to perform a steering task while task-irrelevant auditory distractor probes (environmental sounds and beep tones) were intermittently presented. The event-related potentials (ERPs), which were generated by these probes, were analyzed for their sensitivity to the steering task’s demands. The steering task required participants to counteract unpredictable roll disturbances and difficulty was manipulated either by adjusting the bandwidth of the roll disturbance or by varying the complexity of the control dynamics. A mass univariate analysis revealed that steering selectively diminishes the amplitudes of early P3, late P3, and the re-orientation negativity (RON) to task-irrelevant environmental sounds but not to beep tones. Our findings are in line with a three-stage distraction model, which interprets these ERPs to reflect the post-sensory detection of the task-irrelevant stimulus, engagement, and re-orientation back to the steering task. This interpretation is consistent with our manipulations for steering difficulty. More participants showed diminished amplitudes for these ERPs in the “hard” steering condition relative to the “easy” condition. To sum up, the current work identifies the spatiotemporal ERP components of task-irrelevant auditory probes that are sensitive to steering demands on mental resources. This provides a non-intrusive method for evaluating mental workload in novel steering environments.
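The ERP analysis summarized above depends on epoching the EEG around probe onsets, baseline-correcting, and averaging per condition before comparing amplitudes. A minimal single-channel sketch with simulated data and assumed epoch limits, not the published analysis, is shown below:

```python
# A minimal sketch (not the published analysis): epoching EEG around probe
# onsets and averaging to obtain an ERP whose amplitudes could then be
# compared between "easy" and "hard" steering conditions.
import numpy as np

def average_erp(eeg, onsets, fs, tmin=-0.1, tmax=0.6):
    """Average epochs of a single-channel recording, time-locked to stimulus onsets."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:
        seg = eeg[onset - pre: onset + post]
        seg = seg - seg[:pre].mean()          # baseline-correct on the pre-stimulus interval
        epochs.append(seg)
    return np.mean(epochs, axis=0)

fs = 500
rng = np.random.default_rng(2)
eeg = rng.standard_normal(fs * 120)                   # 2 min of simulated single-channel EEG
onsets = rng.integers(fs, eeg.size - fs, size=40)     # hypothetical probe onsets (samples)
erp = average_erp(eeg, onsets, fs)
```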
N. Flad, J. C. Ditz, A. Schmidt, H. H. Bülthoff, and L. L. Chuang, “Data-Driven Approaches to Unrestricted Gaze-Tracking Benefit from Saccade Filtering,” in
Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS), M. Burch, L. L. Chuang, and A. T. Duchowski, Eds. IEEE, 2016, pp. 1–5. doi:
10.1109/ETVIS.2016.7851156.
Abstract
Unrestricted gaze tracking that allows for head and body movements can enable us to understand interactive gaze behavior with large-scale visualizations. Approaches that support this, by simultaneously recording eye- and user-movements, can either be based on geometric or data-driven regression models. A data-driven approach can be implemented more flexibly but its performance can suffer with poor quality training data. In this paper, we introduce a pre-processing procedure to remove training data for periods when the gaze is not fixating the presented target stimuli. Our procedure is based on a velocity-based filter for rapid eye-movements (i.e., saccades). Our results show that this additional procedure improved the accuracy of our unrestricted gaze-tracking model by as much as 56%. Future improvements to data-driven approaches for unrestricted gaze-tracking are proposed, in order to allow for more complex dynamic visualizations.
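A velocity-based saccade filter of the kind described above marks samples whose angular velocity exceeds a threshold and excludes them from the training data. The sketch below assumes a 2-D gaze trace in degrees, an arbitrary sampling rate, and a commonly used 30 deg/s threshold; the paper's exact settings may differ:

```python
# A minimal sketch of a velocity-based saccade filter: samples whose angular
# velocity exceeds a threshold are dropped from the training data.
# Threshold and sampling rate are assumptions, not the paper's values.
import numpy as np

def remove_saccade_samples(gaze_xy_deg, fs, velocity_threshold=30.0):
    """Return a boolean mask that is True for samples below the velocity threshold (deg/s)."""
    velocity = np.linalg.norm(np.gradient(gaze_xy_deg, axis=0), axis=1) * fs
    return velocity < velocity_threshold

fs = 60                                                          # Hz, assumed tracker rate
rng = np.random.default_rng(3)
gaze = np.cumsum(rng.standard_normal((600, 2)) * 0.1, axis=0)    # simulated gaze trace (deg)
mask = remove_saccade_samples(gaze, fs)
training_samples = gaze[mask]                                    # kept for the regression model
```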
L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen Arrangements and Interaction Areas for Large Display Work Places,” in
Proceedings of the ACM International Symposium on Pervasive Displays (PerDis), vol. 5. ACM, 2016, pp. 228–234. doi:
10.1145/2914920.2915027.
Abstract
Size and resolution of computer screens are constantly increasing. Individual screens can easily be combined into wall-sized displays. This enables computer displays that are folded, straight, bow-shaped or even spread. As possibilities for arranging the screens are manifold, it is unclear what arrangements are appropriate. Moreover, it is unclear how content and applications should be arranged on such large displays. To determine guidelines for the arrangement of multiple screens and for content and application layouts, we conducted a design study. In the study, we asked 16 participants to arrange a large screen setup as well as to create layouts of multiple common application windows. Based on the results, we provide a classification for screen arrangements and interaction areas. We identified that screen space should be divided into a central area for interactive applications and peripheral areas, mainly for displaying additional content.
Abstract
Gaze-tracking technology is used increasingly to determine how and which information is accessed and processed in a given interface environment, such as in-vehicle information systems in automobiles. Typically, fixations on regions of interest (e.g., windshield, GPS) are treated as an indication that the underlying information has been attended to and is, thus, vital to the task. Therefore, decisions such as optimal instrument placement are often made on the basis of the distribution of recorded fixations. In this paper, we briefly introduce gaze-tracking methods for in-vehicle monitoring, followed by a discussion on the relationship between gaze and user-attention. We posit that gaze-tracking data can yield stronger insights on the utility of novel regions-of-interest if they are considered in terms of their deviation from basic gaze patterns. In addition, we suggest how EEG recordings could complement gaze-tracking data and raise outstanding challenges in its implementation. It is contended that gaze-tracking is a powerful tool for understanding how visual information is processed in a given environment, provided it is understood in the context of a model that first specifies the task that has to be carried out.
L. L. Chuang, “Error Visualization and Information-Seeking Behavior for Air-Vehicle Control,” in
Foundations of Augmented Cognition. AC 2015. Lecture Notes in Computer Science, vol. 9183, D. Schmorrow and C. M. Fidopiastis, Eds. Springer, 2015, pp. 3–11. doi:
10.1007/978-3-319-20816-9_1.
Abstract
A control schema for a human-machine system allows the human operator to be integrated as a mathematical description in a closed-loop control system, e.g., a pilot in an aircraft. Such an approach typically assumes that error feedback is perfectly communicated to the pilot, who is responsible for tracking a single flight variable. However, this is unlikely to be true in a flight simulator or a real flight environment. This paper discusses different aspects that pertain to error visualization and the pilot’s ability in seeking out relevant information across a range of flight variables.
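The closed-loop framing in this abstract treats the operator as a mathematical element that converts visualized error into control action. A minimal single-axis sketch, modeling the operator as a simple gain with a reaction delay (illustrative assumptions, not the paper's model), is given below:

```python
# A minimal sketch of the closed-loop idea above: the operator is modeled as a
# simple gain acting on visualized error, with an added reaction delay.
# All parameters are illustrative assumptions.
import numpy as np

def simulate_tracking(target, gain=0.8, delay_steps=12, dt=0.05):
    """Single-axis compensatory tracking: control input is delayed, scaled error."""
    state = np.zeros_like(target)
    for k in range(1, target.size):
        err_idx = max(k - delay_steps, 0)
        error = target[err_idx] - state[err_idx]   # error as seen delay_steps ago
        state[k] = state[k - 1] + gain * error * dt
    return state

t = np.arange(0, 30, 0.05)
target = np.sin(0.3 * t)            # hypothetical flight variable to track
output = simulate_tracking(target)
rms_error = np.sqrt(np.mean((target - output) ** 2))
```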
N. Flad, T. Fomina, H. H. Bülthoff, and L. L. Chuang, “Unsupervised Clustering of EOG as a Viable Substitute for Optical Eye Tracking,” in
Eye Tracking and Visualization: Foundations, Techniques, and Applications, M. Burch, L. L. Chuang, B. D. Fisher, A. Schmidt, and D. Weiskopf, Eds. Springer International Publishing, 2015, pp. 151–167. doi:
10.1007/978-3-319-47024-5_9.
Abstract
Eye-movements are typically measured with video cameras and image recognition algorithms. Unfortunately, these systems are susceptible to changes in illumination during measurements. Electrooculography (EOG) is another approach for measuring eye-movements that does not suffer from the same weakness. Here, we introduce and compare two methods that allow us to extract the dwells of our participants from EOG signals under presentation conditions that are too difficult for optical eye tracking. The first method is unsupervised and utilizes density-based clustering. The second method combines the optical eye-tracker’s methods to determine fixations and saccades with unsupervised clustering. Our results show that EOG can serve as a sufficiently precise and robust substitute for optical eye tracking, especially in studies with changing lighting conditions. Moreover, EOG can be recorded alongside electroencephalography (EEG) without additional effort.
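Density-based clustering lends itself to dwell extraction because samples within a dwell form dense groups in EOG amplitude space, while saccade samples are sparse. The sketch below uses DBSCAN on simulated EOG data with assumed parameters; it is not the chapter's implementation:

```python
# A minimal sketch of the unsupervised idea (not the chapter's implementation):
# density-based clustering of horizontal/vertical EOG amplitudes, so that
# samples within one dwell fall into one cluster and saccades become sparse
# in-between points. Data and DBSCAN parameters are assumed.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
# Simulated EOG: three dwell positions (in microvolts) with measurement noise.
dwells = np.repeat(np.array([[0, 0], [120, 30], [-80, 60]]), 200, axis=0)
eog = dwells + rng.normal(scale=8.0, size=dwells.shape)

labels = DBSCAN(eps=20.0, min_samples=20).fit_predict(eog)
# Each non-negative label corresponds to one dwell; -1 marks transition samples.
print(np.unique(labels))
```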