J. Karolus, H. Schuff, T. Kosch, P. W. Wozniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Proceedings of the Designing Interactive Systems Conference (DIS), I. Koskinen, Y.-K. Lim, T. C. Pargman, K. K. N. Chow, and W. Odom, Eds. ACM, 2018, pp. 651–655. doi: 10.1145/3196709.3196803.
Abstract
Mastering fine motor tasks, such as playing the guitar, takes years of time-consuming practice. Commonly, expensive guidance by experts is essential for adjusting the training program to the student's proficiency. In our work, we showcase the suitability of electromyography (EMG) for detecting fine-grained hand and finger postures in an exemplary guitar tutor scenario. We present EMGuitar, an interactive guitar tutoring system that assists students by reporting on play correctness and adjusts playback tempi automatically. We report person-dependent classification utilizing a ring of electrodes around the forearm, with an F1 score of up to 0.89 on recorded calibration data. Furthermore, our system was well received, neither diminishing ease of use nor being disruptive for the participants. Based on the comments we received, we identified the need for detailed play-accuracy feedback down to individual chords, for which we suggest an adapted visualization and an algorithmic approach.
T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses,” in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI), R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds. ACM, 2018, pp. 419:1–419:12. doi: 10.1145/3173574.3173993.
Abstract
In the era of ubiquitous computing, people expect applications to work across different devices. To provide a seamless user experience, it is therefore crucial that interfaces and interactions are consistent across different device types. In this paper, we present a method to create gesture sets that are consistent and easily transferable. Our proposed method entails 1) gesture elicitation on each device type, 2) the consolidation of a unified gesture set, and 3) a final validation by calculating a transferability score. We tested our approach by eliciting a set of user-defined gestures for reading with Rapid Serial Visual Presentation (RSVP) of text on three device types: phone, watch, and glasses. We present the resulting unified gesture set for RSVP reading and show the feasibility of our method for eliciting gesture sets that are consistent across device types with different form factors.
T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly,” Proceedings of the ACM on Human-Computer Interaction (ACMHCI), vol. 2, pp. 11:1–11:20, 2018. doi: 10.1145/3229093.
Abstract
Manual assembly in production is a mentally demanding task. With rapid prototyping and smaller production lot sizes, this results in frequent changes of assembly instructions that have to be memorized by workers. Assistive systems compensate for this increase in mental workload by providing "just-in-time" assembly instructions through in-situ projections. The implementation of such systems and their benefits in reducing mental workload have previously been justified with self-perceived ratings. However, there is no evidence from objective measures that mental workload is reduced by in-situ assistance. In our work, we showcase electroencephalography (EEG) as a complementary evaluation tool to assess the cognitive workload placed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual EEG bandwidth that varied with changes in working memory load. We show that changes in EEG bandwidth are found between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. Our work contributes by demonstrating how design claims about cognitive demand can be validated. Moreover, it directly evaluates the use of assistive systems for delivering context-aware information. We analyze the characteristics of EEG as a real-time assessment of cognitive workload to provide insights regarding the mental demand placed by assistive systems.
J. Karolus, P. W. Wozniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI), G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds. ACM, 2017, pp. 2998–3010. doi: 10.1145/3025453.3025601.
Abstract
We are often confronted with information interfaces designed in an unfamiliar language, especially in an increasingly globalized world, where the language barrier inhibits interaction with the system. In our work, we explore the design space for building interfaces that can detect the user's language proficiency. Specifically, we look at how a user's gaze properties can be used to detect whether the interface is presented in a language they understand. We report a study (N=21) where participants were presented with questions in multiple languages while their gaze behavior was recorded. We identified fixation and blink durations to be effective indicators of the participants' language proficiencies. Based on these findings, we propose a classification scheme and technical guidelines for enabling language proficiency awareness on information displays using gaze data.
D. Weiskopf, M. Burch, L. L. Chuang, B. Fischer, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016. doi: 10.1007/978-3-319-47024-5_7.
Abstract
This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative means of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
N. Flad, J. C. Ditz, A. Schmidt, H. H. Bülthoff, and L. L. Chuang, “Data-Driven Approaches to Unrestricted Gaze-Tracking Benefit from Saccade Filtering,” in Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS), M. Burch, L. L. Chuang, and A. T. Duchowski, Eds. IEEE, 2016, pp. 1–5. doi: 10.1109/ETVIS.2016.7851156.
Abstract
Unrestricted gaze tracking that allows for head and body movements can enable us to understand interactive gaze behavior with large-scale visualizations. Approaches that support this by simultaneously recording eye and user movements can be based on either geometric or data-driven regression models. A data-driven approach can be implemented more flexibly, but its performance can suffer with poor-quality training data. In this paper, we introduce a pre-processing procedure that removes training data from periods when the gaze is not fixating the presented target stimuli. Our procedure is based on a velocity-based filter for rapid eye movements (i.e., saccades). Our results show that this additional procedure improved the accuracy of our unrestricted gaze-tracking model by as much as 56%. We propose future improvements to data-driven approaches for unrestricted gaze tracking, in order to allow for more complex dynamic visualizations.
J. Karolus, P. W. Woźniak, and L. L. Chuang, “Towards Using Gaze Properties to Detect Language Proficiency,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI). New York, NY, USA: ACM, 2016, pp. 118:1–118:6. doi: 10.1145/2971485.2996753.
Abstract
Humans are inherently skilled at using subtle physiological cues from other persons, for example gaze direction in a conversation. Personal computers have yet to explore this implicit input modality. In a study with 14 participants, we investigate how a user's gaze can be leveraged in adaptive computer systems. In particular, we examine the impact of different languages on eye movements by presenting simple questions in multiple languages to our participants. We found that fixation duration is sufficient to ascertain whether a user is highly proficient in a given language. We propose how these findings could be used to implement adaptive visualizations that react implicitly to the user's gaze.
L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze, “MAGIC-Pointing on Large High-Resolution Displays,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, Extended Abstracts (CHI-EA), J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds. ACM, 2016, pp. 1706–1712. doi: 10.1145/2851581.2892479.
Abstract
Display space in offices has constantly increased over the last decades. We believe that this trend will continue and ultimately result in the use of wall-sized displays in the future office. One of the most challenging tasks when interacting with large high-resolution displays is target acquisition. The most important challenges reported in previous work are the long distances that need to be traveled with the pointer, while still enabling precise selection, as well as locating the pointer on the large display. In this paper, we investigate whether MAGIC-Pointing, controlling the pointer through eye gaze, can help overcome both challenges. We implemented MAGIC-Pointing for a 2.85 m x 1.13 m display. Using this system, we conducted a target selection study. The results show that using MAGIC-Pointing to select targets on wall-sized displays significantly decreases task completion time, and it also decreases the users' task load. We therefore argue that MAGIC-Pointing can help make interaction with wall-sized displays usable.
L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen Arrangements and Interaction Areas for Large Display Work Places,” in Proceedings of the ACM International Symposium on Pervasive Displays (PerDis), vol. 5. ACM, 2016, pp. 228–234. doi: 10.1145/2914920.2915027.
Abstract
Size and resolution of computer screens are constantly increasing. Individual screens can easily be combined into wall-sized displays. This enables computer displays that are folded, straight, bow-shaped, or even spread out. As the possibilities for arranging the screens are manifold, it is unclear which arrangements are appropriate. Moreover, it is unclear how content and applications should be arranged on such large displays. To determine guidelines for the arrangement of multiple screens and for content and application layouts, we conducted a design study. In the study, we asked 16 participants to arrange a large screen setup as well as to create layouts of multiple common application windows. Based on the results, we provide a classification for screen arrangements and interaction areas. We identified that screen space should be divided into a central area for interactive applications and peripheral areas, mainly for displaying additional content.
L. Lischke, J. Grüninger, K. Klouche, A. Schmidt, P. Slusallek, and G. Jacucci, “Interaction Techniques for Wall-Sized Screens,” Proceedings of the International Conference on Interactive Tabletops & Surfaces (ITS), pp. 501–504, 2015. doi: 10.1145/2817721.2835071.
Abstract
Large screen displays are part of many future visions, such as i-LAND, which describes the possible workspace of the future. Research has shown that wall-sized screens provide clear benefits for data exploration, collaboration, and organizing work in office environments. With the increase in computational power and falling display prices, wall-sized screens are currently making the step out of research labs and specific settings into office environments and private life. Today, there is no standard set of interaction techniques for interacting with wall-sized displays, and it is even unclear whether a single mode of input is suitable for all potential applications. In this workshop, we will bring together researchers from academia and industry who work on large screens. Together, we will survey current research directions, review promising interaction techniques, and identify the underlying fundamental research challenges.
L. Lischke, P. Knierim, and H. Klinke, “Mid-Air Gestures for Window Management on Large Displays,” in Mensch und Computer 2015 – Tagungsband (MuC), S. Diefenbach, N. Henze, and M. Pielot, Eds. De Gruyter, 2015, pp. 439–442. hdl: 20.500.12116/7858.
Abstract
We can observe a continuous trend toward using larger screens with higher resolutions and greater pixel density. With advances in hardware and software technology, wall-sized displays for daily office work are already on the horizon. We assume that there will be no hard paradigm change in interaction techniques in the near future. Therefore, new concepts for wall-sized displays will be included in existing products. Designing interaction concepts for wall-sized displays in an office environment is a challenging task. Most crucial is designing appropriate input techniques. Moving the mouse pointer from one corner to another over a long distance is cumbersome. However, pointing with a mouse is precise and commonplace. We propose using mid-air gestures to support input with mouse and keyboard on large displays. In particular, we designed a gesture set for manipulating regular windows.