T. Kosch, J. Karolus, J. Zagermann, H. Reiterer, A. Schmidt, and P. W. Woźniak, “A Survey on Measuring Cognitive Workload in Human-Computer Interaction,”
ACM Comput. Surv., Jan. 2023, doi:
10.1145/3582272.
Abstract
The ever-increasing number of computing devices around us results in more and more systems competing for our attention, making cognitive workload a crucial factor for the user experience of human-computer interfaces. Research in Human-Computer Interaction (HCI) has used various metrics to determine users’ mental demands. However, there is no systematic way to choose an appropriate and effective measure of cognitive workload for experimental setups, which poses a challenge to reproducibility. To address this challenge, we present a literature survey of past and current metrics for cognitive workload used throughout the HCI literature. By first exploring what cognitive workload means in the HCI context, we derive a categorization supporting researchers and practitioners in selecting cognitive workload metrics for system design and evaluation. We conclude with three research gaps: (1) defining and interpreting cognitive workload in HCI, (2) the hidden cost of the NASA-TLX, and (3) HCI research as a catalyst for workload-aware systems, highlighting that HCI research has to deepen and conceptualize the understanding of cognitive workload in the context of interactive computing systems.
S. Hubenschmid, J. Zagermann, D. Leicht, H. Reiterer, and T. Feuchtner, “ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory,” in
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). New York, NY, USA: ACM, 2023. doi:
10.1145/3544548.3581438.
Abstract
Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user’s visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a “sweet spot” for extending smartphones with augmented reality, informing the design of hybrid user interfaces.
F. Chiossi
et al., “Adapting visualizations and interfaces to the user,”
it - Information Technology, vol. 64, no. 4–5, 2022, doi:
10.1515/itit-2022-0035.
Abstract
Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.
P. Fleck, A. Sousa Calepso, S. Hubenschmid, M. Sedlmair, and D. Schmalstieg, “RagRug: A Toolkit for Situated Analytics,”
IEEE Transactions on Visualization and Computer Graphics, 2022, doi:
10.1109/TVCG.2022.3157058.
Abstract
We present RagRug, an open-source toolkit for situated analytics. The abilities of RagRug go beyond previous immersive analytics toolkits by focusing on specific requirements emerging when using augmented reality (AR) rather than virtual reality. RagRug combines state-of-the-art visual encoding capabilities with a comprehensive physical-virtual model, which lets application developers systematically describe the physical objects in the real world and their role in AR. We connect AR visualization with data streams from the Internet of Things using distributed dataflow. To this aim, we use reactive programming patterns so that visualizations become context-aware, i.e., they adapt to events coming in from the environment. The resulting authoring system is low-code; it emphasises describing the physical and the virtual world and the dataflow between the elements contained therein. We describe the technical design and implementation of RagRug, and report on five example applications illustrating the toolkit's abilities.
J. Zagermann
et al., “Complementary Interfaces for Visual Computing,”
it - Information Technology, vol. 64, no. 4–5, 2022, doi:
10.1515/itit-2022-0031.
Abstract
With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user’s workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term complementary interfaces for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from within our own research.
S. Hubenschmid
et al., “ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies,” in
CHI Conference on Human Factors in Computing Systems (CHI ’22). New York, NY: ACM, 2022, pp. 1–20. doi:
10.1145/3491102.3517550.
Abstract
The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.
D. I. Fink, J. Zagermann, H. Reiterer, and H.-C. Jetter, “Re-Locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces,”
Proc. ACM Hum.-Comput. Interact., vol. 6, no. ISS, Nov. 2022, doi:
10.1145/3567709.
Abstract
Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other's activities is limited as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The idea is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration such as spatial awareness and spatial referencing by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR.
S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics,” in
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery, 2021. doi:
10.1145/3411764.3445298.
Abstract
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work, we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.
J. Wieland, J. Zagermann, J. Müller, and H. Reiterer, “Separation, Composition, or Hybrid? Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality,” in
2021 IEEE International Symposium on Mixed and Augmented Reality. Piscataway, NJ: IEEE, 2021, pp. 403–412. doi:
10.1109/ISMAR52148.2021.00057.
S. Hubenschmid, J. Zagermann, D. Fink, J. Wieland, T. Feuchtner, and H. Reiterer, “Towards Asynchronous Hybrid User Interfaces for Cross-Reality Interaction,” in
ISS’21 Workshop Proceedings: “Transitional Interfaces in Mixed and Cross-Reality: A new frontier?,” H.-C. Jetter, J.-H. Schröder, J. Gugenheimer, M. Billinghurst, C. Anthes, M. Khamis, and T. Feuchtner, Eds., 2021. doi:
10.18148/kops/352-2-84mm0sggczq02.
Abstract
Hybrid user interfaces combine cross-reality devices (e.g., head-mounted display) with other heterogeneous device technologies (e.g., smartphone) to compensate for the disadvantages of one device with the advantages of the other, such as addressing the lack of haptic feedback in mid-air interaction with touchscreen input. Such hybrid user interfaces typically involve the synchronous use of multiple input and output technologies. In this work, we instead consider the asynchronous use of heterogeneous devices (e.g., using a desktop and virtual reality device in sequence). While the sequential use of different technologies does not necessarily offset the individual device-specific disadvantages, it allows users to choose the more appropriate device for a particular sub-task. In this context, transitional interfaces play an essential role in enabling the switch between devices, while allowing the user to maintain a mental connection between realities and seamlessly continue with their task where they left off.
K. Vock, S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “IDIAR: Augmented Reality Dashboards to Supervise Mobile Intervention Studies,” in
Mensch und Computer 2021 (MuC ’21). New York, NY: ACM, 2021. doi:
10.1145/3473856.3473876.
Abstract
Mobile intervention studies employ mobile devices to observe participants’ behavior change over several weeks. Researchers regularly monitor high-dimensional data streams to ensure data quality and prevent data loss (e.g., missing engagement or malfunctions). The multitude of problem sources hampers possible automated detection of such irregularities – providing a use case for interactive dashboards. With the advent of untethered head-mounted AR devices, these dashboards can be placed anywhere in the user's physical environment, leveraging the available space and allowing for flexible information arrangement and natural navigation. In this work, we present the user-centered design and the evaluation of IDIAR: Interactive Dashboards in AR, combining a head-mounted display with the familiar interaction of a smartphone. A user study with 15 domain experts for mobile intervention studies shows that participants appreciated the multimodal interaction approach. Based on our findings, we provide implications for research and design of interactive dashboards in AR.
J. Zagermann, U. Pfeil, P. von Bauer, D. Fink, and H. Reiterer, “‘It’s in my other hand!’: Studying the Interplay of Interaction Techniques and Multi-Tablet Activities,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems. 2020, pp. 413:1–413:13. doi:
10.1145/3313831.3376540.
Abstract
Cross-device interaction with tablets is a popular topic in HCI research. Recent work has shown the benefits of including multiple devices into users’ workflows while various interaction techniques allow transferring content across devices. However, users remain reluctant to use multiple devices in combination. At the same time, research on cross-device interaction struggles to find a frame of reference to compare techniques or systems. In this paper, we try to address these challenges by studying the interplay of interaction techniques, device utilization, and task-specific activities in a user study with 24 participants from different but complementary angles of evaluation using an abstract task, a sensemaking task, and three interaction techniques. We found that different interaction techniques have a lower influence than expected, that work behaviors and device utilization depend on the task at hand, and that participants value specific aspects of cross-device interaction.
M. Borowski, J. Zagermann, C. N. Klokmose, H. Reiterer, and R. Rädle, “Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments,” in
Proceedings of the ACM Technical Symposium on Computer Science Education (SIGCSE). 2020, pp. 468–474. doi:
10.1145/3328778.3366887.
Abstract
Programming assignments in computer science courses are often processed in pairs or groups of students. While working together, students face several shortcomings in today's software: The lack of real-time collaboration capabilities, the setup time of the development environment, and the use of different devices or operating systems can hamper students when working together on assignments. Text processing platforms like Google Docs solve these problems for the writing process of prose text, and computational notebooks like Google Colaboratory for data analysis tasks. However, none of these platforms allows users to implement interactive applications. We deployed a web-based literate programming system for three months during an introductory course on application development to explore how collaborative programming practices unfold and how the structure of computational notebooks affects the development. During the course, pairs of students solved weekly programming assignments. We analyzed data from weekly questionnaires, three focus groups with students and teaching assistants, and keystroke-level log data to facilitate the understanding of the subtleties of collaborative programming with computational notebooks. Findings reveal that there are distinct collaboration patterns; the preferred collaboration pattern varied between pairs and even varied within pairs over the course of three months. Recognizing these distinct collaboration patterns can help to design future computational notebooks for collaborative programming assignments.
F. Bishop, J. Zagermann, U. Pfeil, G. Sanderson, H. Reiterer, and U. Hinrichs, “Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children,”
IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, 2020, doi:
10.1109/TVCG.2019.2934804.
Abstract
Building data analysis skills is part of modern elementary school curricula. Recent research has explored how to facilitate children's understanding of visual data representations through completion exercises which highlight links between concrete and abstract mappings. This approach scaffolds visualization activities by presenting a target visualization to children. But how can we engage children in more free-form visual data mapping exercises that are driven by their own mapping ideas? How can we scaffold a creative exploration of visualization techniques and mapping possibilities? We present Construct-A-Vis, a tablet-based tool designed to explore the feasibility of free-form and constructive visualization activities with elementary school children. Construct-A-Vis provides adjustable levels of scaffolding for visual mapping processes. It can be used by children individually or as part of collaborative activities. Findings from a study with elementary school children using Construct-A-Vis individually and in pairs highlight the potential of this free-form constructive approach, as visible in children's diverse visualization outcomes and their critical engagement with the data and mapping processes. Based on our study findings, we contribute insights into the design of free-form visualization tools for children, including the role of tool-based scaffolding mechanisms and shared interactions to guide visualization activities with children.
J. Müller, J. Zagermann, J. Wieland, U. Pfeil, and H. Reiterer, “A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices,” in
Mensch und Computer 2019 – Tagungsband (MuC), F. Alt, A. Bulling, and T. Döring, Eds. GI, ACM, 2019, pp. 399–410. doi:
10.1145/3340764.3340773.
Abstract
Handheld Augmented Reality (AR) displays offer a see-through option to create the illusion of virtual objects being integrated into the viewer’s physical environment. Some AR display technologies also allow for the deactivation of the see-through option, turning AR tablets into Virtual Reality (VR) devices that integrate the virtual objects into an exclusively virtual environment. Both display configurations are typically available on handheld devices, raising the question of their influence on users’ experience during collaborative activities. In two experiments, we studied how the different display configurations influence user experience, workload, and team performance of co-located and distributed collaborators during a spatial referencing task. A mixed-methods approach revealed that participants’ opinions were polarized towards the two display configurations, regardless of the spatial distribution of collaboration. Based on our findings, we identify critical aspects to be addressed in future research to better understand and support co-located and distributed collaboration using AR and VR displays.
S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “Employing Tangible Visualisations in Augmented Reality with Mobile Devices,” in
Proceedings of the Working Conference on Advanced Visual Interfaces (AVI). 2018, pp. 1–4. [Online]. Available:
http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1iooenfo4fofm8
Abstract
Recent research has demonstrated the benefits of mixed realities for information visualisation. Often the focus lies on the visualisation itself, leaving interaction opportunities through different modalities largely unexplored. Yet, mixed reality in particular can benefit from a combination of different modalities. This work examines an existing mixed reality visualisation which is combined with a large tabletop for touch interaction. Although this allows for familiar operation, the approach comes with some limitations which we address by employing mobile devices, thus adding tangibility and proxemics as input modalities.
L. L. Chuang and U. Pfeil, “Transparency and Openness Promotion Guidelines for HCI,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds. ACM, 2018, pp. SIG04:1–SIG04:4. doi:
10.1145/3170427.3185377.
Abstract
This special interest group addresses the status quo of HCI research with regards to research practices of transparency and openness. Specifically, it discusses whether current practices are in line with the standards applied to other fields (e.g., psychology, economics, medicine). It seeks to identify current practices that are more progressive and worth communicating to other disciplines, while evaluating whether practices in other disciplines are likely to apply to HCI research constructively. Potential outcomes include: (1) a review of current HCI research policies, (2) a report on recommended practices, and (3) a replication project of key findings in HCI research.
M. Blumenschein
et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” in
Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), R. Chang, H. Qu, and T. Schreck, Eds. IEEE, 2018, pp. 36–47. doi:
10.1109/VAST.2018.8802486.
Abstract
We present SMARTexplore, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst's trust in the revealed patterns, SMARTexplore automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTexplore.
J. Zagermann, U. Pfeil, and H. Reiterer, “Studying Eye Movements as a Basis for Measuring Cognitive Load,”
Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), pp. LBW095:1–LBW095:6, 2018, doi:
10.1145/3170427.3188628.
Abstract
Users' cognitive load while interacting with a system is a valuable metric for evaluations in HCI. We encourage the analysis of eye movements as an unobtrusive and widely available way to measure cognitive load. In this paper, we report initial findings from a user study with 26 participants working on three visual search tasks that represent different levels of difficulty. Also, we linearly increased the cognitive demand while participants solved the tasks. This allowed us to analyze the reaction of individual eye movements to different levels of task difficulty. Our results show how pupil dilation, blink rate, and the number of fixations and saccades per second individually react to changes in cognitive activity. We discuss how these measurements could be combined in future work to allow for a comprehensive investigation of cognitive load in interactive settings.
J. Zagermann, U. Pfeil, C. Acevedo, and H. Reiterer, “Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-device Workspace,” in
Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM). 2017, pp. 249–259. doi:
10.1145/3152832.3152855.
Abstract
In recent years, research on cross-device interaction has become a popular topic in HCI leading to novel interaction techniques mutually interfering with new evolving theoretical paradigms. Building on previous research, we implemented an individual multi-device work environment for creative activities. In a study with 20 participants, we compared a traditional toolbar-based condition with two conditions facilitating spatially distributed tools on digital panels and on physical devices. We analyze participants' interactions with the tools, encountered problems and corresponding solutions, as well as subjective task load and user experience. Our findings show that the spatial distribution of tools indeed offers advantages, but also elicits new problems that can partly be mitigated by the physical affordances of mobile devices.
J. Zagermann, U. Pfeil, D. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-based Input Modalities on Spatial Memory,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds. ACM, 2017, pp. 1899–1910. doi:
10.1145/3025453.3026001.
Abstract
People's ability to remember and recall spatial information can be harnessed to improve navigation and search performances in interactive systems. In this paper, we investigate how display size and input modality influence spatial memory, especially in relation to efficiency and user satisfaction. Based on an experiment with 28 participants, we analyze the effect of three input modalities (trackpad, direct touch, and gesture-based motion controller) and two display sizes (10.6" and 55") on people's ability to navigate to spatially spread items and recall their positions. Our findings show that the impact of input modality and display size on spatial memory is not straightforward, but characterized by trade-offs between spatial memory, efficiency, and user satisfaction.
D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” in
Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), vol. 3. 2017, pp. 164–175. doi:
10.5220/0006265101640175.
Abstract
Dimensionality reduction (DR) techniques aim to reduce the amount of considered dimensions, yet preserving as much information as possible. According to many visualization researchers, DR results lack interpretability, in particular for domain experts not familiar with machine learning or advanced statistics. Thus, interactive visual methods have been extensively researched for their ability to improve transparency and ease the interpretation of results. However, these methods have primarily been evaluated using case studies and interviews with experts trained in DR. In this paper, we describe a phenomenological analysis investigating whether researchers with no or only limited training in machine learning or advanced statistics can interpret the depiction of a data projection and what their incentives are during interaction. We, therefore, developed an interactive system for DR, which unifies mixed data types as they appear in real-world data. Based on this system, we provided data analysts of a Law Enforcement Agency (LEA) with dimensionally-reduced crime data and let them explore and analyze domain-relevant tasks without providing further conceptual information. Results of our study reveal that these untrained experts encounter few difficulties in interpreting the results and drawing conclusions given a domain-relevant use case and their experience. We further discuss the results based on collected informal feedback and observations.
L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen Arrangements and Interaction Areas for Large Display Work Places,” in
Proceedings of the ACM International Symposium on Pervasive Displays (PerDis), T. Ojala, V. Kostakos, J. Müller, and N. Memarovic, Eds. ACM, 2016, pp. 228–234. doi:
10.1145/2914920.2915027.
Abstract
Size and resolution of computer screens are constantly increasing. Individual screens can easily be combined to wall-sized displays. This enables computer displays that are folded, straight, bow-shaped or even spread. As possibilities for arranging the screens are manifold, it is unclear what arrangements are appropriate. Moreover, it is unclear how content and applications should be arranged on such large displays. To determine guidelines for the arrangement of multiple screens and for content and application layouts, we conducted a design study. In the study, we asked 16 participants to arrange a large screen setup as well as to create layouts of multiple common application windows. Based on the results we provide a classification for screen arrangements and interaction areas. We identified that screen space should be divided into a central area for interactive applications and peripheral areas, mainly for displaying additional content.
J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring Cognitive Load using Eye Tracking Technology in Visual Computing,” in
Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), M. Sedlmair, P. Isenberg, T. Isenberg, N. Mahyar, and H. Lam, Eds. ACM, 2016, pp. 78–85. doi:
10.1145/2993901.2993908.
Abstract
In this position paper we encourage the use of eye tracking measurements to investigate users' cognitive load while interacting with a system. We start with an overview of how eye movements can be interpreted to provide insight about cognitive processes and present a descriptive model representing the relations of eye movements and cognitive load. Then, we discuss how specific characteristics of human-computer interaction (HCI) interfere with the model and impede the application of eye tracking data to measure cognitive load in visual computing. As a result, we present a refined model, embedding the characteristics of HCI into the relation of eye tracking data and cognitive load. Based on this, we argue that eye tracking should be considered as a valuable instrument to analyze cognitive processes in visual computing and suggest future research directions to tackle outstanding issues.
J. Zagermann, U. Pfeil, R. Rädle, H.-C. Jetter, C. N. Klokmose, and H. Reiterer, “When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds. ACM, 2016, pp. 5470–5481. doi:
10.1145/2858036.2858224.
Abstract
Cross-device collaboration with tablets is an increasingly popular topic in HCI. Previous work has shown that tablet-only collaboration can be improved by an additional shared workspace on an interactive tabletop. However, large tabletops are costly and need space, raising the question to what extent the physical size of shared horizontal surfaces really pays off. In order to analyse the suitability of smaller-than-tabletop devices (e.g. tablets) as a low-cost alternative, we studied the effect of the size of a shared horizontal interactive workspace on users' attention, awareness, and efficiency during cross-device collaboration. In our study, 15 groups of two users executed a sensemaking task with two personal tablets (9.7") and a horizontal shared display of varying sizes (10.6", 27", and 55"). Our findings show that different sizes lead to differences in participants' interaction with the tabletop and in the groups' communication styles. To our own surprise we found that larger tabletops do not necessarily improve collaboration or sensemaking results, because they can divert users' attention away from their collaborators and towards the shared display.
S. Butscher and H. Reiterer, “Applying Guidelines for the Design of Distortions on Focus+Context Interfaces,” in
Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), P. Buono, R. Lanzilotti, M. Matera, and M. F. Costabile, Eds. ACM, 2016, pp. 244–247. doi:
10.1145/2909132.2909284.
Abstract
Distortion-based visualization techniques allow users to examine focused regions of a multiscale space at high scales but preserve their contextual information. However, the distortion can come at the cost of confusion, disorientation and impairment of the users' spatial memory. Yet, how distortions influence users' ability to build up spatial memory, while taking into account human skills of perception, interpretation and comprehension, remains underexplored. This note reports findings of an experimental comparison between a distortion-based focus+context interface and an undistorted overview+detail interface. The focus+context technique follows guidelines for the design of comprehensible distortions: make use of real-world metaphors, visual clues like shading, smooth transitions and scaled-only focus regions. The results show that the focus+context technique designed following these guidelines helps to keep track of the position within the multiscale space and does not impair users' spatial memory.
J. Müller, R. Rädle, and H. Reiterer, “Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems, J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds. ACM, 2016, pp. 1245–1249. doi:
10.1145/2858036.2858043.
Abstract
In collaborative activities, collaborators can use physical objects in their shared environment as spatial cues to guide each other's attention. Collaborative mixed reality environments (MREs) include both physical and digital objects. To study how virtual objects influence collaboration and whether they are used as spatial cues, we conducted a controlled lab experiment with 16 dyads. Results of our study show that collaborators favored the digital objects as spatial cues over the physical environment and the physical objects: Collaborators used significantly fewer deictic gestures in favor of more disambiguous verbal references and a decreased subjective workload when virtual objects were present. This suggests adding additional virtual objects as spatial cues to MREs to improve user experience during collaborative mixed reality tasks.
L. Lischke
et al., “Using Space: Effect of Display Size on Users’ Search Performance,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), B. Begole, J. Kim, K. Inkpen, and W. Woo, Eds. ACM, 2015, pp. 1845–1850. doi:
10.1145/2702613.2732845.
Abstract
Due to advances in technology, large displays with very high resolution have become affordable for daily work. Today, it is possible to build display walls with a pixel density that is comparable to standard office screens. Previous work indicates that physical navigation enables a deeper engagement with the data set. In particular, the visibility of detailed data subsets on large screens supports the user's work and understanding of large data. In contrast to previous work, we explore how users' performance scales with an increasing amount of large display space when working with text documents. In a controlled experiment, we determine participants' performance when searching for titles and images in large text documents using one to six 50" 4K monitors. Our results show that the users' visual search performance does not linearly increase with an increasing amount of display space.