N. Rodrigues, C. Schulz, S. Döring, D. Baumgartner, T. Krake, and D. Weiskopf, “Relaxed Dot Plots: Faithful Visualization of Samples and Their Distribution,”
IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Jan. 2023, doi:
10.1109/TVCG.2022.3209429.
Abstract
We introduce relaxed dot plots as an improvement of nonlinear dot plots for unit visualization. Our plots produce more faithful data representations and reduce moiré effects. Their contour is based on a customized kernel frequency estimation to match the shape of the distribution of underlying data values. Previous nonlinear layouts introduce column-centric nonlinear scaling of dot diameters for visualization of high-dynamic-range data with high peaks. We provide a mathematical approach to convert that column-centric scaling to our smooth envelope shape. This formalism allows us to use linear, root, and logarithmic scaling to find ideal dot sizes. Our method iteratively relaxes the dot layout for more correct and aesthetically pleasing results. To achieve this, we modified Lloyd's algorithm with additional constraints and heuristics. We evaluate the layouts of relaxed dot plots against a previously existing nonlinear variant and show that our algorithm produces less error regarding the underlying data while establishing the blue noise property that works against moiré effects. Further, we analyze the readability of our relaxed plots in three crowd-sourced experiments. The results indicate that our proposed technique surpasses traditional dot plots.
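The relaxation described above builds on Lloyd's algorithm, which the authors extend with constraints and heuristics not reproduced here. A minimal sketch of plain, unconstrained Lloyd relaxation on dot centers, approximating Voronoi-cell centroids by Monte Carlo sampling (all names hypothetical):

    import numpy as np

    def lloyd_relax(points, bounds, iterations=10, samples=20000, seed=0):
        # Move each point to the centroid of its Voronoi cell, approximated
        # by nearest-neighbor assignment of random samples in the bounding box.
        rng = np.random.default_rng(seed)
        (xmin, ymin), (xmax, ymax) = bounds
        pts = np.asarray(points, dtype=float).copy()
        for _ in range(iterations):
            s = rng.uniform([xmin, ymin], [xmax, ymax], size=(samples, 2))
            d = ((s[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)  # nearest dot center per sample
            for i in range(len(pts)):
                cell = s[nearest == i]
                if len(cell):
                    pts[i] = cell.mean(axis=0)  # centroid of cell i
        return pts

    relaxed = lloyd_relax(np.random.rand(50, 2), ((0.0, 0.0), (1.0, 1.0)))

The paper's variant additionally keeps dots inside the kernel-estimated envelope; this sketch omits those constraints.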
F. Chiossi et al., “Adapting visualizations and interfaces to the user,”
it - Information Technology, vol. 64, no. 4–5, 2022, doi:
10.1515/itit-2022-0035.
Abstract
Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.
M. Koch, D. Weiskopf, and K. Kurzhals, “A Spiral into the Mind: Gaze Spiral Visualization for Mobile Eye Tracking,”
Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 5, no. 2, May 2022, doi:
10.1145/3530795.
Abstract
Comparing mobile eye tracking data from multiple participants without information about areas of interest (AOIs) is challenging because of individual timing and coordinate systems. We present a technique, the gaze spiral, that visualizes individual recordings based on image content of the stimulus. The spiral layout of the slitscan visualization is used to create a compact representation of scanpaths. The visualization provides an overview of multiple recordings even for long time spans and helps identify and annotate recurring patterns within recordings. The gaze spirals can also serve as glyphs that can be projected to 2D space based on established scanpath metrics in order to interpret the metrics and identify groups of similar viewing behavior. We present examples based on two egocentric datasets to demonstrate the effectiveness of our approach for annotation and comparison tasks. Our examples show that the technique has the potential to let users compare even long-term recordings of pervasive scenarios without manual annotation.
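The spiral layout maps the linear timeline of a recording onto an Archimedean spiral; a minimal sketch of that time-to-spiral mapping (the slit-scan content itself is omitted, and parameter names are hypothetical):

    import numpy as np

    def spiral_positions(n_frames, spacing=1.0, step=1.0):
        # Archimedean spiral r = a * theta with roughly constant arc length
        # between consecutive frames, so equal time spans occupy equal length.
        a = spacing / (2 * np.pi)
        theta = np.empty(n_frames)
        theta[0] = 2 * np.pi  # start after the first turn to avoid the origin
        for i in range(1, n_frames):
            r = a * theta[i - 1]
            theta[i] = theta[i - 1] + step / r  # d(theta) ~ ds / r
        r = a * theta
        return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

    xy = spiral_positions(5000)  # one 2D anchor per video frame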
P. Schäfer, N. Rodrigues, D. Weiskopf, and S. Storandt, “Group Diagrams for Simplified Representation of Scanpaths,” in
Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI). ACM, Aug. 2022. doi:
10.1145/3554944.3554971.
Abstract
We instrument Group Diagrams (GDs) to reduce clutter in sets of eye-tracking scanpaths. Group Diagrams consist of trajectory subsets that cover, or represent, the whole set of trajectories with respect to some distance measure and an adjustable distance threshold. The original GDs allow for an application of various distance measures. We implement the GD framework and evaluate it on scanpaths that were collected in an earlier user study on public transit maps. We find that the Fréchet distance is the most appropriate measure to get meaningful results, yet it is flexible enough to cover outliers. We discuss several implementation-specific challenges and improve the scalability of the algorithm.
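For readers unfamiliar with the distance measure named above: a common computable stand-in for the continuous Fréchet distance on sampled scanpaths is the discrete Fréchet distance of Eiter and Mannila, sketched here; the paper's GD framework itself is not reproduced.

    import numpy as np

    def discrete_frechet(p, q):
        # Dynamic program over all index pairs of the two polylines.
        p, q = np.asarray(p, float), np.asarray(q, float)
        n, m = len(p), len(q)
        d = lambda i, j: np.linalg.norm(p[i] - q[j])
        ca = np.full((n, m), -1.0)
        ca[0, 0] = d(0, 0)
        for i in range(1, n):
            ca[i, 0] = max(ca[i - 1, 0], d(i, 0))
        for j in range(1, m):
            ca[0, j] = max(ca[0, j - 1], d(0, j))
        for i in range(1, n):
            for j in range(1, m):
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                               d(i, j))
        return ca[-1, -1]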
T. Krake, A. Bruhn, B. Eberhardt, and D. Weiskopf, “Efficient and Robust Background Modeling with Dynamic Mode Decomposition,”
Journal of Mathematical Imaging and Vision, 2022, doi:
10.1007/s10851-022-01068-0.
Abstract
A large number of modern video background modeling algorithms deal with computationally costly minimization problems that often need parameter adjustments. While in most cases spatial and temporal constraints are added artificially to the minimization process, our approach is to exploit Dynamic Mode Decomposition (DMD), a spectral decomposition technique that naturally extracts spatio-temporal patterns from data. Applied to video data, DMD can compute background models. However, the original DMD algorithm for background modeling is neither efficient nor robust. In this paper, we present an equivalent reformulation with constraints leading to a more suitable decomposition into fore- and background. Due to the reformulation, which uses sparse and low-dimensional structures, an efficient and robust algorithm is derived that computes accurate background models. Moreover, we show how our approach can be extended to RGB data, data with periodic parts, and streaming data, enabling versatile use.
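As background for the discussion above, a minimal sketch of basic (exact) DMD on a video matrix whose columns are flattened frames; the paper's constrained reformulation is not reproduced, and the truncation rank r is a free parameter:

    import numpy as np

    def dmd(X, r):
        # X: (pixels, frames). Fit a low-rank linear operator mapping each
        # frame to the next, then eigendecompose it.
        X1, X2 = X[:, :-1], X[:, 1:]
        U, s, Vh = np.linalg.svd(X1, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
        Sinv = np.diag(1.0 / s)
        Atilde = U.conj().T @ X2 @ Vh.conj().T @ Sinv
        eigvals, W = np.linalg.eig(Atilde)
        modes = X2 @ Vh.conj().T @ Sinv @ W  # exact DMD modes
        return eigvals, modes

    # A background model keeps the mode(s) whose eigenvalue lies near 1
    # on the unit circle (near-zero frequency, no growth or decay).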
F. Schreiber and D. Weiskopf, “Quantitative Visual Computing,”
it - Information Technology, vol. 64, no. 4–5, 2022, doi:
10.1515/itit-2022-0048.
T. Krake, D. Klötzl, B. Eberhardt, and D. Weiskopf, “Constrained Dynamic Mode Decomposition,”
IEEE Transactions on Visualization and Computer Graphics, pp. 1–11, 2022, doi:
10.1109/TVCG.2022.3209437.
Abstract
Frequency-based decomposition of time series data is used in many visualization applications. Most of these decomposition methods (such as Fourier transform or singular spectrum analysis) only provide interaction via pre- and post-processing, but no means to influence the core algorithm. A method that also belongs to this class is Dynamic Mode Decomposition (DMD), a spectral decomposition method that extracts spatio-temporal patterns from data. In this paper, we incorporate frequency-based constraints into DMD for an adaptive decomposition that leads to user-controllable visualizations, allowing analysts to include their knowledge into the process. To accomplish this, we derive an equivalent reformulation of DMD that implicitly provides access to the eigenvalues (and therefore to the frequencies) identified by DMD. By utilizing a constrained minimization problem customized to DMD, we can guarantee the existence of desired frequencies by minimal changes to DMD. We complement this core approach by additional techniques for constrained DMD to facilitate explorative visualization and investigation of time series data. With several examples, we demonstrate the usefulness of constrained DMD and compare it to conventional frequency-based decomposition methods.
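The correspondence that such frequency constraints rely on can be spelled out: for a sampling interval \Delta t, each DMD eigenvalue \lambda encodes a frequency and growth rate, so prescribing a frequency f^* with growth rate \sigma^* amounts to demanding a particular eigenvalue. In LaTeX (these are the standard DMD relations, not the paper's derivation):

    f = \frac{\operatorname{Im}(\ln \lambda)}{2\pi\,\Delta t}, \qquad
    \sigma = \frac{\operatorname{Re}(\ln \lambda)}{\Delta t}, \qquad
    \lambda^{*} = e^{(\sigma^{*} + 2\pi i f^{*})\,\Delta t}.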
N. Rodrigues, L. Shao, J. J. Yan, T. Schreck, and D. Weiskopf, “Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection,” in
2022 Symposium on Eye Tracking Research and Applications. Seattle, WA, USA: Association for Computing Machinery, 2022, pp. 59:1–59:7. doi:
10.1145/3517031.3531165.
Abstract
We propose a three-step concept and visual design for supporting the visual exploration of high-dimensional data in scatterplots through eye-tracking. First, we extract subsets in the underlying data using existing classifications, automated clustering algorithms, or eye-tracking. For the latter, we map gaze to the underlying data dimensions in the scatterplot. Clusters of data points that have been the focus of the viewers’ gaze are marked as clusters of interest (eye-mind hypothesis). In a second step, our concept extracts various properties from statistics and scagnostics from the clusters. The third step uses these measures to compare the current data clusters from the main scatterplot to the same data in other dimensions. The results enable analysts to retrieve similar or dissimilar views as guidance to explore the entire data set. We provide a proof-of-concept implementation as a test bench and describe a use case to show a practical application and initial results.
T. Krake, M. von Scheven, J. Gade, M. Abdelaal, D. Weiskopf, and M. Bischoff, “Efficient Update of Redundancy Matrices for Truss and Frame Structures,”
Journal of Theoretical, Computational and Applied Mechanics, 2022, doi:
10.46298/jtcam.9615.
Abstract
Redundancy matrices provide insights into the load carrying behavior of statically indeterminate structures. This information can be employed for the design and analysis of structures with regard to certain objectives, for example, reliability, robustness, or adaptability. In this context, the structure is often iteratively examined with the help of slight adjustments. However, this procedure generally requires a high computational effort for the recalculation of the redundancy matrix due to the necessity of costly matrix operations. This paper addresses this problem by providing generic algebraic formulations for efficiently updating the redundancy matrix (and related matrices). The formulations include various modifications like adding, removing, and exchanging elements and are applicable to truss and frame structures. With several examples, we demonstrate the interaction between the formulas and their mechanical interpretation. Finally, a performance test for a scalable structure is presented.
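The efficiency argument above is the familiar one for low-rank updates: instead of refactoring from scratch, one reuses the previous result. As a generic illustration (the rank-one Sherman–Morrison identity, not the paper's specific redundancy-matrix formulas):

    (A + u v^{\top})^{-1} \;=\; A^{-1} \;-\; \frac{A^{-1} u\, v^{\top} A^{-1}}{1 + v^{\top} A^{-1} u},

which turns an O(n^3) refactorization into an O(n^2) update when a single element is added, removed, or exchanged.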
K. Angerbauer et al., “Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures,” in
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. New Orleans, LA, USA: Association for Computing Machinery, 2022. doi:
10.1145/3491102.3502133.
Abstract
We present an exploratory study on the accessibility of images in publications when viewed with color vision deficiencies (CVDs). The study is based on 1,710 images sampled from a visualization dataset (VIS30K) over five years. We simulated four CVDs on each image. First, four researchers (one with a CVD) identified existing issues and helpful aspects in a subset of the images. Based on the resulting labels, 200 crowdworkers provided 30,000 ratings on present CVD issues in the simulated images. We analyzed this data for correlations, clusters, trends, and free text comments to gain a first overview of paper figure accessibility. Overall, about 60 % of the images were rated accessible. Furthermore, our study indicates that accessibility issues are subjective and hard to detect. On a meta-level, we reflect on our study experience to point out challenges and opportunities of large-scale accessibility studies for future research directions.
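The paper's exact simulation pipeline is not reproduced here, but CVD simulation of the kind described is available off the shelf; a minimal sketch using the third-party colorspacious package (Machado et al. model), with usage as an assumption about a typical setup:

    import numpy as np
    from colorspacious import cspace_convert  # pip install colorspacious

    def simulate_cvd(img_srgb, cvd_type="deuteranomaly", severity=100):
        # img_srgb: float array with sRGB values in [0, 1], shape (..., 3).
        cvd_space = {"name": "sRGB1+CVD",
                     "cvd_type": cvd_type,   # also: protanomaly, tritanomaly
                     "severity": severity}   # 0 (none) to 100 (dichromacy)
        return np.clip(cspace_convert(img_srgb, cvd_space, "sRGB1"), 0, 1)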
L. Zhou, C. R. Johnson, and D. Weiskopf, “Data-Driven Space-Filling Curves,”
IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, 2021, doi:
10.1109/TVCG.2020.3030473.
Abstract
We propose a data-driven space-filling curve method for 2D and 3D visualization. Our flexible curve traverses the data elements in the spatial domain in a way that the resulting linearization better preserves features in space compared to existing methods. We achieve such data coherency by calculating a Hamiltonian path that approximately minimizes an objective function that describes the similarity of data values and location coherency in a neighborhood. Our extended variant even supports multiscale data via quadtrees and octrees. Our method is useful in many areas of visualization, including multivariate or comparative visualization, ensemble visualization of 2D and 3D data on regular grids, and multiscale visual analysis of particle simulations. The effectiveness of our method is evaluated with numerical comparisons to existing techniques and through examples of ensemble and multivariate datasets.
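A hypothetical sketch of the kind of local objective such a Hamiltonian path could minimize, trading off value similarity against location coherency (the weighting and functional form are illustrative assumptions, not the paper's exact objective):

    import numpy as np

    def edge_cost(data, p, q, alpha=0.5):
        # Cost of stepping from grid cell p to neighboring cell q:
        # a value-similarity term plus a location-coherency term.
        value_term = abs(data[p] - data[q])
        location_term = np.hypot(p[0] - q[0], p[1] - q[1])
        return alpha * value_term + (1 - alpha) * location_term

    # The total path cost is the sum of edge costs along a Hamiltonian
    # path visiting every cell exactly once; the paper minimizes this
    # approximately rather than exactly.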
T. Krake, S. Reinhardt, M. Hlawatsch, B. Eberhardt, and D. Weiskopf, “Visualization and Selection of Dynamic Mode Decomposition Components for Unsteady Flow,”
Visual Informatics, vol. 5, no. 3, 2021, doi:
10.1016/j.visinf.2021.06.003.
Abstract
Dynamic Mode Decomposition (DMD) is a data-driven and model-free decomposition technique. It is suitable for revealing spatio-temporal features of both numerically and experimentally acquired data. Conceptually, DMD performs a low-dimensional spectral decomposition of the data into the following components: the modes, called DMD modes, encode the spatial contribution of the decomposition, whereas the DMD amplitudes specify their impact. Each associated eigenvalue, referred to as DMD eigenvalue, characterizes the frequency and growth rate of the DMD mode. In this paper, we demonstrate how the components of DMD can be utilized to obtain temporal and spatial information from time-dependent flow fields. We begin with the theoretical background of DMD and its application to unsteady flow. Next, we examine the conventional process with DMD mathematically and put it in relationship to the discrete Fourier transform. Our analysis shows that the current use of DMD components has several drawbacks. To resolve these problems we adjust the components and provide new and meaningful insights into the decomposition: we show that our improved components capture the spatio-temporal patterns of the flow better. Moreover, we remove redundancies in the decomposition and clarify the interplay between components, allowing users to understand the impact of components. These new representations, which respect the spatio-temporal character of DMD, enable two clustering methods that segment the flow into physically relevant sections and can therefore be used for the selection of DMD components. With a number of typical examples, we demonstrate that the combination of these techniques allows new insights with DMD for unsteady flow.
R. Bian et al., “Implicit Multidimensional Projection of Local Subspaces,”
IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, 2021, doi:
10.1109/TVCG.2020.3030368.
Abstract
We propose a visualization method to understand the effect of multidimensional projection on local subspaces, using implicit function differentiation. Here, we understand the local subspace as the multidimensional local neighborhood of data points. Existing methods focus on the projection of multidimensional data points, and the neighborhood information is ignored. Our method is able to analyze the shape and directional information of the local subspace to gain more insights into the global structure of the data through the perception of local structures. Local subspaces are fitted by multidimensional ellipses that are spanned by basis vectors. An accurate and efficient vector transformation method is proposed based on analytical differentiation of multidimensional projections formulated as implicit functions. The results are visualized as glyphs and analyzed using a full set of specifically-designed interactions supported in our efficient web-based visualization tool. The usefulness of our method is demonstrated using various multi- and high-dimensional benchmark datasets. Our implicit differentiation vector transformation is evaluated through numerical comparisons; the overall method is evaluated through exploration examples and use cases.
M. Burch, W. Huang, M. Wakefield, H. C. Purchase, D. Weiskopf, and J. Hua, “The State of the Art in Empirical User Evaluation of Graph Visualizations,”
IEEE Access, vol. 9, pp. 4173–4198, 2021, doi:
10.1109/ACCESS.2020.3047616.
Abstract
While graph drawing focuses more on the aesthetic representation of node-link diagrams, graph visualization takes into account other visual metaphors, making them useful for graph exploration tasks in information visualization and visual analytics. Although there are aesthetic graph drawing criteria that describe how a graph should be presented to make it faster and more reliably explorable, many controlled and uncontrolled empirical user studies have flourished over the past years. Their goal is to uncover how well the human user performs graph-specific tasks, in many cases compared to previously designed graph visualizations. Because many parameters in a graph dataset as well as its visual representation might be varied, and many user studies have been conducted in this space, a state-of-the-art survey is needed to understand evaluation results and findings to inform the future design, research, and application of graph visualizations. In this article, we classify the present literature on the topmost level into graph interpretation, graph memorability, and graph creation, where the users with their tasks stand in the focus of the evaluation, not the computational aspects. As another outcome of this work, we identify the unexplored areas in this field and sketch ideas for future research directions.
L. Zhou, M. Rivinius, C. R. Johnson, and D. Weiskopf, “Photographic High-Dynamic-Range Scalar Visualization,”
IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 6, 2020, doi:
10.1109/TVCG.2020.2970522.
Abstract
We propose a photographic method to show scalar values of high dynamic range (HDR) by color mapping for 2D visualization. We combine (1) tone-mapping operators that transform the data to the display range of the monitor while preserving perceptually important features, based on a systematic evaluation, and (2) simulated glares that highlight high-value regions. Simulated glares are effective for highlighting small areas (of a few pixels) that may not be visible with conventional visualizations; through a controlled perception study, we confirm that glare is preattentive. The usefulness of our overall photographic HDR visualization is validated through the feedback of expert users.
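A minimal stand-in for the two-part pipeline described above, combining a global Reinhard-style tone-mapping operator with a Gaussian-blur glare around highlights (the paper evaluates several operators and uses a more principled glare model; the threshold and sigma here are placeholder parameters):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tonemap_with_glare(hdr, threshold=0.95, sigma=5.0):
        # hdr: nonnegative scalar field of arbitrary dynamic range.
        ldr = hdr / (1.0 + hdr)                      # Reinhard: L / (1 + L)
        peaks = np.where(ldr > threshold, ldr, 0.0)  # isolate highlights
        glare = gaussian_filter(peaks, sigma)        # spread them as a glow
        return np.clip(ldr + glare, 0.0, 1.0)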
R. Garcia and D. Weiskopf, “Inner-Process Visualization of Hidden States in Recurrent Neural Networks,” in
Proceedings of the 13th International Symposium on Visual Information Communication and Interaction. Eindhoven, Netherlands: ACM, 2020, pp. 20:1–20:5. doi:
10.1145/3430036.3430047.
Abstract
In this paper, we introduce a visualization technique aimed to help machine learning experts to analyze the hidden states of layers in recurrent neural networks (RNNs). Our technique allows the user to visually inspect how hidden states store and process information throughout the feeding of an input sequence into the network. It can answer questions such as which parts of the input data had a higher impact on the prediction and how the model correlates each hidden state configuration with a certain output. Our visualization comprises several components: our input visualization shows the input sequence and how it relates to the output (using color coding); hidden states are visualized by nonlinear projection to 2-D visualization space via t-SNE in order to understand the shape of the space of hidden states; time curves are employed to show the details of the evolution of hidden state configurations; and a time-multi-class heatmap matrix visualizes the evolution of expected predictions for multi-class classifiers. To demonstrate the capability of our approach, we discuss two typical use cases for long short-term memory (LSTM) models applied to two widely used natural language processing (NLP) datasets.
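A minimal sketch of the hidden-state projection step, assuming the hidden states have already been collected per time step as a (timesteps, hidden_dim) array; connecting consecutive 2D points in time then yields the time-curve view mentioned above:

    import numpy as np
    from sklearn.manifold import TSNE

    # Placeholder for real LSTM hidden states, e.g. collected with
    # return_sequences=True in Keras: one row per input time step.
    hidden = np.random.rand(120, 256)

    xy = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(hidden)
    # xy[t] is the 2D position of the hidden state at time step t.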
A. Kumar, D. Mohanty, K. Kurzhals, F. Beck, D. Weiskopf, and K. Mueller, “Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data,” in
ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany: Association for Computing Machinery, 2020. doi:
10.1145/3379157.3391988.
Abstract
Eye movement data analysis plays an important role in examining human cognitive processes and perceptions. Such analysis at times requires recording data from additional sources during experiments. In this paper, we study a pair-programming-based collaboration using two eye trackers, stimulus recording, and an external camera recording. To analyze the collected data, we introduce the EyeSAC system, which synchronizes the data from different sources and removes the noisy and missing gazes from eye tracking data with the help of visual feedback from the external recording. The synchronized and cleaned data is further annotated using our system and then exported for further analysis.
S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in
Proceedings of the Symposium on Eye Tracking Research & Applications – Short Paper (ETRA-SP). ACM, 2020, pp. 49:1–49:5. doi:
10.1145/3379156.3391835.
Abstract
Gaze tracking in 3D has the potential to improve interaction with objects and visualizations in augmented reality. However, previous research showed that subjective perception of distance varies between real and virtual surroundings. We wanted to determine whether objectively measured 3D gaze depth through eye tracking also exhibits differences between entirely real and augmented environments. To this end, we conducted an experiment (N = 25) in which we used Microsoft HoloLens with a binocular eye tracking add-on from Pupil Labs. Participants performed a task that required them to look at stationary real and virtual objects while wearing a HoloLens device. We were not able to find significant differences in the gaze depth measured by eye tracking. Finally, we discuss our findings and their implications for gaze interaction in immersive analytics, and the quality of the collected gaze data.
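A generic vergence-based way to obtain 3D gaze depth from binocular eye tracking, sketched as the midpoint of the shortest segment between the two gaze rays (a standard geometric construction, not necessarily the Pupil Labs pipeline used in the study):

    import numpy as np

    def gaze_depth(o_l, d_l, o_r, d_r):
        # o_*: eye positions, d_*: unit gaze directions (left/right eye).
        # Closest points between rays P(s) = o_l + s*d_l, Q(t) = o_r + t*d_r.
        w = o_l - o_r
        a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
        d_, e_ = d_l @ w, d_r @ w
        denom = a * c - b * b  # ~0 for (near-)parallel rays
        s = (b * e_ - c * d_) / denom
        t = (a * e_ - b * d_) / denom
        p = 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))  # estimated gaze point
        return np.linalg.norm(p - 0.5 * (o_l + o_r))   # depth from eye midpoint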
A. Kumar, P. Howlader, R. Garcia, D. Weiskopf, and K. Mueller, “Challenges in Interpretability of Neural Networks for Eye Movement Data,” in
ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany: Association for Computing Machinery, 2020. doi:
10.1145/3379156.3391361.
Abstract
Many applications in eye tracking have been increasingly employing neural networks to solve machine learning tasks. In general, neural networks have achieved impressive results in many problems over the past few years, but they still suffer from the lack of interpretability due to their black-box behavior. While previous research on explainable AI has been able to provide high levels of interpretability for models in image classification and natural language processing tasks, little effort has been put into interpreting and understanding networks trained with eye movement datasets. This paper discusses the importance of developing interpretability methods specifically for these models. We characterize the main problems for interpreting neural networks with this type of data, how they differ from the problems faced in other domains, and why existing techniques are not sufficient to address all of these issues. We present preliminary experiments showing the limitations that current techniques have and how we can improve upon them. Finally, based on the evaluation of our experiments, we suggest future research directions that might lead to more interpretable and explainable neural networks for eye tracking.
D. Weiskopf, “Vis4Vis: Visualization for (Empirical) Visualization Research,” in
Foundations of Data Visualization, M. Chen, H. Hauser, P. Rheingans, and G. Scheuermann, Eds. Springer International Publishing, 2020, pp. 209–224. doi:
10.1007/978-3-030-34444-3_10.
Abstract
Appropriate evaluation is a key component in visualization research. It is typically based on empirical studies that assess visualization components or complete systems. While such studies often include the user of the visualization, empirical research is not necessarily restricted to user studies but may also address the technical performance of a visualization system such as its computational speed or memory consumption. Any such empirical experiment faces the issue that the underlying visualization is becoming increasingly sophisticated, leading to an increasingly difficult evaluation in complex environments. Therefore, many of the established methods of empirical studies can no longer capture the full complexity of the evaluation. One promising solution is the use of data-rich observations that we can acquire during studies to obtain more reliable interpretations of empirical research. For example, we have been witnessing an increasing availability and use of physiological sensor information from eye tracking, electrodermal activity sensors, electroencephalography, etc. Other examples are various kinds of logs of user activities such as mouse, keyboard, or touch interaction. Such data-rich empirical studies promise to be especially useful for studies in the wild and similar scenarios outside of the controlled laboratory environment. However, with the growing availability of large, complex, time-dependent, heterogeneous, and unstructured observational data, we are facing the new challenge of how we can analyze such data. This challenge can be addressed by establishing the subfield of visualization for visualization (Vis4Vis): visualization as a means of analyzing and communicating data from empirical studies to advance visualization research.
L. Merino, M. Schwarzl, M. Kraus, M. Sedlmair, D. Schmalstieg, and D. Weiskopf, “Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019),” in
IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020. doi:
10.1109/ISMAR50242.2020.00069.
Abstract
We present a systematic review of 458 papers that report on evaluations in mixed and augmented reality (MR/AR) published in ISMAR, CHI, IEEE VR, and UIST over a span of 11 years (2009–2019). Our goal is to provide guidance for future evaluations of MR/AR approaches. To this end, we characterize publications by paper type (e.g., technique, design study), research topic (e.g., tracking, rendering), evaluation scenario (e.g., algorithm performance, user performance), cognitive aspects (e.g., perception, emotion), and the context in which evaluations were conducted (e.g., lab vs. in-the-wild). We found a strong coupling of types, topics, and scenarios. We observe two groups: (a) technology-centric performance evaluations of algorithms that focus on improving tracking, displays, reconstruction, rendering, and calibration, and (b) human-centric studies that analyze implications of applications and design, human factors on perception, usability, decision making, emotion, and attention. Amongst the 458 papers, we identified 248 user studies that involved 5,761 participants in total, of whom only 1,619 were identified as female. We identified 43 data collection methods used to analyze 10 cognitive aspects. We found nine objective methods, and eight methods that support qualitative analysis. A majority (216/248) of user studies are conducted in a laboratory setting. Often (138/248), such studies involve participants in a static way. However, we also found a fair number (30/248) of in-the-wild studies that involve participants in a mobile fashion. We consider this paper to be relevant to academia and industry alike in presenting the state-of-the-art and guiding the steps to designing, conducting, and analyzing results of evaluations in MR/AR.
N. Rodrigues, C. Schulz, A. Lhuillier, and D. Weiskopf, “Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces,” in
Proceedings of Graphics Interface 2020. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2020, pp. 382–392. doi:
10.20380/GI2020.38.
Abstract
We present a novel variant of parallel coordinates plots (PCPs) in which we show clusters in 2D subspaces of multivariate data and emphasize flow between them. We achieve this by duplicating and stacking individual axes vertically. On a high level, our cluster-flow layout shows how data points move from one cluster to another in different subspaces. We achieve cluster-based bundling and limit plot growth through the reduction of available vertical space for each duplicated axis. Although we introduce space between clusters, we preserve the readability of intra-cluster correlations by starting and ending with the original slopes from regular PCPs and drawing Hermite spline segments in between. Moreover, our rendering technique enables the visualization of small and large data sets alike. Cluster-flow PCPs can even propagate the uncertainty inherent to fuzzy clustering through the layout and rendering stages of our pipeline. Our layout algorithm is based on A*. It achieves an optimal result with regard to a novel set of cost functions that allow us to arrange axes horizontally (dimension ordering) and vertically (cluster ordering).
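The Hermite segments mentioned above can be written down directly; a minimal sketch of one cubic Hermite curve between two axis points with prescribed end slopes (which is how the original PCP slopes can be preserved at both segment ends):

    import numpy as np

    def hermite_segment(y0, y1, m0, m1, n=32):
        # Evaluate h(t) = h00*y0 + h10*m0 + h01*y1 + h11*m1 on [0, 1],
        # where m0, m1 are the slopes at the start and end points.
        t = np.linspace(0.0, 1.0, n)
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return h00 * y0 + h10 * m0 + h01 * y1 + h11 * m1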
N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in
Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). Stuttgart, Germany: ACM, 2020, pp. 50:1–50:5. doi:
10.1145/3379156.3391829.
Abstract
Visualization in virtual 3D environments can provide a natural way for users to explore data. However, interaction in augmented reality often requires arm and head movements, which can be tiring and strenuous. In an effort toward more user-friendly interaction, we developed a prototype that allows users to manipulate virtual objects using a combination of eye gaze and an external clicker device. Using this prototype, we performed a user study comparing four different input methods, of which head gaze plus clicker was preferred by most participants.
K. Kurzhals et al., “Visual Analytics and Annotation of Pervasive Eye Tracking Video,” in
Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). Stuttgart, Germany: ACM, 2020, pp. 16:1–16:9. doi:
10.1145/3379155.3391326.
Abstract
We propose a new technique for visual analytics and annotation of long-term pervasive eye tracking data for which a combined analysis of gaze and egocentric video is necessary. Our approach enables two important tasks for such data for hour-long videos from individual participants: (1) efficient annotation and (2) direct interpretation of the results. Exemplary time spans can be selected by the user and are then used as a query that initiates a fuzzy search of similar time spans based on gaze and video features. In an iterative refinement loop, the query interface then provides suggestions for the importance of individual features to improve the search results. A multi-layered timeline visualization shows an overview of annotated time spans. We demonstrate the efficiency of our approach for analyzing activities in about seven hours of video in a case study and discuss feedback on our approach from novices and experts performing the annotation task.
K. Kurzhals, M. Burch, and D. Weiskopf, “What We See and What We Get from Visualization: Eye Tracking Beyond Gaze Distributions and Scanpaths,”
CoRR, vol. abs/2009.14515, 2020. [Online]. Available:
https://arxiv.org/abs/2009.14515
Abstract
Technical progress in hardware and software enables us to record gaze data in everyday situations and over long time spans. Among a multitude of research opportunities, this technology enables visualization researchers to catch a glimpse behind performance measures and into the perceptual and cognitive processes of people using visualization techniques. The majority of eye tracking studies performed for visualization research is limited to the analysis of gaze distributions and aggregated statistics, thus only covering a small portion of insights that can be derived from gaze data. We argue that incorporating theories and methodology from psychology and cognitive science will benefit the design and evaluation of eye tracking experiments for visualization. This position paper outlines our experiences with eye tracking in visualization and states the benefits that an interdisciplinary research field on visualization psychology might bring for better understanding how people interpret visualizations.
N. Silva et al., “Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges,” in
Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), K. Krejtz and B. Sharif, Eds. ACM, 2019, pp. 11:1–11:9. doi:
10.1145/3314111.3319919.
Abstract
Visual analytics (VA) research provides helpful solutions for interactive visual data analysis when exploring large and complex datasets. Due to recent advances in eye tracking technology, promising opportunities arise to extend these traditional VA approaches. Therefore, we discuss foundations for eye tracking support in VA systems. We first review and discuss the structure and range of typical VA systems. Based on a widely used VA model, we present five comprehensive examples that cover a wide range of usage scenarios. Then, we demonstrate that the VA model can be used to systematically explore how concrete VA systems could be extended with eye tracking, to create supportive and adaptive analytics systems. This allows us to identify general research and application opportunities, and classify them into research themes. In a call for action, we map the road for future research to broaden the use of eye tracking and advance visual analytics.
T. Munz, L. L. Chuang, S. Pannasch, and D. Weiskopf, “VisME: Visual microsaccades explorer,”
Journal of Eye Movement Research, vol. 12, no. 6, Dec. 2019, doi:
10.16910/jemr.12.6.5.
Abstract
This work presents a visual analytics approach to explore microsaccade distributions in high-frequency eye tracking data. Research studies often apply filter algorithms and parameter values for microsaccade detection. Even when the same algorithms are employed, different parameter values might be adopted across different studies. In this paper, we present a visual analytics system (VisME) to promote reproducibility in the data analysis of microsaccades. It allows users to interactively vary the parametric values for microsaccade filters and evaluate the resulting influence on microsaccade behavior across individuals and on a group level. In particular, we exploit brushing-and-linking techniques that allow the microsaccadic properties of space, time, and movement direction to be extracted, visualized, and compared across multiple views. We demonstrate in a case study the use of our visual analytics system on data sets collected from natural scene viewing and show in a qualitative usability study the usefulness of this approach for eye tracking researchers. We believe that interactive tools such as VisME will promote greater transparency in eye movement research by providing researchers with the ability to easily understand complex eye tracking data sets; such tools can also serve as teaching systems. VisME is provided as open source software.
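A minimal sketch of the kind of velocity-threshold microsaccade filter whose parameters VisME exposes, in the spirit of Engbert and Kliegl's detection algorithm; the multiplier vfac and the minimum duration min_len are exactly the sort of parameters that vary across studies:

    import numpy as np

    def microsaccades(x, y, fs, vfac=5.0, min_len=3):
        # x, y: gaze positions sampled at fs Hz.
        vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
        # Median-based standard deviation estimates (robust to drift).
        sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
        sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
        above = (vx / (vfac * sx))**2 + (vy / (vfac * sy))**2 > 1
        # Collect runs of consecutive above-threshold samples.
        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if i - start >= min_len:
                    events.append((start, i - 1))
                start = None
        if start is not None and len(above) - start >= min_len:
            events.append((start, len(above) - 1))
        return events  # list of (first, last) sample indices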
Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-Based Aspect Ratio Selection,”
IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, 2019, doi:
10.1109/TVCG.2018.2865266.
Abstract
Selecting a good aspect ratio is crucial for effective 2D diagrams. There are several aspect ratio selection methods for function plots and line charts, but only a few can handle general, discrete diagrams such as 2D scatter plots. However, these methods either lack a perceptual foundation or heavily rely on intermediate isoline representations, which depend on choosing the right isovalues and are time-consuming to compute. This paper introduces a general image-based approach for selecting aspect ratios for a wide variety of 2D diagrams, ranging from scatter plots and density function plots to line charts. Our approach is derived from Federer's co-area formula and a line integral representation that enable us to directly construct image-based versions of existing selection methods using density fields. In contrast to previous methods, our approach bypasses isoline computation, so it is faster to compute, while following the perceptual foundation to select aspect ratios. Furthermore, this approach is complemented by an anisotropic kernel density estimation to construct density fields, allowing us to more faithfully characterize data patterns, such as the subgroups in scatterplots or dense regions in time series. We demonstrate the effectiveness of our approach by quantitatively comparing to previous methods and revisiting a prior user study. Finally, we present extensions for ROI banking, multi-scale banking, and the application to image data.
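For context, the classical slope-based criterion that this line of work generalizes is banking to 45 degrees; a minimal numeric sketch for a line chart (not the paper's image-based, density-field method):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def bank_to_45(x, y):
        # Pick the height/width ratio whose mean absolute segment
        # orientation is 45 degrees.
        dx = np.diff(x) / (x.max() - x.min())  # normalized to unit width
        dy = np.diff(y) / (y.max() - y.min())  # normalized to unit height
        def mean_orientation(alpha):
            return np.degrees(np.arctan2(np.abs(dy) * alpha, np.abs(dx))).mean()
        res = minimize_scalar(lambda a: (mean_orientation(a) - 45.0) ** 2,
                              bounds=(0.05, 20.0), method="bounded")
        return res.x  # aspect ratio = height / width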
V. Bruder, K. Kurzhals, S. Frey, D. Weiskopf, and T. Ertl, “Space-Time Volume Visualization of Gaze and Stimulus,” in
Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), K. Krejtz and B. Sharif, Eds. ACM, 2019, pp. 12:1–12:9. doi:
10.1145/3314111.3319812.
Abstract
We present a method for the spatio-temporal analysis of gaze data from multiple participants in the context of a video stimulus. For such data, an overview of the recorded patterns is important to identify common viewing behavior (such as attentional synchrony) and outliers. We adopt the approach of space-time cube visualization, which extends the spatial dimensions of the stimulus by time as the third dimension. Previous work mainly handled eye tracking data in the space-time cube as point cloud, providing no information about the stimulus context. This paper presents a novel visualization technique that combines gaze data, a dynamic stimulus, and optical flow with volume rendering to derive an overview of the data with contextual information. With specifically designed transfer functions, we emphasize different data aspects, making the visualization suitable for explorative analysis and for illustrative support of statistical findings alike.
R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of Simultaneous Orientation Contrast in Superimposed Textures,” in
Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), A. Kerren, C. Hurter, and J. Braz, Eds., vol. 3: IVAPP. SciTePress, 2019, pp. 48–57. doi:
10.5220/0007356800480057.
L. Zhou, R. Netzel, D. Weiskopf, and C. R. Johnson, “Spectral Visualization Sharpening,” in
Proceedings of the ACM Symposium on Applied Perception (SAP), S. Neyret, E. Kokkinara, M. González-Franco, L. Hoyet, D. W. Cunningham, and J. Swidrak, Eds. ACM, 2019, pp. 18:1–18:9. doi:
10.1145/3343036.3343133.
Abstract
In this paper, we propose a perceptually-guided visualization sharpening technique. We analyze the spectral behavior of an established comprehensive perceptual model to arrive at our approximated model based on an adapted weighting of the bandpass images from a Gaussian pyramid. The main benefit of this approximated model is its controllability and predictability for sharpening color-mapped visualizations. Our method can be integrated into any visualization tool as it adopts generic image-based post-processing, and it is intuitive and easy to use as viewing distance is the only parameter. Using highly diverse datasets, we show the usefulness of our method across a wide range of typical visualizations.
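A minimal sketch of the mechanism described above: decompose the image into difference-of-Gaussian bands of a same-resolution Gaussian pyramid, reweight the bands, and recombine. Equal weights reproduce the input; the paper derives its weights from a perceptual model and the viewing distance, whereas the weights here are arbitrary placeholders:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sharpen(img, weights=(1.0, 1.4, 1.2, 1.0)):
        # Build successively blurred copies; differences form bandpass bands.
        levels = [img]
        for _ in range(len(weights) - 1):
            levels.append(gaussian_filter(levels[-1], sigma=2))
        bands = [levels[i] - levels[i + 1] for i in range(len(levels) - 1)]
        out = weights[-1] * levels[-1]          # lowpass residual
        for w, band in zip(weights, bands):
            out = out + w * band                # reweighted bandpass images
        return np.clip(out, 0.0, 1.0)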
V. Bruder et al., “Volume-Based Large Dynamic Graph Analysis Supported by Evolution Provenance,”
Multimedia Tools and Applications, vol. 78, no. 23, 2019, doi:
10.1007/s11042-019-07878-6.
Abstract
We present an approach for the visualization and interactive analysis of dynamic graphs that contain a large number of time steps. A specific focus is put on the support of analyzing temporal aspects in the data. Central to our approach is a static, volumetric representation of the dynamic graph based on the concept of space-time cubes that we create by stacking the adjacency matrices of all time steps. The use of GPU-accelerated volume rendering techniques allows us to render this representation interactively. We identified four classes of analytics methods as being important for the analysis of large and complex graph data, which we discuss in detail: data views, aggregation and filtering, comparison, and evolution provenance. Implementations of the respective methods are presented in an integrated application, enabling interactive exploration and analysis of large graphs. We demonstrate the applicability, usefulness, and scalability of our approach by presenting two examples for analyzing dynamic graphs. Furthermore, we let visualization experts evaluate our analytics approach.
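The volumetric representation at the core of the approach is easy to state in code; a minimal sketch that stacks per-time-step adjacency matrices into a (time, source, target) volume ready for a volume renderer (the edge-list input format is an assumption):

    import numpy as np

    def graph_volume(edge_lists, n_vertices):
        # edge_lists[t] is an iterable of (source, target, weight) triples
        # for time step t; the result is a dense 3D scalar field.
        vol = np.zeros((len(edge_lists), n_vertices, n_vertices),
                       dtype=np.float32)
        for t, edges in enumerate(edge_lists):
            for i, j, w in edges:
                vol[t, i, j] = w
        return vol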
N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,”
IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, 2018, doi:
10.1109/TVCG.2017.2744018.
Abstract
Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.
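A hypothetical sketch of logarithmic dot-size scaling in the spirit described above: per-column dot diameters are chosen so that column height grows like the logarithm of the count, as in a log histogram (an illustration of the scaling idea, not the paper's two-way sweep layout):

    import numpy as np

    def dot_diameters(counts, max_height=1.0):
        # counts[j]: number of samples in column j. Column height is
        # n_j * d_j, so choosing d_j = height_j / n_j makes the height
        # follow log(1 + n_j), normalized to the tallest column.
        counts = np.asarray(counts, dtype=float)
        heights = max_height * np.log1p(counts) / np.log1p(counts.max())
        return np.where(counts > 0, heights / np.maximum(counts, 1), 0.0)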
M. Behrisch et al., “Quality Metrics for Information Visualization,”
Computer Graphics Forum, vol. 37, no. 3, 2018, doi:
10.1111/cgf.13446.
Abstract
To date, the visualization community has developed many intuitions and understandings of how to judge the quality of views in visualizing data. The computation of a visualization's quality and usefulness ranges from measuring clutter and overlap, up to the existence and perception of specific (visual) patterns. This survey attempts to report, categorize, and unify the diverse understandings and aims to establish a common vocabulary that will enable a wide audience to understand their differences and subtleties. For this purpose, we present a commonly applicable quality metric formalization that should detail and relate all constituting parts of a quality metric. We organize our corpus of reviewed research papers along the data types established in the information visualization community: multi- and high-dimensional, relational, sequential, geospatial, and text data. For each data type, we select the visualization subdomains in which quality metrics are an active research field and report their findings, reason on the underlying concepts, describe goals, and outline the constraints and requirements. One central goal of this survey is to provide guidance on future research opportunities for the field and outline how different visualization communities could benefit from each other by applying or transferring knowledge to their respective subdomain. Additionally, we aim to motivate the visualization community to compare computed measures to the perception of humans.
C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in
Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT). IEEE, 2018, pp. 87–95. doi:
10.1109/VISSOFT.2018.00017.
Abstract
We propose a tree visualization technique for comparison of structures and attributes across multiple hierarchies. Many software systems are structured hierarchically by design. For example, developers subdivide source code into libraries, modules, and functions. This design propagates to software configuration and business processes, rendering software hierarchies even more important. Often these structural elements are attributed with reference counts, code quality metrics, and the like. Throughout the entire software life cycle, these hierarchies are reviewed, integrated, debugged, and changed many times by different people so that the identity of a structural element and its attributes is not clearly traceable. We argue that pairwise comparison of similar trees is a tedious task due to the lack of overview, especially when applied to a large number of hierarchies. Therefore, we strive to visualize multiple similar trees as a whole by merging them into one supertree. To merge structures and combine attributes from different trees, we leverage the Jaccard similarity and solve a matching problem while keeping track of the origin of a structure element and its attributes. Our visualization approach allows users to inspect these supertrees using node-link diagrams and indented tree plots. The nodes in these plots depict aggregated attributes and, using word-sized line plots, detailed data. We demonstrate the usefulness of our method by exploring the evolution of software repositories and debugging data processing pipelines using provenance data.
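The matching criterion named above is simple to state; a minimal sketch of the Jaccard similarity between two node-label sets, which can score candidate matches when merging hierarchies into a supertree (the scoring loop is a hypothetical usage example):

    def jaccard(a, b):
        # |intersection| / |union| of two label sets; 1.0 for two empty sets.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    # e.g., match a node against candidates from another hierarchy:
    # best = max(candidates, key=lambda c: jaccard(node_labels, c_labels))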
N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale Scanpath Visualization and Filtering,” in
Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), L. L. Chuang, M. Burch, and K. Kurzhals, Eds. ACM, 2018, pp. 2:1–2:5. doi:
10.1145/3205929.3205931.
Abstract
The analysis of eye-tracking data can be very useful when evaluating controlled user studies. To support the analysis in a fast and easy fashion, we have developed a web-based framework for a visual inspection of eye-tracking data and a comparison of scanpaths based on filtering of fixations and similarity measures. Concerning the first part, we introduce a multiscale aggregation of fixations and saccades based on a spatial partitioning that reduces visual clutter of overlaid scanpaths without changing the overall impression of large-scale eye movements. The multiscale technique abstracts the individual scanpaths and allows an analyst to visually identify clusters or patterns inherent to the gaze data without the need for lengthy precomputations. For the second part, we introduce an approach where analysts can remove fixations from a pair of scanpaths in order to increase the similarity between them. This can be useful to discover and understand reasons for dissimilarity between scanpaths, data cleansing, and outlier detection. Our implementation uses the MultiMatch algorithm to predict similarities after the removal of individual fixations. Finally, we demonstrate the usefulness of our techniques in a use case with scanpaths that were recorded in a study with metro maps.
R. Netzel, M. Hlawatsch, M. Burch, S. Balakrishnan, H. Schmauder, and D. Weiskopf, “An Evaluation of Visual Search Support in Maps,”
IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, 2017, doi:
10.1109/TVCG.2016.2598898.
Abstract
Visual search can be time-consuming, especially if the scene contains a large number of possibly relevant objects. An instance of this problem is present when using geographic or schematic maps with many different elements representing cities, streets, sights, and the like. Unless the map is well-known to the reader, the full map or at least large parts of it must be scanned to find the elements of interest. In this paper, we present a controlled eye-tracking study (30 participants) to compare four variants of map annotation with labels: within-image annotations, grid reference annotation, directional annotation, and miniature annotation. Within-image annotation places labels directly within the map without any further search support. Grid reference annotation corresponds to the traditional approach known from atlases. Directional annotation utilizes a label in combination with an arrow pointing in the direction of the label within the map. Miniature annotation shows a miniature grid to guide the reader to the area of the map in which the label is located. The study results show that within-image annotation is outperformed by all other annotation approaches. Best task completion times are achieved with miniature annotation. The analysis of eye-movement data reveals that participants applied significantly different visual task solution strategies for the different visual annotations.
M. Burch, M. Hlawatsch, and D. Weiskopf, “Visualizing a Sequence of a Thousand Graphs (or Even More),”
Computer Graphics Forum, vol. 36, no. 3, 2017, doi:
10.1111/cgf.13185.
Abstract
The visualization of dynamic graphs demands visually encoding at least three major data dimensions: vertices, edges, and time steps. Many of the state-of-the-art techniques can show an overview of vertices and edges but lack a data-scalable visual representation of the time aspect. In this paper, we address the problem of displaying dynamic graphs with a thousand or more time steps. Our proposed interleaved parallel edge splatting technique uses a time-to-space mapping and shows the complete dynamic graph in a static visualization. It provides an overview of all data dimensions, allowing for visually detecting time-varying data patterns; hence, it serves as a starting point for further data exploration. By applying clustering and ordering techniques on the vertices, edge splatting on the links, and a dense time-to-space mapping, our approach becomes visually scalable in all three dynamic graph data dimensions. We illustrate the usefulness of our technique by applying it to call graphs and US domestic flight data with several hundred vertices, several thousand edges, and more than a thousand time steps.
K. Kurzhals, E. Çetinkaya, Y. Hu, W. Wang, and D. Weiskopf, “Close to the Action: Eye-tracking Evaluation of Speaker-following Subtitles,” in
Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2017, pp. 6559–6568. doi:
10.1145/3025453.3025772.
Abstract
The incorporation of subtitles in multimedia content plays an important role in communicating spoken content. For example, subtitles in the respective language are often preferred to expensive audio translation of foreign movies. The traditional representation of subtitles displays text centered at the bottom of the screen. This layout can lead to large distances between text and relevant image content, causing eye strain and even missed visual content. As a recent alternative, the technique of speaker-following subtitles places subtitle text in speech bubbles close to the current speaker. We conducted a controlled eye-tracking laboratory study (n = 40) to compare the regular approach (center-bottom subtitles) with content-sensitive, speaker-following subtitles. We compared different dialog-heavy video clips with the two layouts. Our results show that speaker-following subtitles lead to higher fixation counts on relevant image regions and reduce saccade length, which is an important factor for eye strain.
N. Rodrigues et al., “Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants,” in
Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI). ACM, 2017, pp. 37–44. doi:
10.1145/3105971.3105982.
Abstract
Visualizing time series data with a spatial context is a problem that appears more and more often, since small and lightweight GPS devices allow us to enrich the time series data with position information. One example is the visualization of the energy output of power plants. We present a web-based application that aims to provide information about the energy production of a specified region, along with location information about the power plants. The application is intended to be used as a solid data basis for political discussions, nudging, and storytelling about the German energy transition to renewables, called "Energiewende". It was therefore designed to be intuitive, easy to use, and provide information for a broad spectrum of users that do not need any domain-specific knowledge. Users are able to select different categories of power plants and look up their positions on an overview map. Glyphs indicate their exact positions and a selection mechanism allows users to compare the power output on different time scales using stacked area charts or ThemeRivers. As an evaluation of the application, we have collected web access statistics and conducted an online survey with respect to the intuitiveness, usability, and informativeness.
K. Kurzhals, M. Hlawatsch, C. Seeger, and D. Weiskopf, “Visual Analytics for Mobile Eye Tracking,”
IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, 2017, doi:
10.1109/TVCG.2016.2598695.
Abstract
The analysis of eye tracking data often requires the annotation of areas of interest (AOIs) to derive semantic interpretations of human viewing behavior during experiments. This annotation is typically the most time-consuming step of the analysis process. Especially for data from wearable eye tracking glasses, every independently recorded video has to be annotated individually and corresponding AOIs between videos have to be identified. We provide a novel visual analytics approach to ease this annotation process by image-based, automatic clustering of eye tracking data integrated in an interactive labeling and analysis system. The annotation and analysis are tightly coupled by multiple linked views that allow for a direct interpretation of the labeled data in the context of the recorded video stimuli. The components of our analytics environment were developed with a user-centered design approach in close cooperation with an eye tracking expert. We demonstrate our approach with eye tracking data from a real experiment and compare it to an analysis of the data by manual annotation of dynamic AOIs. Furthermore, we conducted an expert user study with 6 external eye tracking researchers to collect feedback and identify analysis strategies they used while working with our application.
N. Rodrigues, M. Burch, L. Di Silvestro, and D. Weiskopf, “A Visual Analytics Approach for Word Relevances in Multiple Texts,” in
Proceedings of the International Conference on Information Visualisation (IV). IEEE, 2017, pp. 1–7. doi:
10.1109/iV.2017.62.
Abstract
We investigate the problem of analyzing word frequencies in multiple text sources with the aim to give an overview of word-based similarities in several texts as a starting point for further analysis. To reach this goal, we designed a visual analytics approach composed of typical stages and processes, combining algorithmic analysis, visualization techniques, the human users with their perceptual abilities, as well as interaction methods for both the data analysis and the visualization component. Through our algorithmic analysis, we first generate a multivariate dataset in which the words form the cases and the individual text sources the attributes. Real-valued relevances express the significance of each word in each of the text sources. From the visualization perspective, we describe how this multivariate dataset can be visualized to generate, confirm, rebuild, refine, or reject hypotheses with the goal to derive meaning, knowledge, and insights from several text sources. We discuss benefits and drawbacks of the visualization approaches when analyzing word relevances in multiple texts.
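A minimal sketch of building such a words-by-sources relevance matrix, using TF-IDF as a stand-in for the paper's relevance measure (the texts are placeholders):

    from sklearn.feature_extraction.text import TfidfVectorizer

    texts = ["first document text ...",
             "second document text ...",
             "third document text ..."]

    vec = TfidfVectorizer()
    # Rows are words (the cases), columns are text sources (the attributes).
    relevance = vec.fit_transform(texts).T.toarray()
    words = vec.get_feature_names_out()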
K. Kurzhals, M. Stoll, A. Bruhn, and D. Weiskopf, “FlowBrush: Optical Flow Art,” in
Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (EXPRESSIVE, co-located with SIGGRAPH). ACM, 2017, pp. 1:1–1:9. doi:
10.1145/3092912.3092914.
Abstract
The depiction of motion in static representations has a long tradition in art and science alike. Often, motion is depicted by spatio-temporal summarizations that try to preserve as much information of the original dynamic content as possible. In our approach to depicting motion, we remove the spatial constraints and generate new content steered by the temporal changes in motion. Applying particle steering in combination with the dynamic color palette of the video content, we can create a wide range of different image styles. With recorded videos, or by live interaction with a webcam, one can influence the resulting image. We provide a set of intuitive parameters to affect the style of the result; the final image content depends on the video input. Based on a collection of results gathered from test users, we discuss example styles that can be achieved with FlowBrush. In general, our approach provides an open sandbox for creative people to generate aesthetic images from any video content they apply.
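The core idea, advecting particles with optical flow and depositing the video's colors along their paths, can be sketched in a few lines with OpenCV. The flow method (Farneback) and all parameters below are simplifying assumptions; the paper's particle steering is more elaborate.

    # Sketch of the core FlowBrush idea: steer particles with optical flow and
    # deposit the video's colors along their paths. Parameters are assumptions.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.mp4")          # any video file or webcam index
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    h, w = prev_gray.shape
    particles = np.random.rand(500, 2) * [w, h]  # random start positions (x, y)
    canvas = np.zeros_like(prev)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        xi = particles[:, 0].astype(int).clip(0, w - 1)
        yi = particles[:, 1].astype(int).clip(0, h - 1)
        particles += flow[yi, xi]                # steer particles by the flow
        for x, y in zip(xi, yi):
            cv2.circle(canvas, (x, y), 1, frame[y, x].tolist(), -1)
        prev_gray = gray

    cv2.imwrite("flowbrush.png", canvas)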
C. Schulz, N. Rodrigues, K. Damarla, A. Henicke, and D. Weiskopf, “Visual Exploration of Mainframe Workloads,” in
Proceedings of the SIGGRAPH Asia Symposium on Visualization. ACM, 2017, pp. 4:1–4:7. doi:
10.1145/3139295.3139312.
Abstract
We present a visual analytics approach to support the workload management process for z/OS mainframes at IBM. This process typically requires the analysis of records consisting of 100 to 150 performance-related metrics, sampled over time. We aim at replacing the previous spreadsheet-based workflow with an easier, faster, and more scalable one regarding measurement periods and collected performance metrics. To achieve this goal, we collaborate with a developer embedded at IBM in a formative process. Based on that experience, we discuss the application background and formulate requirements to support decision making based on performance data for large-scale systems. Our visual approach helps analysts find outliers, patterns, and relations between performance metrics by data exploration through various visualizations. We demonstrate the usefulness and applicability of line plots, scatter plots, scatter plot matrices, parallel coordinates, and correlation matrices for workload management. Finally, we evaluate our approach in a qualitative user study with IBM domain experts.
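As an example of one of the listed views, the following sketch computes and displays a correlation matrix over a handful of sampled metrics. The metric names and values are made up; the real records contain 100 to 150 metrics.

    # Minimal sketch: a correlation-matrix view over sampled performance
    # metrics. Metric names and values are hypothetical.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    metrics = pd.DataFrame({
        "cpu_busy": rng.random(200),
        "io_rate":  rng.random(200),
        "paging":   rng.random(200),
    })
    metrics["latency"] = 0.7 * metrics["io_rate"] + 0.3 * rng.random(200)

    corr = metrics.corr()  # Pearson correlation matrix
    plt.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
    plt.xticks(range(len(corr)), corr.columns, rotation=45)
    plt.yticks(range(len(corr)), corr.columns)
    plt.colorbar(label="correlation")
    plt.show()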
R. Netzel, J. Vuong, U. Engelke, S. I. O’Donoghue, D. Weiskopf, and J. Heinrich, “Comparative Eye-tracking Evaluation of Scatterplots and Parallel Coordinates,”
Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi:
10.1016/j.visinf.2017.11.001.
Abstract
We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further shows significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants' gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants' attention is biased: toward the center of the whole plot for parallel coordinates, and toward the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.
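The study task can be made concrete with a small sketch: given a reference point and two targets in n-dimensional space, the correct answer is the target with the smaller Euclidean distance. The stimulus generation below is illustrative only, showing how a 25% relative-distance condition could be constructed.

    # Sketch of the study task: which of two targets is closer to a reference
    # point in n-D space? The stimulus generation is illustrative.
    import numpy as np

    rng = np.random.default_rng(7)
    dims = 8
    reference = rng.random(dims)
    near = rng.random(dims)
    d_near = np.linalg.norm(near - reference)
    # Place the far target so its distance exceeds the near one by 25%.
    direction = rng.standard_normal(dims)
    direction /= np.linalg.norm(direction)
    far = reference + direction * d_near * 1.25

    closer = "near" if d_near < np.linalg.norm(far - reference) else "far"
    print(closer)  # correct answer for this trial: "near"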
D. Weiskopf, M. Burch, L. L. Chuang, B. Fischer, and A. Schmidt,
Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016. doi:
10.1007/978-3-319-47024-5.
Abstract
This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative way of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf, “Gaze Stripes: Image-based Visualization of Eye Tracking Data,”
IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, doi:
10.1109/TVCG.2015.2468091.
Abstract
We present a new visualization approach for displaying eye tracking data from multiple participants. We aim to show the spatio-temporal data of the gaze points in the context of the underlying image or video stimulus without occlusion. Our technique, denoted as gaze stripes, does not require the explicit definition of areas of interest but directly uses the image data around the gaze points, similar to thumbnails for images. A gaze stripe consists of a sequence of such gaze point images, oriented along a horizontal timeline. By displaying multiple aligned gaze stripes, it is possible to analyze and compare the viewing behavior of the participants over time. Since the analysis is carried out directly on the image data, expensive post-processing or manual annotation is not required. Therefore, not only can patterns and outliers in the participants' scanpaths be detected, but the context of the stimulus is available as well. Furthermore, our approach is especially well suited for dynamic stimuli due to the non-aggregated temporal mapping. Complementary views, i.e., markers, notes, screenshots, histograms, and results from automatic clustering, can be added to the visualization to display analysis results. We illustrate the usefulness of our technique on static and dynamic stimuli. Furthermore, we discuss the limitations and scalability of our approach in comparison to established visualization techniques.
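Since gaze stripes are built directly from the image data, the construction is easy to sketch: crop a thumbnail around each sampled gaze point and concatenate the crops along a horizontal timeline. The thumbnail size and sampling below are assumptions, and the frames are assumed to be larger than the thumbnails.

    # Sketch: build one gaze stripe by cropping a thumbnail around each sampled
    # gaze point and laying the crops out along a horizontal timeline.
    import numpy as np

    def gaze_stripe(frames, gaze_points, size=48):
        """frames: list of HxWx3 arrays; gaze_points: list of (x, y) per frame."""
        crops = []
        for frame, (x, y) in zip(frames, gaze_points):
            h, w, _ = frame.shape
            x0 = int(np.clip(x - size // 2, 0, w - size))
            y0 = int(np.clip(y - size // 2, 0, h - size))
            crops.append(frame[y0:y0 + size, x0:x0 + size])
        return np.concatenate(crops, axis=1)  # one row of thumbnails = stripe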
R. Netzel, M. Burch, and D. Weiskopf, “Interactive Scanpath-oriented Annotation of Fixations,”
Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), pp. 183–187, 2016, doi:
10.1145/2857491.2857498.
Abstract
In this short paper, we present a lightweight application for the interactive annotation of eye tracking data for both static and dynamic stimuli. The main functionality is the annotation of fixations that takes into account the scanpath and stimulus. Our visual interface allows the annotator to work through a sequence of fixations, while it shows the context of the scanpath in the form of previous and subsequent fixations. The context of the stimulus is included as a visual overlay. Our application supports automatic initial labeling according to areas of interest (AOIs), but is not dependent on AOIs. The software is easily configurable, supports user-defined annotation schemes, and fits into existing workflows of eye tracking experiments and their evaluation by providing import and export functionalities for data files.
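The automatic initial labeling according to AOIs can be sketched as follows: each fixation receives the label of the first axis-aligned AOI rectangle that contains it, and unmatched fixations remain unlabeled for manual annotation. The AOI representation is an assumption.

    # Sketch of AOI-based initial labeling: each fixation gets the label of the
    # first axis-aligned AOI rectangle that contains it. AOI format is assumed.
    def label_fixations(fixations, aois):
        """fixations: list of (x, y); aois: list of (label, x0, y0, x1, y1)."""
        labels = []
        for x, y in fixations:
            hit = next((name for name, x0, y0, x1, y1 in aois
                        if x0 <= x <= x1 and y0 <= y <= y1), None)
            labels.append(hit)  # None = unlabeled, to be annotated manually
        return labels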
M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” in
Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), vol. 2: IVAPP. SciTePress, 2016. doi:
10.5220/0005679601950202.
Abstract
Metro maps can be regarded as a particular version of information visualization. The goal is to produce readable and effective map designs. In this paper, we combine the expertise of design experts and visualization researchers to achieve this goal. The aesthetic design of the maps should play a major role: the designer's intention is to make them attractive to the human viewer so that the designs can be used most efficiently. The designs should prompt accurate actions by the user; in the case of a metro map, the user would be making journeys. We provide two views on metro map designs: one from a designer's point of view and one from a visualization expert's point of view. The focus of this work is to find a combination of both worlds from which both the designer and the visualization expert can benefit. To reach this goal, we first describe the designer's work when creating metro maps; we then look at how a visualization expert measures performance from an end-user perspective by tracking people's eyes while they answer a route-finding task on the previously designed maps.
K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,”
Information Visualization, vol. 15, no. 4, Art. no. 4, 2016, doi:
10.1177/1473871615609787.
Abstract
The application of eye tracking for the evaluation of humans' viewing behavior is a common approach in psychological research. So far, the use of this technique for the evaluation of visual analytics and visualization is less prominent. We investigate recent scientific publications from the main visualization and visual analytics conferences and journals, as well as related research fields, that include an evaluation by eye tracking. Furthermore, we provide an overview of evaluation goals that can be achieved by eye tracking and state-of-the-art analysis techniques for eye tracking data. Ideally, visual analytics leads to a mixed-initiative cognitive system where the mechanism of distribution is the interaction of the user with the visualization environment. Therefore, we also include a discussion of cognitive approaches and models to include the user in the evaluation process. Based on our review of the current use of eye tracking evaluation in our field and the cognitive theory, we propose directions for future research on evaluation methodology, leading to the grand challenge of developing an evaluation approach to the mixed-initiative cognitive system of visual analytics.
K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in
Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), vol. 1. ACM, 2016, pp. 11–18. doi:
10.1145/2857491.2857507.
Abstract
We facilitate the comparative visual analysis of eye tracking data from multiple participants with a visualization that represents the temporal changes of viewing behavior. Common approaches to visually analyze eye tracking data either occlude or ignore the underlying visual stimulus, impairing the interpretation of displayed measures. We introduce fixation-image charts: a new technique to display the temporal changes of fixations in the context of the stimulus without visual overlap between participants. Fixation durations, the distance and direction of saccades between consecutive fixations, as well as the stimulus context can be interpreted in one visual representation. Our technique is not limited to static stimuli, but can be applied to dynamic stimuli as well. Using fixation metrics and the visual similarity of stimulus regions, we complement our visualization technique with an interactive filter concept that allows for the identification of interesting fixation sequences without the time-consuming annotation of areas of interest. We demonstrate how our technique can be applied to different types of stimuli to perform a range of analysis tasks. Furthermore, we discuss advantages and shortcomings derived from a preliminary user study.
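The saccade quantities the chart encodes, distance and direction between consecutive fixations, are straightforward to derive; the input layout below is an assumption.

    # Sketch: derive the saccade distance and direction that fixation-image
    # charts encode between consecutive fixations. Input layout is assumed.
    import math

    def saccade_metrics(fixations):
        """fixations: list of (x, y, duration_ms); returns (dist, angle_deg)."""
        out = []
        for (x0, y0, _), (x1, y1, _) in zip(fixations, fixations[1:]):
            dist = math.hypot(x1 - x0, y1 - y0)
            angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
            out.append((dist, angle))
        return out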
R. Netzel and D. Weiskopf, “Hilbert Attention Maps for Visualizing Spatiotemporal Gaze Data,” in
Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS). IEEE, 2016, pp. 21–25. doi:
10.1109/ETVIS.2016.7851160.
Abstract
Attention maps, often in the form of heatmaps, are a common visualization approach to obtaining an overview of the spatial distribution of gaze data from eye tracking experiments. However, attention maps are not designed to let us easily analyze the temporal information of gaze data: they completely ignore temporal information by aggregating over time, or they use animation to build a sequence of attention maps. To overcome this issue, we introduce Hilbert attention maps: a 2D static visualization of the spatiotemporal distribution of gaze points. The visualization is based on the projection of the 2D spatial domain onto a space-filling Hilbert curve that is used as one axis of our new attention map; the other axis represents time. We visualize Hilbert attention maps either as dot displays or heatmaps. This 2D visualization works for data from individual participants or large groups of participants, it supports static and dynamic stimuli alike, and it does not require any preprocessing or definition of areas of interest. We demonstrate how our visualization allows analysts to identify spatiotemporal patterns of visual reading behavior, including attentional synchrony and smooth pursuit.
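The mapping itself is compact: quantize each gaze position to a grid, convert (x, y) to its index on a space-filling Hilbert curve, and plot that index against time. The conversion below is the standard iterative algorithm; the grid resolution is an assumption.

    # Sketch: map gaze positions to Hilbert-curve indices (one axis) and plot
    # them against time (the other axis). Grid resolution n is an assumption.
    def xy2d(n, x, y):
        """Hilbert index of cell (x, y) on an n x n grid; n a power of two."""
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if (x & s) > 0 else 0
            ry = 1 if (y & s) > 0 else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                      # rotate the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            s //= 2
        return d

    # gaze: list of (t, x, y) with x, y already quantized to a 256 x 256 grid
    def hilbert_points(gaze, n=256):
        return [(t, xy2d(n, x, y)) for t, x, y in gaze]  # scatter t vs. index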
A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller, “Multi-Similarity Matrices of Eye Movement Data,” in
Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS). IEEE, 2016, pp. 26–30. doi:
10.1109/ETVIS.2016.7851161.
T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual Analysis and Coding of Data-rich User Behavior,” in
Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), G. L. Andrienko, S. Liu, and J. T. Stasko, Eds. IEEE, 2016, pp. 141–150. doi:
10.1109/VAST.2016.7883520.
Abstract
Investigating user behavior involves abstracting low-level events to higher-level concepts. This requires an analyst to study individual user activities, assign codes which categorize behavior, and develop a consistent classification scheme. To better support this reasoning process of an analyst, we suggest a novel visual analytics approach which integrates rich user data including transcripts, videos, eye movement data, and interaction logs. Word-sized visualizations embedded into a tabular representation provide a space-efficient and detailed overview of user activities. An analyst assigns codes, grouped into code categories, as part of an interactive process. Filtering and searching help to select specific activities and focus the analysis. A comparison visualization summarizes results of coding and reveals relationships between codes. Editing features support efficient assignment, refinement, and aggregation of codes. We demonstrate the practical applicability and usefulness of our approach in a case study and describe expert feedback.
R. Netzel, M. Burch, and D. Weiskopf, “User Performance and Reading Strategies for Metro Maps: An Eye Tracking Study,”
Spatial Cognition and Computation, Special Issue: Eye Tracking for Spatial Research, 2016, doi:
10.1080/13875868.2016.1226839.
Abstract
We conducted a controlled empirical eye tracking study with 40 participants using schematic metro maps. The study focused on two aspects: determining different reading strategies and assessing user performance. We considered the following factors: color encoding (color vs. gray-scale), map complexity (three levels), and task difficulty (three levels). There was one type of task: find a route from a start to a target location and state the number of transfers that have to be performed. To identify reading strategies, we annotated fixations of scanpaths, computed a transition matrix of each annotated scanpath, and used these matrices as input to cluster scanpaths into groups of similar behavior. We show how these reading strategies relate to the geodesic structure of the scanpaths' fixations projected onto the geodesic line that connects start and target locations. The analysis of the eye tracking data is complemented by statistical inference working on two eye tracking metrics (average fixation duration and saccade length). User performance was evaluated with a statistical analysis of task correctness and completion time. Our study shows that the design factors have a significant impact on user task performance. Also, we were able to identify typical reading strategies like directly finding a path from start to target location. Often, participants check the correctness of their result multiple times by moving back and forth between start and target. Our findings also indicate that the choice of reading strategies does not depend on whether color or gray-scale encoding is used.
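The scanpath-clustering step described above can be sketched as follows: build a transition matrix from each annotated fixation sequence, flatten the matrices, and cluster them. Hierarchical clustering with Ward linkage is an assumption here, as the abstract does not prescribe the clustering method; the example labels are likewise illustrative.

    # Sketch: turn an annotated scanpath into a transition matrix and cluster
    # scanpaths on the flattened matrices. Clustering method is an assumption.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def transition_matrix(labels, states):
        """labels: annotated fixation sequence; states: ordered label set."""
        idx = {s: i for i, s in enumerate(states)}
        m = np.zeros((len(states), len(states)))
        for a, b in zip(labels, labels[1:]):
            m[idx[a], idx[b]] += 1
        return m / max(1, m.sum())  # normalize so scanpath lengths compare

    states = ["start", "route", "target"]
    paths = [["start", "route", "target", "route", "target"],
             ["start", "route", "target"],
             ["start", "target", "start", "target"]]
    feats = np.array([transition_matrix(p, states).ravel() for p in paths])
    groups = fcluster(linkage(feats, method="ward"), t=2, criterion="maxclust")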
K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-based Visualization,”
Computing in Science & Engineering, vol. 17, no. 5, Art. no. 5, 2015, doi:
10.1109/MCSE.2015.93.