Selected Paper Awards & Personal Awards

For more awards, please browse our news section.

All Publications

  1. 2024

    1. P. Gralka, C. Müller, M. Heinemann, G. Reina, D. Weiskopf, and T. Ertl, “Power Overwhelming: The One With the Oscilloscopes,” Journal of Visualization, Aug. 2024, doi: 10.1007/s12650-024-01001-0.
    2. Y. Wang, Y. Jiang, Z. Hu, C. Ruhdorfer, M. Bâce, and A. Bulling, “VisRecall++: Analysing and Predicting Visualisation Recallability from Gaze Behaviour,” Proc. ACM on Human-Computer Interaction (PACM HCI), vol. 8, pp. 1–18, Jul. 2024, doi: 10.1145/3655613.
    3. S. A. Vriend, S. Vidyapu, K.-T. Chen, and D. Weiskopf, “Which Experimental Design is Better Suited for VQA Tasks? Eye Tracking Study on Cognitive Load, Performance, and Gaze Allocations,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), Jun. 2024. [Online]. Available: https://arxiv.org/abs/2404.04036
    4. Y. Wang et al., “SalChartQA: Question-driven Saliency on Information Visualisations,” in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI), ACM, May 2024, pp. 1–14. doi: 10.1145/3613904.3642942.
    5. M. Jenadeleh, A. Heß, S. Hviid del Pin, E. Gamboa, M. Hirth, and D. Saupe, “Impact of feedback on crowdsourced visual quality assessment with paired comparisons,” in 2024 16th International Conference on Quality of Multimedia Experience (QoMEX), IEEE, May 2024, pp. 125–131. doi: 10.1109/qomex61742.2024.10598256.
    6. D. Saupe and S. Hviid del Pin, “National differences in image quality assessment: An investigation on three large-scale IQA datasets,” in 2024 16th International Conference on Quality of Multimedia Experience (QoMEX), IEEE, May 2024, pp. 214–220. doi: 10.1109/qomex61742.2024.10598250.
    7. M. Jenadeleh, R. Hamzaoui, U.-D. Reips, and D. Saupe, “Crowdsourced Estimation of Collective Just Noticeable Difference for Compressed Video with the Flicker Test and QUEST+,” IEEE Transactions on Circuits and Systems for Video Technology, pp. 1–1, May 2024, doi: 10.1109/tcsvt.2024.3402363.
    8. M. Kurzweg, Y. Weiss, M. O. Ernst, A. Schmidt, and K. Wolf, “Survey on Haptic Feedback through Sensory Illusions in Interactive Systems,” ACM Comput. Surv., vol. 56, no. 8, Art. no. 8, Apr. 2024, doi: 10.1145/3648353.
    9. Y. Xue et al., “Reducing Ambiguities in Line-Based Density Plots by Image-Space Colorization,” IEEE Transactions on Visualization & Computer Graphics, vol. 30, no. 1, Art. no. 1, Jan. 2024, [Online]. Available: https://www.computer.org/csdl/journal/tg/2024/01/10297597/1RyY1MBMcIo
    10. D. Klötzl, T. Krake, M. Becher, M. Koch, D. Weiskopf, and K. Kurzhals, “NMF-Based Analysis of Mobile Eye-Tracking Data,” in Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 2024, pp. 1–9. doi: 10.1145/3649902.3653518.
    11. L. Joos et al., “Evaluating Node Selection Techniques for Network Visualizations in Virtual Reality,” in ACM Symposium on Spatial User Interaction, New York, NY, USA: ACM, 2024, pp. 1–11. doi: 10.1145/3677386.3682102.
    12. L. Joos, B. Jäckl, D. A. Keim, M. T. Fischer, L. Peska, and J. Lokoč, “Known-Item Search in Video: An Eye Tracking-Based Study,” in Proceedings of the 2024 International Conference on Multimedia Retrieval (ICMR ’24), New York, NY, USA: ACM, 2024, pp. 311–319. doi: 10.1145/3652583.3658119.
    13. Y. Zhang, H. Williams, F. Schreiber, and K. Klein, “Visualising the Invisible: Exploring Approaches for Visual Analysis of Dynamic Airflow in Geographic Environments Using Sensor Data,” in Proceedings of the EuroVis Workshop on Visual Analytics 2024, Eindhoven, 2024. doi: 10.2312/eurova.20241117.
    14. S. Su et al., “Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model,” IEEE Transactions on Multimedia, vol. 26, pp. 2671–2685, 2024, doi: 10.1109/tmm.2023.3301276.
    15. K. Angerbauer et al., “Is it Part of Me? Exploring Experiences of Inclusive Avatar Use For Visible and Invisible Disabilities in Social VR,” in The 26th International ACM SIGACCESS Conference on Computers and Accessibility, vol. 64. New York, NY, USA: ACM, 2024, pp. 1–15. doi: 10.1145/3663548.3675601.
    16. C. Müller and T. Ertl, “Quantifying Performance Gains of DirectStorage for the Visualisation of Time-Dependent Particle Data Sets,” Journal of Visualization, 2024, doi: 10.1007/s12650-024-01036-3.
    17. M. M. Hamza, E. Ullah, A. Baggag, H. Bensmail, M. Sedlmair, and M. Aupetit, “ClustML: A measure of cluster pattern complexity in scatterplots learnt from human-labeled groupings,” Information Visualization, vol. 23, no. 2, Art. no. 2, 2024, doi: 10.1177/14738716231220536.
    18. Y. Wang, Q. Dai, M. Bâce, K. Klein, and A. Bulling, “Saliency3D: a 3D Saliency Dataset Collected on Screen,” in Proc. ACM International Symposium on Eye Tracking Research and Applications (ETRA), ACM, 2024, pp. 1–6. doi: 10.1145/3649902.3653350.
    19. F. Huth, M. Koch, M. Awad-Mohammed, K. Kurzhals, and D. Weiskopf, “Eye Tracking on Text Reading with Visual Enhancements,” in Symposium on Eye Tracking Research and Applications, New York, NY, USA: Association for Computing Machinery, 2024, p. 7. doi: 10.1145/3649902.3653521.
    20. D. Weiskopf, “Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in Light of Advanced AI,” in 2024 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), IEEE, 2024, pp. 119–128. doi: 10.1109/beliv64461.2024.00019.
    21. L. Xiao et al., “A Systematic Review of Ability-diverse Collaboration through Ability-based Lens in HCI,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, New York, NY, USA: ACM, 2024, pp. 1–21. doi: 10.1145/3613904.3641930.
    22. M. Jenadeleh et al., “An Image Quality Dataset with Triplet Comparisons for Multi-dimensional Scaling,” in 2024 16th International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2024. doi: 10.1109/qomex61742.2024.10598258.
    23. T. Krake, D. Klötzl, D. Hägele, and D. Weiskopf, “Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–16, 2024, doi: 10.1109/tvcg.2024.3364388.
    24. M. Becher, C. Müller, D. Sellenthin, T. Ertl, G. Reina, and D. Weiskopf, “Your Visualisations are Going Places: SciVis on Gaming Consoles,” in Proc. JapanVis, 2024.
    25. P. Eades et al., “CelticGraph: Drawing Graphs as Celtic Knots and Links,” in Graph Drawing and Network Visualization, M. A. Bekos and M. Chimani, Eds., Cham: Springer Nature Switzerland, 2024, pp. 18–35. doi: 10.1007/978-3-031-49272-3_2.
    26. V. Mikheev, R. Skukies, and B. Ehinger, “The Art of Brainwaves: A Survey on Event-Related Potential Visualization Practices,” Aperture Neuro, vol. 4, 2024, doi: 10.52294/001c.116386.
    27. M. Koch, N. Pathmanathan, D. Weiskopf, and K. Kurzhals, “How Deep Is Your Gaze? Leveraging Distance in Image-Based Gaze Analysis,” in Proceedings of the 2024 Symposium on Eye Tracking Research and Applications (ETRA ’24), New York, NY, USA: ACM, 2024, pp. 1–7. doi: 10.1145/3649902.3653349.
  2. 2023

    1. C. Beck and M. Köllner, “GHisBERT – Training BERT from scratch for lexical semantic investigations across historical German language stages,” in Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change, N. Tahmasebi, S. Montariol, H. Dubossarsky, A. Kutuzov, S. Hengchen, D. Alfter, F. Periti, and P. Cassotti, Eds., Singapore: Association for Computational Linguistics, Dec. 2023, pp. 33–45. [Online]. Available: https://aclanthology.org/2023.lchange-1.4
    2. F. Heyen, Q. Q. Ngo, and M. Sedlmair, “Visual Overviews for Sheet Music Structure,” in Proceedings of the 24th International Society for Music Information Retrieval Conference (ISMIR) 2023, ISMIR, Dec. 2023, pp. 692–699. doi: 10.5281/zenodo.10265383.
    3. J. Schmalfuß, L. Mehl, and A. Bruhn, “Distracting Downpour: Adversarial Weather Attacks for Motion Estimation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2023, pp. 10106–10116. [Online]. Available: https://openaccess.thecvf.com/content/ICCV2023/html/Schmalfuss_Distracting_Downpour_Adversarial_Weather_Attacks_for_Motion_Estimation_ICCV_2023_paper.html
    4. L. Hirsch, F. Müller, F. Chiossi, T. Benga, and A. M. Butz, “My Heart Will Go On: Implicitly Increasing Social Connectedness by Visualizing Asynchronous Players’ Heartbeats in VR Games,” Proc. ACM Hum.-Comput. Interact., vol. 7, Oct. 2023, doi: 10.1145/3611057.
    5. J. Zagermann, S. Hubenschmid, D. I. Fink, J. Wieland, H. Reiterer, and T. Feuchtner, “Challenges and Opportunities for Collaborative Immersive Analytics with Hybrid User Interfaces,” in 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Los Alamitos, CA, USA: IEEE Computer Society, Oct. 2023, pp. 191–195. doi: 10.1109/ISMAR-Adjunct60411.2023.00044.
    6. E. Sood, L. Shi, M. Bortoletto, Y. Wang, P. Müller, and A. Bulling, “Improving Neural Saliency Prediction with a Cognitive Model of Human Visual Attention,” in Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci), Jul. 2023, pp. 3639–3646. [Online]. Available: https://escholarship.org/uc/item/5968p71m
    7. G. Chen, H. Lin, O. Wiedemann, and D. Saupe, “Localization of Just Noticeable Difference for Image Compression,” in 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), Jun. 2023, pp. 61–66. doi: 10.1109/QoMEX58391.2023.10178653.
    8. X. Zhao et al., “CUDAS: Distortion-Aware Saliency Benchmark,” IEEE Access, vol. 11, pp. 58025–58036, Jun. 2023, doi: 10.1109/access.2023.3283344.
    9. L. Mehl, J. Schmalfuß, A. Jahedi, Y. Nalivayko, and A. Bruhn, “Spring: A High-Resolution High-Detail Dataset and Benchmark for Scene Flow, Optical Flow and Stereo,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2023, pp. 4981–4991. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2023/html/Mehl_Spring_A_High-Resolution_High-Detail_Dataset_and_Benchmark_for_Scene_Flow_CVPR_2023_paper.html
    10. K.-T. Chen et al., “Gazealytics: A Unified and Flexible Visual Toolkit for Exploratory and Comparative Gaze Analysis,” in ETRA ’23: Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, Association for Computing Machinery, May 2023, pp. 1–7. doi: 10.1145/3588015.3589844.
    11. Y. Wang, M. Bâce, and A. Bulling, “Scanpath Prediction on Information Visualisations,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–15, Feb. 2023, doi: 10.1109/TVCG.2023.3242293.
    12. M. Kern, S. Jaeger-Honz, F. Schreiber, and B. Sommer, “APL@voro—interactive visualization and analysis of cell membrane simulations,” Bioinformatics, vol. 39, no. 2, Art. no. 2, Feb. 2023, doi: 10.1093/bioinformatics/btad083.
    13. L. Mehl, A. Jahedi, J. Schmalfuß, and A. Bruhn, “M-FUSE: Multi-frame Fusion for Scene Flow Estimation,” in Proc. Winter Conference on Applications of Computer Vision (WACV), Jan. 2023. doi: 10.48550/arXiv.2207.05704.
    14. T. Kosch, J. Karolus, J. Zagermann, H. Reiterer, A. Schmidt, and P. W. Woźniak, “A Survey on Measuring Cognitive Workload in Human-Computer Interaction,” ACM Comput. Surv., Jan. 2023, doi: 10.1145/3582272.
    15. N. Rodrigues, C. Schulz, S. Döring, D. Baumgartner, T. Krake, and D. Weiskopf, “Relaxed Dot Plots: Faithful Visualization of Samples and Their Distribution,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Art. no. 1, Jan. 2023, doi: 10.1109/TVCG.2022.3209429.
    16. M. Koch, K. Kurzhals, M. Burch, and D. Weiskopf, “Visualization Psychology for Eye Tracking Evaluation,” in Visualization Psychology, D. Albers Szafir, R. Borgo, M. Chen, D. J. Edwards, B. Fisher, and L. Padilla, Eds., Cham: Springer International Publishing, 2023, pp. 243–260. doi: 10.1007/978-3-031-34738-2_10.
    17. R. Bauer et al., “Visual Ensemble Analysis of Fluid Flow in Porous Media across Simulation Codes and Experiment,” Transport in Porous Media, 2023, doi: 10.1007/s11242-023-02019-y.
    18. P. Paetzold, R. Kehlbeck, H. Strobelt, Y. Xue, S. Storandt, and O. Deussen, “RectEuler: Visualizing Intersecting Sets using Rectangles,” Computer Graphics Forum, vol. 42, no. 3, Art. no. 3, 2023, doi: 10.1111/cgf.14814.
    19. F. L. Dennig, M. Miller, D. A. Keim, and M. El-Assady, “FS/DS: A Theoretical Framework for the Dual Analysis of Feature Space and Data Space,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–17, 2023, [Online]. Available: https://ieeexplore.ieee.org/document/10158903
    20. M. Gleicher, M. Riveiro, T. von Landesberger, O. Deussen, R. Chang, and C. Gillman, “A Problem Space for Designing Visualizations,” IEEE Computer Graphics and Applications, vol. 43, no. 4, Art. no. 4, 2023, [Online]. Available: https://ieeexplore.ieee.org/document/10179119
    21. S. Hubenschmid, J. Zagermann, D. Leicht, H. Reiterer, and T. Feuchtner, “ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), New York, NY, USA: ACM, 2023. [Online]. Available: https://kops.uni-konstanz.de/server/api/core/bitstreams/6eecac2f-666f-4399-bec3-d8e607331164/content
    22. M. Jenadeleh, J. Zagermann, H. Reiterer, U.-D. Reips, R. Hamzaoui, and D. Saupe, “Relaxed forced choice improves performance of visual quality assessment methods,” in 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), 2023, pp. 37–42. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/10178467
    23. E. Pangratz, F. Chiossi, S. Villa, K. Gramann, and L. Gehrke, “Towards an Implicit Metric of Sensory-Motor Accuracy: Brain Responses to Auditory Prediction Errors in Pianists,” in Proceedings of the 15th Conference on Creativity and Cognition, New York, NY, USA: Association for Computing Machinery, 2023, pp. 129–138. doi: 10.1145/3591196.3593340.
    24. A. V. Reinschluessel and J. Zagermann, “Exploring Hybrid User Interfaces for Surgery Planning,” in 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2023, pp. 208–210. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/10322244
    25. M. Testolina, V. Hosu, M. Jenadeleh, D. Lazzarotto, D. Saupe, and T. Ebrahimi, “JPEG AIC-3 Dataset: Towards Defining the High Quality to Nearly Visually Lossless Quality Range,” in 15th International Conference on Quality of Multimedia Experience (QoMEX), 2023, pp. 55–60. [Online]. Available: https://ieeexplore.ieee.org/document/10178554
    26. A. Zaky, J. Zagermann, H. Reiterer, and T. Feuchtner, “Opportunities and Challenges of Hybrid User Interfaces for Optimization of Mixed Reality Interfaces,” in 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2023, pp. 215–219. [Online]. Available: https://ieeexplore.ieee.org/document/10322176
    27. M. Xue et al., “Taurus: Towards a Unified Force Representation and Universal Solver for Graph Layout,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Art. no. 1, 2023, doi: 10.1109/TVCG.2022.3209371.
    28. M. Butt, L. Carnesale, and T. Ahmed, “Experiencers vs. agents in Urdu/Hindi nominalized verbs of perception,” in Proceedings of the Lexical Functional Grammar Conference, vol. 28, 2023, pp. 90–113. [Online]. Available: https://lfg-proceedings.org/lfg/index.php/main/article/view/46
    29. A. Jahedi, M. Luz, M. Rivinius, L. Mehl, and A. Bruhn, “MS-RAFT+: High Resolution Multi-Scale RAFT,” International Journal of Computer Vision, 2023, doi: 10.1007/s11263-023-01930-7.
    30. S. Hubenschmid, D. I. Fink, J. Zagermann, J. Wieland, H. Reiterer, and T. Feuchtner, “Colibri: A Toolkit for Rapid Prototyping of Networking Across Realities,” in 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2023, pp. 9–13. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/10322249
    31. J. Wieland, “Designing and Evaluating Interactions for Handheld AR,” in Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces, New York, NY, USA: Association for Computing Machinery, 2023, pp. 100–103. doi: 10.1145/3626485.3626555.
    32. W. Kerle-Malcharek, S. P. Feyer, F. Schreiber, and K. Klein, “GAV-VR: An Extensible Framework for Graph Analysis and Visualisation in Virtual Reality,” in ICAT-EGVE 2023 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, J.-M. Normand, M. Sugimoto, and V. Sundstedt, Eds., The Eurographics Association, 2023. doi: 10.2312/egve.20231321.
    33. J. Schmalfuß, E. Scheurer, H. Zhao, N. Karantzas, A. Bruhn, and D. Labate, “Blind image inpainting with sparse directional filter dictionaries for lightweight CNNs,” Journal of Mathematical Imaging and Vision (JMIV), vol. 65, pp. 323–339, 2023, doi: 10.1007/s10851-022-01119-6.
    34. N. Doerr, K. Angerbauer, M. Reinelt, and M. Sedlmair, “Bees, Birds and Butterflies: Investigating the Influence of Distractors on Visual Attention Guidance Techniques,” in Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery, 2023. doi: 10.1145/3544549.3585816.
    35. F. Draxler, A. Schmidt, and L. L. Chuang, “Relevance, Effort, and Perceived Quality: Language Learners’ Experiences with AI-Generated Contextually Personalized Learning Material,” in Proceedings of the 2023 ACM Designing Interactive Systems Conference, New York, NY, USA: Association for Computing Machinery, 2023, pp. 2249–2262. doi: 10.1145/3563657.3596112.
    36. T. Ge et al., “Optimally Ordered Orthogonal Neighbor Joining Trees for Hierarchical Cluster Analysis,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–13, 2023, [Online]. Available: https://ieeexplore.ieee.org/document/10147241
    37. K.-T. Chen et al., “Reading Strategies for Graph Visualizations That Wrap Around in Torus Topology,” in Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, New York, NY, USA: Association for Computing Machinery, 2023. doi: 10.1145/3588015.3589841.
    38. C. Morariu, A. Bibal, R. Cutura, B. Frénay, and M. Sedlmair, “Predicting User Preferences of Dimensionality Reduction Embedding Quality,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Art. no. 1, 2023, [Online]. Available: https://ieeexplore.ieee.org/document/9904619
    39. C. Schneegass, M. L. Wilson, H. A. Maior, F. Chiossi, A. L. Cox, and J. Wiese, “The Future of Cognitive Personal Informatics,” in Proceedings of the 25th International Conference on Mobile Human-Computer Interaction, New York, NY, USA: Association for Computing Machinery, 2023. doi: 10.1145/3565066.3609790.
    40. W. Teramoto and M. O. Ernst, “Effects of invisible lip movements on phonetic perception,” Scientific Reports, vol. 13, no. 1, Art. no. 1, 2023, doi: 10.1038/s41598-023-33791-y.
  3. 2022

    1. D. I. Fink, J. Zagermann, H. Reiterer, and H.-C. Jetter, “Re-Locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces,” Proc. ACM Hum.-Comput. Interact., vol. 6, Nov. 2022, doi: 10.1145/3567709.
    2. A. Jahedi, L. Mehl, M. Rivinius, and A. Bruhn, “Multi-Scale RAFT: combining hierarchical concepts for learning-based optical flow estimation,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), Oct. 2022, pp. 1236–1240. doi: 10.48550/arXiv.2207.12163.
    3. C. Müller, M. Heinemann, D. Weiskopf, and T. Ertl, “Power Overwhelming: Quantifying the Energy Cost of Visualisation,” in Proceedings of the 2022 IEEE Workshop on Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), Oct. 2022, pp. 38–46. doi: 10.1109/BELIV57783.2022.00009.
    4. K. Angerbauer and M. Sedlmair, “Toward Inclusion and Accessibility in Visualization Research: Speculations on Challenges, Solution Strategies, and Calls for Action (Position Paper),” in 2022 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), Oct. 2022, pp. 20–27. [Online]. Available: https://ieeexplore.ieee.org/document/9978448
    5. J. Schmalfuß, P. Scholze, and A. Bruhn, “A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow,” Proceedings of the European Conference on Computer Vision (ECCV), Oct. 2022, doi: 10.1007/978-3-031-20047-2_11.
    6. H. Lin, H. Men, Y. Yan, J. Ren, and D. Saupe, “Crowdsourced Quality Assessment of Enhanced Underwater Images - a Pilot Study,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), IEEE, Sep. 2022, pp. 1–4. [Online]. Available: https://ieeexplore.ieee.org/document/9900904
    7. P. Schäfer, N. Rodrigues, D. Weiskopf, and S. Storandt, “Group Diagrams for Simplified Representation of Scanpaths,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), ACM, Aug. 2022. doi: 10.1145/3554944.3554971.
    8. S. Dosdall, K. Angerbauer, L. Merino, M. Sedlmair, and D. Weiskopf, “Toward In-Situ Authoring of Situated Visualization with Chorded Keyboards,” in 15th International Symposium on Visual Information Communication and Interaction, VINCI 2022, Chur, Switzerland, August 16-18, 2022, M. Burch, G. Wallner, and D. Limberger, Eds., ACM, Aug. 2022, pp. 1–5. doi: 10.1145/3554944.3554970.
    9. M. Zameshina et al., “Fairness in generative modeling: do it unsupervised!,” in Proceedings of the Genetic and Evolutionary Computation Conference Companion, ACM, Jul. 2022, pp. 320–323. doi: 10.1145/3520304.3528992.
    10. P. Balestrucci, D. Wiebusch, and M. O. Ernst, “ReActLab: A Custom Framework for Sensorimotor Experiments ‘in-the-wild,’” Frontiers in Psychology, vol. 13, Jun. 2022, doi: 10.3389/fpsyg.2022.906643.
    11. Y. Wang, M. Koch, M. Bâce, D. Weiskopf, and A. Bulling, “Impact of Gaze Uncertainty on AOIs in Information Visualisations,” in 2022 Symposium on Eye Tracking Research and Applications, ACM, Jun. 2022, pp. 1–6. doi: 10.1145/3517031.3531166.
    12. M. Koch, D. Weiskopf, and K. Kurzhals, “A Spiral into the Mind: Gaze Spiral Visualization for Mobile Eye Tracking,” Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 5, no. 2, Art. no. 2, May 2022, doi: 10.1145/3530795.
    13. G. Tkachev, R. Cutura, M. Sedlmair, S. Frey, and T. Ertl, “Metaphorical Visualization: Mapping Data to Familiar Concepts,” in CHI Conference on Human Factors in Computing Systems Extended Abstracts, ACM, Apr. 2022, pp. 1–10. doi: 10.1145/3491101.3516393.
    14. M. Philipp, N. Bacher, S. Sauer, F. Mathis-Ullrich, and A. Bruhn, “From Chairs To Brains: Customizing Optical Flow For Surgical Activity Localization,” in Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), IEEE, Mar. 2022, pp. 1–5. [Online]. Available: https://ieeexplore.ieee.org/document/9761704
    15. F. Petersen, B. Goldluecke, O. Deussen, and H. Kuehne, “Style Agnostic 3D Reconstruction via Adversarial Style Transfer,” in 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), IEEE, Jan. 2022, pp. 2273–2282. [Online]. Available: http://dblp.uni-trier.de/db/conf/wacv/wacv2022.html#PetersenGDK22
    16. K. Klein, M. Sedlmair, and F. Schreiber, “Immersive Analytics: An Overview,” it - Information Technology, vol. 64, pp. 155–168, 2022, doi: 10.1515/itit-2022-0037.
    17. T. Krake, M. von Scheven, J. Gade, M. Abdelaal, D. Weiskopf, and M. Bischoff, “Efficient Update of Redundancy Matrices for Truss and Frame Structures,” Journal of Theoretical, Computational and Applied Mechanics, 2022, [Online]. Available: https://jtcam.episciences.org/10398
    18. K. Angerbauer et al., “Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery, 2022. doi: 10.1145/3491102.3502133.
    19. F. Chiossi et al., “Adapting visualizations and interfaces to the user,” it - Information Technology, vol. 64, pp. 133–143, 2022, doi: 10.1515/itit-2022-0035.
    20. D. Hägele, T. Krake, and D. Weiskopf, “Uncertainty-Aware Multidimensional Scaling,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Art. no. 1, 2022, doi: 10.1109/TVCG.2022.3209420.
    21. G. Richer, A. Pister, M. Abdelaal, J.-D. Fekete, M. Sedlmair, and D. Weiskopf, “Scalability in Visualization,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–15, 2022.
    22. D. Weiskopf, “Uncertainty Visualization: Concepts, Methods, and Applications in Biological Data Visualization,” Frontiers in Bioinformatics, vol. 2, 2022, doi: 10.3389/fbinf.2022.793819.
    23. M. Becher et al., “Situated Visual Analysis and Live Monitoring for Manufacturing,” IEEE Computer Graphics and Applications, pp. 1–1, 2022.
    24. T. Krake, D. Klötzl, B. Eberhardt, and D. Weiskopf, “Constrained Dynamic Mode Decomposition,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–11, 2022, doi: 10.1109/tvcg.2022.3209437.
    25. J. Görtler et al., “Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery, 2022, pp. 1–13. doi: 10.1145/3491102.3501823.
    26. Y. Zhang, K. Klein, O. Deussen, T. Gutschlag, and S. Storandt, “Robust Visualization of Trajectory Data,” it - Information Technology, vol. 64, pp. 181–191, 2022, doi: 10.1515/itit-2022-0036.
    27. A. Niarakis et al., “Addressing barriers in comprehensiveness, accessibility, reusability, interoperability and reproducibility of computational models in systems biology,” Briefings in bioinformatics, vol. 23, no. 4, Art. no. 4, 2022.
    28. S. Hubenschmid et al., “ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies,” in CHI Conference on Human Factors in Computing Systems (CHI ’22), New York, NY: ACM, 2022, pp. 1–20. doi: 10.1145/3491102.3517550.
    29. J. Schmalfuß, L. Mehl, and A. Bruhn, “Attacking Motion Estimation with Adversarial Snow,” in Proc. ECCV Workshop on Adversarial Robustness in the Real World (AROW), 2022. [Online]. Available: https://arxiv.org/abs/2210.11242
    30. P. Fleck, A. Sousa Calepso, S. Hubenschmid, M. Sedlmair, and D. Schmalstieg, “RagRug: A Toolkit for Situated Analytics,” IEEE Transactions on Visualization and Computer Graphics, 2022, [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/35254986/
    31. N. Rodrigues, L. Shao, J. J. Yan, T. Schreck, and D. Weiskopf, “Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection,” in 2022 Symposium on Eye Tracking Research and Applications, New York, NY, USA: Association for Computing Machinery, 2022, pp. 59:1–59:7. doi: 10.1145/3517031.3531165.
    32. T. Krake, A. Bruhn, B. Eberhardt, and D. Weiskopf, “Efficient and Robust Background Modeling with Dynamic Mode Decomposition,” Journal of Mathematical Imaging and Vision, 2022, doi: 10.1007/s10851-022-01068-0.
    33. F. Götz-Hahn, V. Hosu, and D. Saupe, “Critical Analysis on the Reproducibility of Visual Quality Assessment Using Deep Features,” PLoS ONE, vol. 17, no. 8, Art. no. 8, 2022, [Online]. Available: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0269715
    34. S. Frey et al., “Parameter Adaptation In Situ: Design Impacts and Trade-Offs,” in In Situ Visualization for Computational Science, H. Childs, J. C. Bennett, and C. Garth, Eds., Cham: Springer International Publishing, 2022, pp. 159–182. doi: 10.1007/978-3-030-81627-8_8.
    35. F. Schreiber and D. Weiskopf, “Quantitative Visual Computing,” it - Information Technology, vol. 64, pp. 119–120, 2022, doi: 10.1515/itit-2022-0048.
    36. D. Dietz et al., “Walk This Beam: Impact of Different Balance Assistance Strategies and Height Exposure on Performance and Physiological Arousal in VR,” in 28th ACM Symposium on Virtual Reality Software and Technology, 2022, pp. 1–12. doi: 10.1145/3562939.3567818.
    37. M. Abdelaal, N. D. Schiele, K. Angerbauer, K. Kurzhals, M. Sedlmair, and D. Weiskopf, “Supplemental Materials for: Comparative Evaluation of Bipartite, Node-Link, and Matrix-Based Network Representations,” 2022, DaRUS. [Online]. Available: https://darus.uni-stuttgart.de/citation?persistentId=doi:10.18419/darus-3100
    38. V. Bruder, M. Larsen, T. Ertl, H. Childs, and S. Frey, “A Hybrid In Situ Approach for Cost Efficient Image Database Generation,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2022, [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9765476
    39. H. Tarner, V. Bruder, T. Ertl, S. Frey, and F. Beck, “Visually Comparing Rendering Performance from Multiple Perspectives,” in Vision, Modeling, and Visualization, J. Bender, M. Botsch, and D. A. Keim, Eds., The Eurographics Association, 2022. doi: 10.2312/vmv.20221211.
    40. M. Abdelaal, N. D. Schiele, K. Angerbauer, K. Kurzhals, M. Sedlmair, and D. Weiskopf, “Comparative Evaluation of Bipartite, Node-Link, and Matrix-Based Network Representations,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–11, 2022.
    41. J. Lou, H. Lin, D. Marshall, D. Saupe, and H. Liu, “TranSalNet: Towards perceptually relevant visual saliency prediction,” Neurocomputing, vol. 494, pp. 455–467, 2022, [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231222004714
    42. L. Joos, S. Jaeger-Honz, F. Schreiber, D. A. Keim, and K. Klein, “Visual Comparison of Networks in VR,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 11, Art. no. 11, 2022, [Online]. Available: https://ieeexplore.ieee.org/document/9873980
    43. Y. Wang, C. Jiao, M. Bâce, and A. Bulling, “VisRecall: Quantifying Information Visualisation Recallability Via Question Answering,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 12, Art. no. 12, 2022, [Online]. Available: https://ieeexplore.ieee.org/document/9855227
    44. D. Bienroth et al., “Spatially resolved transcriptomics in immersive environments,” Visual Computing for Industry, Biomedicine, and Art, vol. 5, no. 1, Art. no. 1, 2022, doi: 10.1186/s42492-021-00098-6.
    45. Q. Q. Ngo, F. L. Dennig, D. A. Keim, and M. Sedlmair, “Machine Learning Meets Visualization – Experiences and Lessons Learned,” it - Information Technology, vol. 64, pp. 169–180, 2022, doi: 10.1515/itit-2022-0034.
    46. D. Hägele et al., “Uncertainty Visualization: Fundamentals and Recent Developments,” it - Information Technology, vol. 64, pp. 121–132, 2022, doi: 10.1515/itit-2022-0033.
    47. D. Klötzl, T. Krake, Y. Zhou, I. Hotz, B. Wang, and D. Weiskopf, “Local bilinear computation of Jacobi sets,” The Visual Computer, vol. 38, no. 9, Art. no. 9, 2022, doi: 10.1007/s00371-022-02557-4.
    48. H. Lin et al., “Large-Scale Crowdsourced Subjective Assessment of Picturewise Just Noticeable Difference,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 9, Art. no. 9, 2022, [Online]. Available: https://ieeexplore.ieee.org/document/9745537
    49. F. Petersen, B. Goldluecke, C. Borgelt, and O. Deussen, “GenDR: A Generalized Differentiable Renderer,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 3992–4001. doi: 10.1109/CVPR52688.2022.00397.
    50. F. Chiossi, R. Welsch, S. Villa, L. L. Chuang, and S. Mayer, “Virtual Reality Adaptation Using Electrodermal Activity to Support the User Experience,” Big Data and Cognitive Computing, vol. 6, no. 2, Art. no. 2, 2022, [Online]. Available: https://www.mdpi.com/2504-2289/6/2/55
    51. A. Huang, P. Knierim, F. Chiossi, L. L. Chuang, and R. Welsch, “Proxemics for Human-Agent Interaction in Augmented Reality,” in CHI Conference on Human Factors in Computing Systems, 2022, pp. 1–13. doi: 10.1145/3491102.3517593.
    52. T. Kosch, R. Welsch, L. L. Chuang, and A. Schmidt, “The Placebo Effect of Artificial Intelligence in Human-Computer Interaction,” ACM Transactions on Computer-Human Interaction, 2022, doi: 10.1145/3529225.
    53. J. Zagermann et al., “Complementary Interfaces for Visual Computing,” it - Information Technology, vol. 64, pp. 145–154, 2022, doi: 10.1515/itit-2022-0031.
    54. D. Garkov, C. Müller, M. Braun, D. Weiskopf, and F. Schreiber, “Research Data Curation in Visualization: Position Paper,” in 2022 IEEE Workshop on Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), IEEE, 2022. doi: 10.1109/beliv57783.2022.00011.
    55. R. Kehlbeck, J. Görtler, Y. Wang, and O. Deussen, “SPEULER: Semantics-preserving Euler Diagrams,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 1, Art. no. 1, 2022, [Online]. Available: https://www.computer.org/csdl/journal/tg/2022/01/09552459/1xibZ9AqsLu
    56. C. Schneegass, V. Füseschi, V. Konevych, and F. Draxler, “Investigating the Use of Task Resumption Cues to Support Learning in Interruption-Prone Environments,” Multimodal Technologies and Interaction, vol. 6, no. 1, Art. no. 1, 2022, [Online]. Available: https://www.mdpi.com/2414-4088/6/1/2
  4. 2021

    1. C. Schulz et al., “Multi-Class Inverted Stippling,” ACM Trans. Graph., vol. 40, no. 6, Art. no. 6, Dec. 2021, doi: 10.1145/3478513.3480534.
    2. B. Roziere et al., “EvolGAN: Evolutionary Generative Adversarial Networks,” in Computer Vision -- ACCV 2020, Cham: Springer International Publishing, Nov. 2021, pp. 679–694. [Online]. Available: https://openaccess.thecvf.com/content/ACCV2020/html/Roziere_EvolGAN_Evolutionary_Generative_Adversarial_Networks_ACCV_2020_paper.html
    3. K. Klein, D. Garkov, S. Rütschlin, T. Böttcher, and F. Schreiber, “QSDB—a graphical Quorum Sensing Database,” Database, vol. 2021, no. 2021, Art. no. 2021, Nov. 2021, doi: 10.1093/database/baab058.
    4. R. Sevastjanova, A.-L. Kalouli, C. Beck, H. Schäfer, and M. El-Assady, “Explaining Contextualization in Language Models using Visual Analytics,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online: Association for Computational Linguistics, Aug. 2021, pp. 464–476. [Online]. Available: https://aclanthology.org/2021.acl-long.39
    5. M. Aichem et al., “Visual exploration of large metabolic models,” Bioinformatics, vol. 37, no. 23, Art. no. 23, May 2021, doi: 10.1093/bioinformatics/btab335.
    6. K. Lu et al., “Palettailor: Discriminable Colorization for Categorical Data,” IEEE Transactions on Visualization & Computer Graphics, vol. 27, no. 2, Art. no. 2, Feb. 2021, [Online]. Available: https://ieeexplore.ieee.org/document/9222351
    7. P. Balestrucci, V. Maffei, F. Lacquaniti, and A. Moscatelli, “The Effects of Visual Parabolic Motion on the Subjective Vertical and on Interception,” Neuroscience, vol. 453, pp. 124–137, Jan. 2021, [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S0306452220306424
    8. T. Krake, S. Reinhardt, M. Hlawatsch, B. Eberhardt, and D. Weiskopf, “Visualization and Selection of Dynamic Mode Decomposition Components for Unsteady Flow,” Visual Informatics, vol. 5, no. 3, Art. no. 3, 2021, [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2468502X21000309
    9. C. Krauter, J. Vogelsang, A. Sousa Calepso, K. Angerbauer, and M. Sedlmair, “Don’t Catch It: An Interactive Virtual-Reality Environment to Learn About COVID-19 Measures Using Gamification Elements,” in Mensch und Computer, ACM, 2021, pp. 593–596. doi: 10.1145/3473856.3474031.
    10. C. Morariu, A. Bibal, R. Cutura, B. Frénay, and M. Sedlmair, “DumbleDR: Predicting User Preferences of Dimensionality Reduction Projection Quality,” 2021. [Online]. Available: https://arxiv.org/abs/2105.09275
    11. H. Ben Lahmar and M. Herschel, “Collaborative filtering over evolution provenance data for interactive visual data exploration,” Information Systems, vol. 95, p. 101620, 2021, doi: 10.1016/j.is.2020.101620.
    12. F. L. Dennig, M. T. Fischer, M. Blumenschein, J. Fuchs, D. A. Keim, and E. Dimara, “ParSetgnostics: Quality Metrics for Parallel Sets,” Computer Graphics Forum, vol. 40, no. 3, Art. no. 3, 2021, doi: 10.1111/cgf.14314.
    13. S. Su, V. Hosu, H. Lin, Y. Zhang, and D. Saupe, “KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects,” in 32nd British Machine Vision Conference, 2021, pp. 1–12. [Online]. Available: https://www.bmvc2021-virtualconference.com/assets/papers/0868.pdf
    14. F. Draxler, C. Schneegass, J. Safranek, and H. Hussmann, “Why Did You Stop? - Investigating Origins and Effects of Interruptions during Mobile Language Learning,” in Mensch Und Computer 2021, New York, NY, USA: Association for Computing Machinery, 2021, pp. 21–33. doi: 10.1145/3473856.3473881.
    15. R. Cutura, C. Morariu, Z. Cheng, Y. Wang, D. Weiskopf, and M. Sedlmair, “Hagrid — Gridify Scatterplots with Hilbert and Gosper Curves,” in The 14th International Symposium on Visual Information Communication and Interaction, New York, NY, USA: Association for Computing Machinery, 2021, pp. 1:1–1:8. doi: 10.1145/3481549.3481569.
    16. G. J. Rijken et al., “Illegible Semantics: Exploring the Design Space of Metal Logos,” in IEEE VIS alt.VIS Workshop, 2021. [Online]. Available: https://arxiv.org/abs/2109.01688
    17. R. Cutura, K. Angerbauer, F. Heyen, N. Hube, and M. Sedlmair, “DaRt: Generative Art using Dimensionality Reduction Algorithms,” in 2021 IEEE VIS Arts Program (VISAP), IEEE, 2021, pp. 59–72. [Online]. Available: https://ieeexplore.ieee.org/document/9622987
    18. K. Klein, M. Aichem, Y. Zhang, S. Erk, B. Sommer, and F. Schreiber, “TEAMwISE: synchronised immersive environments for exploration and analysis of animal behaviour,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00746-2.
    19. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery, 2021. doi: 10.1145/3411764.3445298.
    20. B. Roziere et al., “Tarsier: Evolving Noise Injection in Super-Resolution GANs,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 7028–7035. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9413318
    21. S. Hubenschmid, J. Zagermann, D. I. Fink, J. Wieland, T. Feuchtner, and H. Reiterer, “Towards Asynchronous Hybrid User Interfaces for Cross-Reality Interaction,” in ISS’21 Workshop Proceedings: “Transitional Interfaces in Mixed and Cross-Reality: A new frontier?,” H.-C. Jetter, J.-H. Schröder, J. Gugenheimer, M. Billinghurst, C. Anthes, M. Khamis, and T. Feuchtner, Eds., 2021. [Online]. Available: https://kops.uni-konstanz.de/bitstream/handle/123456789/55453/Hubenschmid_2-84mm0sggczq02.pdf?sequence=1&isAllowed=y
    22. S. Giebenhain and B. Goldlücke, “AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations,” in 2021 International Conference on 3D Vision (3DV), 2021, pp. 1054–1064. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9665836
    23. D. Bethge et al., “VEmotion: Using Driving Context for Indirect Emotion Prediction in Real-Time,” in The 34th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA: Association for Computing Machinery, 2021, pp. 638–651. doi: 10.1145/3472749.3474775.
    24. J. Bernard, M. Hutter, M. Sedlmair, M. Zeppelzauer, and T. Munzner, “A Taxonomy of Property Measures to Unify Active Learning and Human-centered Approaches to Data Labeling,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 11, pp. 1–42, 2021, doi: 10.1145/3439333.
    25. T. Müller, C. Schulz, and D. Weiskopf, “Adaptive Polygon Rendering for Interactive Visualization in the Schwarzschild Spacetime,” European Journal of Physics, vol. 43, no. 1, Art. no. 1, 2021, doi: 10.1088/1361-6404/ac2b36.
    26. H. Booth and C. Beck, “Verb-second and Verb-first in the History of Icelandic,” Journal of Historical Syntax, vol. 5, no. 27, Art. no. 27, 2021, [Online]. Available: https://ojs.ub.uni-konstanz.de/hs/index.php/hs/article/view/112
    27. F. Götz-Hahn, V. Hosu, H. Lin, and D. Saupe, “KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild,” IEEE Access, vol. 9, pp. 72139–72160, 2021, doi: 10.1109/ACCESS.2021.3077642.
    28. J. Wieland, J. Zagermann, J. Müller, and H. Reiterer, “Separation, Composition, or Hybrid? – Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality,” in 2021 IEEE International Symposium on Mixed and Augmented Reality, Piscataway, NJ: IEEE, 2021, pp. 403–412. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-ahkg9sntr33e8
    29. Y. Chen, K. C. Kwan, L.-Y. Wei, and H. Fu, “Autocomplete Repetitive Stroking with Image Guidance,” in SIGGRAPH Asia 2021 Technical Communications, New York, NY, USA: Association for Computing Machinery, 2021. doi: 10.1145/3478512.3488595.
    30. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “ProSeCo: Visual analysis of class separation measures and dataset characteristics,” Computers & Graphics, vol. 96, pp. 48–60, 2021, [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0097849321000406
    31. M. M. Abbas, E. Ullah, A. Baggag, H. Bensmail, M. Sedlmair, and M. Aupetit, “ClustRank: A Visual Quality Measure Trained on Perceptual Data for Sorting Scatterplots by Cluster Patterns,” 2021. [Online]. Available: https://arxiv.org/pdf/2106.00599.pdf
    32. M. Kraus, K. Klein, J. Fuchs, D. A. Keim, F. Schreiber, and M. Sedlmair, “The Value of Immersive Visualization,” IEEE Computer Graphics and Applications (CG&A), vol. 41, no. 4, Art. no. 4, 2021, doi: 10.1109/MCG.2021.3075258.
    33. F. Frieß, M. Becher, G. Reina, and T. Ertl, “Amortised Encoding for Large High-Resolution Displays,” in 2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV), 2021, pp. 53–62. [Online]. Available: https://ieeexplore.ieee.org/document/9623235
    34. L. Zhou, C. R. Johnson, and D. Weiskopf, “Data-Driven Space-Filling Curves,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030473.
    35. L. Mehl, C. Beschle, A. Barth, and A. Bruhn, “An Anisotropic Selection Scheme for Variational Optical Flow Methods with Order-Adaptive Regularisation,” in Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), Springer, 2021, pp. 140–152. [Online]. Available: https://link.springer.com/chapter/10.1007%2F978-3-030-75549-2_12
    36. R. Bian et al., “Implicit Multidimensional Projection of Local Subspaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030368.
    37. K. Klein et al., “Visual analytics of sensor movement data for cheetah behaviour analysis,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00742-6.
    38. H. Men, H. Lin, M. Jenadeleh, and D. Saupe, “Subjective Image Quality Assessment with Boosted Triplet Comparisons,” IEEE Access, vol. 9, pp. 138939–138975, 2021, [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9559922
    39. K. Gadhave et al., “Predicting intent behind selections in scatterplot visualizations,” Information Visualization, vol. 20, no. 4, Art. no. 4, 2021, doi: 10.1177/14738716211038604.
    40. M. Kraus et al., “Immersive Analytics with Abstract 3D Visualizations: A Survey,” Computer Graphics Forum, 2021, doi: 10.1111/cgf.14430.
    41. K. C. Kwan and H. Fu, “Automatic Image Checkpoint Selection for Guider-Follower Pedestrian Navigation,” Computer Graphics Forum, vol. 40, no. 1, Art. no. 1, 2021, doi: 10.1111/cgf.14192.
    42. M. Burch, W. Huang, M. Wakefield, H. C. Purchase, D. Weiskopf, and J. Hua, “The State of the Art in Empirical User Evaluation of Graph Visualizations,” IEEE Access, vol. 9, pp. 4173–4198, 2021, [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9309216
    43. N. Grossmann, J. Bernard, M. Sedlmair, and M. Waldner, “Does the Layout Really Matter? A Study on Visual Model Accuracy Estimation,” in IEEE Visualization Conference (VIS, Short Paper), 2021, pp. 61–65. [Online]. Available: https://arxiv.org/abs/2110.07188
    44. K. Schatz et al., “2019 IEEE Scientific Visualization Contest Winner: Visual Analysis of Structure Formation in Cosmic Evolution,” IEEE Computer Graphics and Applications, vol. 41, no. 6, Art. no. 6, 2021, doi: 10.1109/MCG.2020.3004613.
    45. H. Lin, G. Chen, and F. W. Siebert, “Positional Encoding: Improving Class-Imbalanced Motorcycle Helmet use Classification,” in 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 1194–1198. [Online]. Available: https://ieeexplore.ieee.org/document/9506178
    46. K. Vock, S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “IDIAR: Augmented Reality Dashboards to Supervise Mobile Intervention Studies,” in Mensch und Computer 2021 (MuC ’21), New York, NY: ACM, 2021. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-22ydtfzvxx3l1
    47. C. Bu et al., “SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, [Online]. Available: https://ieeexplore.ieee.org/document/9222035
  5. 2020

    1. C. Beck, “DiaSense at SemEval-2020 Task 1: Modeling Sense Change via Pre-trained BERT Embeddings,” in Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona (online): International Committee for Computational Linguistics, Dec. 2020, pp. 50–58. [Online]. Available: https://www.aclweb.org/anthology/2020.semeval-1.4
    2. C. Beck, H. Booth, M. El-Assady, and M. Butt, “Representation Problems in Linguistic Annotations: Ambiguity, Variation, Uncertainty, Error and Bias,” in Proceedings of the 14th Linguistic Annotation Workshop, Barcelona, Spain: Association for Computational Linguistics, Dec. 2020, pp. 60–73. [Online]. Available: https://www.aclweb.org/anthology/2020.law-1.6
    3. M. Blumenschein, “Pattern-Driven Design of Visualizations for High-Dimensional Data,” Doctoral dissertation, University of Konstanz, Konstanz, 2020. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-18wp9dhmhapww8
    4. V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, pp. 2848–2862, Sep. 2020, [Online]. Available: https://ieeexplore.ieee.org/document/8637795
    5. M. Dias, D. Orellana, S. Vidal, L. Merino, and A. Bergel, “Evaluating a Visual Approach for Understanding JavaScript Source Code,” in Proceedings of the 28th International Conference on Program Comprehension, ACM, Jul. 2020, pp. 128–138. [Online]. Available: http://bergel.eu/MyPapers/Dias20-Hunter.pdf
    6. A. Kumar, D. Mohanty, K. Kurzhals, F. Beck, D. Weiskopf, and K. Mueller, “Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data,” in ACM Symposium on Eye Tracking Research and Applications, New York, NY, USA: Association for Computing Machinery, 2020. doi: 10.1145/3379157.3391988.
    7. K. Kurzhals, M. Burch, and D. Weiskopf, “What We See and What We Get from Visualization: Eye Tracking Beyond Gaze Distributions and Scanpaths,” CoRR, 2020, [Online]. Available: https://arxiv.org/abs/2009.14515
    8. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12. doi: 10.1145/3313831.3376266.
    9. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces (AVI), New York, NY, USA: Association for Computing Machinery, 2020, pp. 9:1–9:9. doi: 10.1145/3399715.3399814.
    10. M. Kraus et al., “A Comparative Study of Orientation Support Tools in Virtual Reality Environments with Virtual Teleportation,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020, pp. 227–238. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9284697
    11. L. Zhou, M. Rivinius, C. R. Johnson, and D. Weiskopf, “Photographic High-Dynamic-Range Scalar Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 6, Art. no. 6, 2020, doi: 10.1109/TVCG.2020.2970522.
    12. H. Lin et al., “SUR-FeatNet: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Feature Learning,” Quality and User Experience, vol. 5, no. 1, Art. no. 1, 2020, doi: 10.1007/s41233-020-00034-1.
    13. H. Bast, P. Brosi, and S. Storandt, “Metro Maps on Octilinear Grid Graphs,” Computer Graphics Forum, pp. 357–367, 2020, doi: 10.1111/cgf.13986.
    14. P. Angelini, S. Chaplick, S. Cornelsen, and G. Da Lozzo, “Planar L-Drawings of Bimodal Graphs,” in Graph Drawing and Network Visualization, D. Auber and P. Valtr, Eds., Cham: Springer International Publishing, 2020, pp. 205–219. doi: 10.1007/978-3-030-68766-3_17.
    15. M. Borowski, J. Zagermann, C. N. Klokmose, H. Reiterer, and R. Rädle, “Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments,” in Proceedings of the ACM Technical Symposium on Computer Science Education (SIGCSE), 2020, pp. 468–474. doi: 10.1145/3328778.3366887.
    16. F. Draxler, A. Labrie, A. Schmidt, and L. L. Chuang, “Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, ACM, 2020, pp. 410:1–410:12. doi: 10.1145/3313831.3376537.
    17. T. Kosch, A. Schmidt, S. Thanheiser, and L. L. Chuang, “One Does Not Simply RSVP: Mental Workload to Select Speed Reading Parameters Using Electroencephalography,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, ACM, 2020, pp. 637:1–637:13. doi: 10.1145/3313831.3376766.
    18. M. Blumenschein, L. J. Debbeler, N. C. Lages, B. Renner, D. A. Keim, and M. El-Assady, “v-plots: Designing Hybrid Charts for the Comparative Analysis of Data Distributions,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14002.
    19. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Subjective annotation for a frame interpolation benchmark using artefact amplification,” Quality and User Experience, vol. 5, no. 1, Art. no. 1, 2020, [Online]. Available: https://link.springer.com/article/10.1007%2Fs41233-020-00037-y
    20. N. Chotisarn et al., “A Systematic Literature Review of Modern Software Visualization,” Journal of Visualization, vol. 23, no. 4, Art. no. 4, 2020, [Online]. Available: https://link.springer.com/article/10.1007%2Fs12650-020-00647-w
    21. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, in Proceedings of the CHI Conference on Human Factors in Computing Systems. 2020, pp. 546:1-546:14. doi: 10.1145/3313831.3376675.
    22. F. Frieß, C. Müller, and T. Ertl, “Real-Time High-Resolution Visualisation,” in Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV), J. Krüger, M. Niessner, and J. Stückler, Eds., in Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV). The Eurographics Association, 2020, pp. 127–135. doi: 10.2312/vmv.20201195.
    23. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). 2020, pp. 1–6. [Online]. Available: https://ieeexplore.ieee.org/document/9123096/authors#authors
    24. T. Guha et al., “ATQAM/MAST’20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends,” in Proceedings of the 28th ACM International Conference on Multimedia, in Proceedings of the 28th ACM International Conference on Multimedia. New York, NY, USA: Association for Computing Machinery, 2020, pp. 4758–4760. doi: 10.1145/3394171.3421895.
    25. H. Lin, J. D. Deng, D. Albers, and F. W. Siebert, “Helmet Use Detection of Tracked Motorcycles Using CNN-Based Multi-Task Learning,” IEEE Access, vol. 8, pp. 162073–162084, 2020, [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9184871
    26. O. Wiedemann, V. Hosu, H. Lin, and D. Saupe, “Foveated Video Coding for Real-Time Streaming Applications,” in 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), in 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX). 2020, pp. 1–6. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9123080
    27. U. Ju, L. L. Chuang, and C. Wallraven, “Acoustic Cues Increase Situational Awareness in Accident Situations: A VR Car-Driving Study,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–11, 2020, [Online]. Available: https://ieeexplore.ieee.org/document/9261134
    28. F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2020, doi: 10.1109/TVCG.2020.3030445.
    29. J. Zagermann, U. Pfeil, P. von Bauer, D. I. Fink, and H. Reiterer, “‘It’s in my other hand!’: Studying the Interplay of Interaction Techniques and Multi-Tablet Activities,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, in Proceedings of the CHI Conference on Human Factors in Computing Systems. 2020, pp. 413:1-413:13. doi: 10.1145/3313831.3376540.
    30. N. Patkar, L. Merino, and O. Nierstrasz, “Towards Requirements Engineering with Immersive Augmented Reality,” in Conference Companion of the 4th International Conference on Art, Science, and Engineering of Programming, in Conference Companion of the 4th International Conference on Art, Science, and Engineering of Programming. New York, NY, USA: ACM, 2020, pp. 55–60. doi: 10.1145/3397537.3398472.
    31. A. Kumar, P. Howlader, R. Garcia, D. Weiskopf, and K. Mueller, “Challenges in Interpretability of Neural Networks for Eye Movement Data,” in ACM Symposium on Eye Tracking Research and Applications, in ACM Symposium on Eye Tracking Research and Applications. New York, NY, USA: Association for Computing Machinery, 2020. doi: 10.1145/3379156.3391361.
    32. P. Balestrucci et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in 2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV), in 2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV). IEEE, 2020, pp. 11–18. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9307759
    33. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “SepEx: Visual Analysis of Class Separation Measures,” in Proceedings of the International Workshop on Visual Analytics (EuroVA), C. Turkay and K. Vrotsou, Eds., in Proceedings of the International Workshop on Visual Analytics (EuroVA). The Eurographics Association, 2020, pp. 1–5. doi: 10.2312/eurova.20201079.
    34. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications - Short Papers (ETRA-SP), in Proceedings of the Symposium on Eye Tracking Research & Applications - Short Papers (ETRA-SP). ACM, 2020, pp. 49:1-49:5. doi: 10.1145/3379156.3391835.
    35. K. Kurzhals et al., “Visual Analytics and Annotation of Pervasive Eye Tracking Video,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). ACM, 2020, pp. 16:1-16:9. doi: 10.1145/3379155.3391326.
    36. D. Okanović et al., “Can a Chatbot Support Software Engineers with Load Testing? Approach and Experiences,” in Proceedings of the ACM/SPEC International Conference on Performance Engineering (ICPE), in Proceedings of the ACM/SPEC International Conference on Performance Engineering (ICPE). 2020, pp. 120–129. doi: 10.1145/3358960.3375792.
    37. M. Jenadeleh, M. Pedersen, and D. Saupe, “Blind Quality Assessment of Iris Images Acquired in Visible Light for Biometric Recognition,” Sensors, vol. 20, no. 5, Art. no. 5, 2020, [Online]. Available: https://www.mdpi.com/1424-8220/20/5/1308
    38. B. Roziere et al., “Evolutionary Super-Resolution,” in Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, in Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion. New York, NY, USA: Association for Computing Machinery, 2020, pp. 151–152. doi: 10.1145/3377929.3389959.
    39. V. Hosu et al., “From Technical to Aesthetics Quality Assessment and Beyond: Challenges and Potential,” in Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, in Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends. New York, NY, USA: Association for Computing Machinery, 2020, pp. 19–20. doi: 10.1145/3423268.3423589.
    40. O. Wiedemann and D. Saupe, “Gaze Data for Quality Assessment of Foveated Video,” in ACM Symposium on Eye Tracking Research and Applications, in ACM Symposium on Eye Tracking Research and Applications. New York, NY, USA: Association for Computing Machinery, 2020. doi: 10.1145/3379157.3391656.
    41. T. Stankov and S. Storandt, “Maximum Gap Minimization in Polylines,” in Web and Wireless Geographical Information Systems - 18th International Symposium, W2GIS 2020, Wuhan, China, November 13-14, 2020, Proceedings, in Web and Wireless Geographical Information Systems - 18th International Symposium, W2GIS 2020, Wuhan, China, November 13-14, 2020, Proceedings. 2020, pp. 181–196. doi: 10.1007/978-3-030-60952-8_19.
    42. H. Lin, M. Jenadeleh, G. Chen, U.-D. Reips, R. Hamzaoui, and D. Saupe, “Subjective Assessment of Global Picture-Wise Just Noticeable Difference,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). 2020, pp. 1–6. [Online]. Available: https://ieeexplore.ieee.org/document/9106058
    43. N. Rodrigues, C. Schulz, A. Lhuillier, and D. Weiskopf, “Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces,” in Proceedings of the Graphics Interface Conference (GI), in Proceedings of the Graphics Interface Conference (GI). Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2020, pp. 0:1-0:11. doi: 10.20380/GI2020.38.
    44. L. Merino, M. Schwarzl, M. Kraus, M. Sedlmair, D. Schmalstieg, and D. Weiskopf, “Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009 – 2019),” in IEEE International Symposium on Mixed and Augmented Reality (ISMAR), in IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 2020. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9284762
    45. D. Weiskopf, “Vis4Vis: Visualization for (Empirical) Visualization Research,” in Foundations of Data Visualization, M. Chen, H. Hauser, P. Rheingans, and G. Scheuermann, Eds., in Foundations of Data Visualization. Springer International Publishing, 2020, pp. 209–224. doi: 10.1007/978-3-030-34444-3_10.
    46. L. Merino, M. Lungu, and C. Seidl, “Unleashing the Potentials of Immersive Augmented Reality for Software Engineering,” in 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), in 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER). 2020, pp. 517–521. [Online]. Available: https://arxiv.org/abs/2001.01223
    47. N. Brich et al., “Visual Analysis of Multivariate Intensive Care Surveillance Data,” in Eurographics Workshop on Visual Computing for Biology and Medicine, B. Kozlíková, M. Krone, N. Smit, K. Nieselt, and R. G. Raidou, Eds., in Eurographics Workshop on Visual Computing for Biology and Medicine. The Eurographics Association, 2020.
    48. M. Sondag, W. Meulemans, C. Schulz, K. Verbeek, D. Weiskopf, and B. Speckmann, “Uncertainty Treemaps,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis). 2020, pp. 111–120. [Online]. Available: https://ieeexplore.ieee.org/document/9086235
    49. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). ACM, 2020, pp. 50:1-50:5. doi: 10.1145/3379156.3391829.
    50. V. Hosu, H. Lin, T. Szirányi, and D. Saupe, “KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment,” IEEE Transactions on Image Processing, vol. 29, pp. 4041–4056, 2020, [Online]. Available: https://ieeexplore.ieee.org/document/8968750
    51. C. Schätzle and M. Butt, “Visual Analytics for Historical Linguistics: Opportunities and Challenges,” Journal of Data Mining and Digital Humanities, 2020, [Online]. Available: https://jdmdh.episciences.org/6968
    52. X. Zhao, H. Lin, P. Guo, D. Saupe, and H. Liu, “Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images,” in 2020 IEEE International Conference on Image Processing (ICIP), in 2020 IEEE International Conference on Image Processing (ICIP). 2020, pp. 156–160. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/9191203
    53. M. Beck and S. Storandt, “Puzzling Grid Embeddings,” in Proceedings of the Symposium on Algorithm Engineering and Experiments, ALENEX 2020, Salt Lake City, UT, USA, January 5-6, 2020, in Proceedings of the Symposium on Algorithm Engineering and Experiments, ALENEX 2020, Salt Lake City, UT, USA, January 5-6, 2020. 2020, pp. 94–105. doi: 10.1137/1.9781611976007.8.
    54. F. Bishop, J. Zagermann, U. Pfeil, G. Sanderson, H. Reiterer, and U. Hinrichs, “Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2020, [Online]. Available: https://ieeexplore.ieee.org/document/8807271
    55. D. R. Wahl et al., “Why We Eat What We Eat: Assessing Dispositional and In-the-Moment Eating Motives by Using Ecological Momentary Assessment,” JMIR mHealth and uHealth., vol. 8, no. 1, Art. no. 1, 2020, [Online]. Available: https://mhealth.jmir.org/2020/1/e13191/
    56. R. Garcia and D. Weiskopf, “Inner-Process Visualization of Hidden States in Recurrent Neural Networks,” in Proceedings of the 13th International Symposium on Visual Information Communication and Interaction, in Proceedings of the 13th International Symposium on Visual Information Communication and Interaction. New York, NY, USA: Association for Computing Machinery, 2020. doi: 10.1145/3430036.3430047.
    57. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), in Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA). 2020, pp. LBW087:1-LBW087:7. doi: 10.1145/3334480.3383017.
    58. A. Streichert, K. Angerbauer, M. Schwarzl, and M. Sedlmair, “Comparing Input Modalities for Shape Drawing Tasks,” in Proceedings of the Symposium on Eye Tracking Research & Applications - Short Papers (ETRA-SP), in Proceedings of the Symposium on Eye Tracking Research & Applications - Short Papers (ETRA-SP). ACM, 2020, pp. 1–5. doi: 10.1145/3379156.3391830.
    59. D. Schubring, M. Kraus, C. Stolz, N. Weiler, D. A. Keim, and H. Schupp, “Virtual Reality Potentiates Emotion and Task Effects of Alpha/Beta Brain Oscillations,” Brain Sciences, vol. 10, no. 8, Art. no. 8, 2020, [Online]. Available: https://www.mdpi.com/2076-3425/10/8/537
    60. M. Lan Ha, V. Hosu, and V. Blanz, “Color Composition Similarity and Its Application in Fine-grained Similarity,” in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV). Piscataway, NJ: IEEE, 2020, pp. 2548–2557. [Online]. Available: https://ieeexplore.ieee.org/document/9093522
    61. J. Spoerhase, S. Storandt, and J. Zink, “Simplification of Polyline Bundles,” in 17th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2020, June 22-24, 2020, Tórshavn, Faroe Islands, in 17th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2020, June 22-24, 2020, Tórshavn, Faroe Islands. 2020, pp. 35:1-35:20. doi: 10.4230/LIPIcs.SWAT.2020.35.
    62. S. Cornelsen et al., “Drawing Shortest Paths in Geodetic Graphs,” in Graph Drawing and Network Visualization, D. Auber and P. Valtr, Eds., in Graph Drawing and Network Visualization. Cham: Springer International Publishing, 2020, pp. 333–340. doi: 10.1007/978-3-030-68766-3_26.
    63. M. Blumenschein, X. Zhang, D. Pomerenke, D. A. Keim, and J. Fuchs, “Evaluating Reordering Strategies for Cluster Identification in Parallel Coordinates,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, [Online]. Available: https://diglib.eg.org:443/handle/10.1111/cgf14000
  6. 2019

    1. T. Munz, L. L. Chuang, S. Pannasch, and D. Weiskopf, “VisME: Visual microsaccades explorer,” Journal of Eye Movement Research, vol. 12, no. 6, Art. no. 6, Dec. 2019, [Online]. Available: https://bop.unibe.ch/JEMR/article/view/JEMR.12.6.5
    2. P. Balestrucci and M. O. Ernst, “Visuo-motor adaptation during interaction with a user-adaptive system,” Journal of Vision, vol. 19, p. 187a, Sep. 2019, [Online]. Available: https://jov.arvojournals.org/article.aspx?articleid=2750667
    3. V. Bruder, C. Schulz, R. Bauer, S. Frey, D. Weiskopf, and T. Ertl, “Voronoi-Based Foveated Volume Rendering,” in Proceedings of the Eurographics Conference on Visualization - Short Papers (EuroVis), J. Johansson, F. Sadlo, and G. E. Marai, Eds., in Proceedings of the Eurographics Conference on Visualization - Short Papers (EuroVis). Eurographics Association, 2019, pp. 67–71. doi: 10.2312/evs.20191172.
    4. N. Silva et al., “Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges,” in Proceedings of the Symposium on Eye Tracking Research and Applications, in Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, 2019, pp. 11:1-11:9. doi: 10.1145/3314111.3319919.
    5. V. Hosu, B. Goldlücke, and D. Saupe, “Effective Aesthetics Prediction with Multi-level Spatially Pooled Features,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9367–9375, 2019, [Online]. Available: https://ieeexplore.ieee.org/document/8953497
    6. T. M. Benz, B. Riedl, and L. L. Chuang, “Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), C. P. Janssen, S. F. Donker, L. L. Chuang, and W. Ju, Eds., in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI). ACM, 2019, pp. 379–387. doi: 10.1145/3342197.3344515.
    7. F. L. Dennig, T. Polk, Z. Lin, T. Schreck, H. Pfister, and M. Behrisch, “FDive: Learning Relevance Models using Pattern-based Similarity Measures,” Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2019, [Online]. Available: https://ieeexplore.ieee.org/document/8986940
    8. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-Based Aspect Ratio Selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, [Online]. Available: https://ieeexplore.ieee.org/document/8440843
    9. V. Bruder et al., “Volume-Based Large Dynamic Graph Analysis Supported by Evolution Provenance,” Multimedia Tools and Applications, vol. 78, no. 23, Art. no. 23, 2019, doi: 10.1007/s11042-019-07878-6.
    10. C. Schulz et al., “A Framework for Pervasive Visual Deficiency Simulation,” in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 2019, pp. 1852–1857. [Online]. Available: https://ieeexplore.ieee.org/document/9044164
    11. B. Sommer et al., “Tiled Stereoscopic 3D Display Wall - Concept, Applications and Evaluation,” Electronic Imaging, vol. 2019, no. 3, Art. no. 3, 2019, [Online]. Available: https://www.ingentaconnect.com/content/ist/ei/2019/00002019/00000003/art00014
    12. C. Schätzle, F. L. Dennig, M. Blumenschein, D. A. Keim, and M. Butt, “Visualizing Linguistic Change as Dimension Interactions,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, in Proceedings of the International Workshop on Computational Approaches to Historical Language Change. 2019, pp. 272–278. [Online]. Available: https://www.aclweb.org/anthology/W19-4734.pdf
    13. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS), in Proceedings of the IEEE Visualization Conference (VIS). IEEE, 2019, pp. 141–145. [Online]. Available: https://ieeexplore.ieee.org/document/8933620
    14. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, [Online]. Available: https://ieeexplore.ieee.org/document/8445644
    15. V. Bruder, K. Kurzhals, S. Frey, D. Weiskopf, and T. Ertl, “Space-Time Volume Visualization of Gaze and Stimulus,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), K. Krejtz and B. Sharif, Eds., in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). ACM, 2019, pp. 12:1-12:9. doi: 10.1145/3314111.3319812.
    16. S. Jaeger et al., “Challenges for Brain Data Analysis in VR Environments,” in 2019 IEEE Pacific Visualization Symposium (PacificVis), in 2019 IEEE Pacific Visualization Symposium (PacificVis). 2019, pp. 42–46. [Online]. Available: https://ieeexplore.ieee.org/document/8781584
    17. K. Klein et al., “Visual Analytics for Cheetah Behaviour Analysis,” in VINCI, in VINCI. ACM, 2019, pp. 16:1-16:8. [Online]. Available: http://dblp.uni-trier.de/db/conf/vinci/vinci2019.html#0001JMWHBS19
    18. C. Schätzle and H. Booth, “DiaHClust: an Iterative Hierarchical Clustering Approach for Identifying Stages in Language Change,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, in Proceedings of the International Workshop on Computational Approaches to Historical Language Change. Association for Computational Linguistics, 2019, pp. 126–135. [Online]. Available: https://www.aclweb.org/anthology/W19-4716
    19. K. Klein, M. Aichem, B. Sommer, S. Erk, Y. Zhang, and F. Schreiber, “TEAMwISE: Synchronised Immersive Environments for Exploration and Analysis of Movement Data,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI). ACM, 2019, pp. 9:1-9:5. doi: 10.1145/3356422.3356450.
    20. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2019, [Online]. Available: https://ieeexplore.ieee.org/document/8807247
    21. T. Castermans, M. van Garderen, W. Meulemans, M. Nöllenburg, and X. Yuan, “Short Plane Supports for Spatial Hypergraphs,” in Graph Drawing and Network Visualization. GD 2018. Lecture Notes in Computer Science, vol. 11282, T. Biedl and A. Kerren, Eds., in Graph Drawing and Network Visualization. GD 2018. Lecture Notes in Computer Science, vol. 11282. Springer International Publishing, 2019, pp. 53–66. doi: 10.1007/978-3-030-04414-5_4.
    22. K. Klein et al., “Fly with the flock: immersive solutions for animal movement visualization and analytics,” Journal of the Royal Society Interface, vol. 16, no. 153, Art. no. 153, 2019, doi: 10.1098/rsif.2018.0794.
    23. H. Booth and C. Schätzle, “The Syntactic Encoding of Information Structure in the History of Icelandic,” in Proceedings of the LFG’19 Conference, M. Butt, T. H. King, and I. Toivonen, Eds., in Proceedings of the LFG’19 Conference. CSLI Publications, 2019, pp. 69–89. [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2019/lfg2019-booth-schaetzle.pdf
    24. D. Pomerenke, F. L. Dennig, D. A. Keim, J. Fuchs, and M. Blumenschein, “Slope-Dependent Rendering of Parallel Coordinates to Reduce Density Distortion and Ghost Clusters,” in Proceedings of the IEEE Visualization Conference (VIS), in Proceedings of the IEEE Visualization Conference (VIS). IEEE, 2019, pp. 86–90. [Online]. Available: https://ieeexplore.ieee.org/document/8933706
    25. C. Müller, M. Braun, and T. Ertl, “Optimised Molecular Graphics on the HoloLens,” in IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019, Osaka, Japan, March 23-27, 2019, in IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019, Osaka, Japan, March 23-27, 2019. IEEE, 2019, pp. 97–102. doi: 10.1109/VR.2019.8798111.
    26. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D Scalar Fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, Art. no. 6, 2019, [Online]. Available: https://ieeexplore.ieee.org/document/8667696
    27. L. Zhou, R. Netzel, D. Weiskopf, and C. R. Johnson, “Spectral Visualization Sharpening,” in Proceedings of the ACM Symposium on Applied Perception (SAP), S. Neyret, E. Kokkinara, M. González-Franco, L. Hoyet, D. W. Cunningham, and J. Swidrak, Eds., in Proceedings of the ACM Symposium on Applied Perception (SAP). ACM, 2019, pp. 18:1-18:9. doi: 10.1145/3343036.3343133.
    28. H. Men, H. Lin, V. Hosu, D. Maurer, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Motion Compensated Frame Interpolation,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019, pp. 1–6. [Online]. Available: https://ieeexplore.ieee.org/document/8743221
    29. R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of Simultaneous Orientation Contrast in Superimposed Textures,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), A. Kerren, C. Hurter, and J. Braz, Eds., in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP). SciTePress, 2019, pp. 48–57. [Online]. Available: http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0007356800480057
    30. K. Schatz et al., “Visual Analysis of Structure Formation in Cosmic Evolution,” in Proceedings of the IEEE Scientific Visualization Conference (SciVis), in Proceedings of the IEEE Scientific Visualization Conference (SciVis). 2019, pp. 33–41. doi: 10.1109/scivis47405.2019.8968855.
    31. J. Müller, J. Zagermann, J. Wieland, U. Pfeil, and H. Reiterer, “A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices,” in Mensch und Computer 2019 – Tagungsband (MuC), F. Alt, A. Bulling, and T. Döring, Eds., in Mensch und Computer 2019 – Tagungsband (MuC). GI, ACM, 2019, pp. 399–410. doi: 10.1145/3340764.3340773.
    32. C. Fan et al., “SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019, pp. 1–6. [Online]. Available: https://ieeexplore.ieee.org/document/8743204
    33. H. Lin, V. Hosu, and D. Saupe, “KADID-10k: A Large-scale Artificially Distorted IQA Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019, pp. 1–3. [Online]. Available: https://ieeexplore.ieee.org/document/8743252
    34. M. Miller, X. Zhang, J. Fuchs, and M. Blumenschein, “Evaluating Ordering Strategies of Star Glyph Axes,” in Proceedings of the IEEE Visualization Conference (VIS), in Proceedings of the IEEE Visualization Conference (VIS). IEEE, 2019, pp. 91–95. [Online]. Available: https://ieeexplore.ieee.org/document/8933656
  7. 2018

    1. C. Schätzle, “Dative Subjects: Historical Change Visualized,” Konstanz, 2018. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1d917i4avuz1a2
    2. J. Görtler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble Treemaps for Uncertainty Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2743959.
    3. N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744018.
    4. F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV). IEEE, 2018, pp. 87–91. [Online]. Available: https://ieeexplore.ieee.org/document/8739215
    5. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2018. [Online]. Available: http://bmvc2018.org/contents/supplementary/pdf/0277_supp.pdf
    6. N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale Scanpath Visualization and Filtering,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), L. L. Chuang, M. Burch, and K. Kurzhals, Eds., in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS). ACM, 2018, pp. 2:1-2:5. doi: 10.1145/3205929.3205931.
    7. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13438.
    8. C. Glatz and L. L. Chuang, “The Time Course of Auditory Looming Cues in Redirecting Visuo-Spatial Attention,” Nature - Scientific Reports, vol. 9, pp. 743:1-743:10, 2018, doi: 10.1038/s41598-018-36033-8.
    9. M. Ghaffar et al., “3D Modelling and Visualisation of Heterogeneous Cell Membranes in Blender,” in Proceedings of the 11th International Symposium on Visual Information Communication and Interaction, in Proceedings of the 11th International Symposium on Visual Information Communication and Interaction. New York, NY, USA: Association for Computing Machinery, 2018, pp. 64–71. doi: 10.1145/3231622.3231639.
    10. Y. Zhu et al., “Genome-scale Metabolic Modeling of Responses to Polymyxins in Pseudomonas Aeruginosa,” GigaScience, vol. 7, no. 4, Art. no. 4, 2018, doi: 10.1093/gigascience/giy021.
    11. V. Yoghourdjian, T. Dwyer, K. Klein, K. Marriott, and M. Wybrow, “Graph Thumbnails: Identifying and Comparing Multiple Graphs at a Glance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 12, Art. no. 12, 2018, [Online]. Available: https://ieeexplore.ieee.org/document/8249874
    12. V. Hosu, H. Lin, and D. Saupe, “Expertise Screening in Crowdsourcing Image Quality,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2018, pp. 276–281. [Online]. Available: https://ieeexplore.ieee.org/document/8463427
    13. M. Behrisch et al., “Quality Metrics for Information Visualization,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13446.
    14. N. Marniok and B. Goldluecke, “Real-time Variational Range Image Fusion and Visualization for Large-scale Scenes using GPU Hash Tables,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). 2018, pp. 912–920. [Online]. Available: https://ieeexplore.ieee.org/document/8354209
    15. C. Glatz, S. S. Krupenia, H. H. Bülthoff, and L. L. Chuang, “Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 472:1-472:13. doi: 10.1145/3173574.3174046.
    16. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), J. Johansson, F. Sadlo, and T. Schreck, Eds., in Proceedings of the Eurographics Conference on Visualization (EuroVis). Eurographics Association, 2018, pp. 119–123. doi: 10.5555/3290776.3290801.
    17. C. Schulz, K. Schatz, M. Krone, M. Braun, T. Ertl, and D. Weiskopf, “Uncertainty Visualization for Secondary Structures of Proteins,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis). IEEE, 2018, pp. 96–105. [Online]. Available: https://ieeexplore.ieee.org/document/8365980
    18. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-Based Large Dynamic Graph Analytics,” in Proceedings of the International Conference Information Visualisation (IV), E. Banissi, R. Francese, M. W. McK. Bannatyne, T. G. Wyeld, M. Sarfraz, J. M. Pires, A. Ursyn, F. Bouali, N. Datia, G. Venturini, G. Polese, V. Deufemia, T. D. Mascio, M. Temperini, F. Sciarrone, D. Malandrino, R. Zaccagnino, P. Díaz, F. Papadopoulo, A. F. Anta, A. Cuzzocrea, M. Risi, U. Erra, and V. Rossano, Eds., in Proceedings of the International Conference Information Visualisation (IV). IEEE, 2018, pp. 210–219. [Online]. Available: https://ieeexplore.ieee.org/document/8564163
    19. P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 345:1-345:9. doi: 10.1145/3173574.3173919.
    20. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime Quality Assessment of Iris Biometrics Under Visible Light,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPRW), CVPR Workshops, in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPRW), CVPR Workshops. IEEE, 2018, pp. 443–452. [Online]. Available: https://ieeexplore.ieee.org/document/8575548
    21. D. Varga, D. Saupe, and T. Szirányi, “DeepRN: A Content Preserving Deep Architecture for Blind Image Quality Assessment,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2018, pp. 1–6. [Online]. Available: https://ieeexplore.ieee.org/document/8486528
    22. H. Bast, P. Brosi, and S. Storandt, “Efficient Generation of Geographically Accurate Transit Maps,” in Proceedings of the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL), F. B. Kashani, E. G. Hoel, R. H. Güting, R. Tamassia, and L. Xiong, Eds., in Proceedings of the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL). ACM, 2018, pp. 13–22. doi: 10.1145/3274895.3274955.
    23. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory Task Irrelevance: A Basis for Inattentional Deafness,” Human Factors, vol. 60, no. 3, Art. no. 3, 2018, doi: 10.1177/0018720818760919.
    24. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), R. Chang, H. Qu, and T. Schreck, Eds., in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST). IEEE, 2018, pp. 36–47. [Online]. Available: https://ieeexplore.ieee.org/document/8802486
    25. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural Networks for the Classification of Building Use from Street-view Imagery,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, pp. 177–184, 2018, [Online]. Available: https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/IV-2/177/2018/isprs-annals-IV-2-177-2018.pdf
    26. M. Klapperstueck et al., “Contextuwall: Multi-site Collaboration Using Display Walls,” Journal of Visual Languages & Computing, vol. 46, pp. 35–42, 2018, doi: 10.1016/j.jvlc.2017.10.002.
    27. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an Interpretable Latent Space: An Intuitive Comparison of Autoencoders with Variational Autoencoders,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI). IEEE VIS, 2018. [Online]. Available: https://thilospinner.com/towards-an-interpretable-latent-space/
    28. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT). IEEE, 2018, pp. 87–95. [Online]. Available: https://ieeexplore.ieee.org/document/8530134/
    29. C. Müller et al., “Interactive Molecular Graphics for Augmented Reality Using HoloLens,” Journal of Integrative Bioinformatics, vol. 15, no. 2, Art. no. 2, 2018.
    30. D. Maurer, N. Marniok, B. Goldluecke, and A. Bruhn, “Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimation,” in Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11212, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds., in Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11212. Springer International Publishing, 2018, pp. 575–592. doi: 10.1007/978-3-030-01237-3_35.
    31. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “Employing Tangible Visualisations in Augmented Reality with Mobile Devices,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI). 2018, pp. 1–4. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1iooenfo4fofm8
    32. V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an Uncanny Valley of Virtual Animals? A Quantitative and Qualitative Investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018, doi: 10.1016/j.ijhcs.2017.11.003.
    33. L. L. Chuang and U. Pfeil, “Transparency and Openness Promotion Guidelines for HCI,” in Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA). ACM, 2018, pp. SIG04:1-SIG04:4. doi: 10.1145/3170427.3185377.
    34. A. Hautli-Janisz, C. Rohrdantz, C. Schätzle, A. Stoffel, M. Butt, and D. A. Keim, “Visual Analytics in Diachronic Linguistic Investigations,” Linguistic Visualizations, 2018.
    35. K. Hänsel, R. Poguntke, H. Haddadi, A. Alomainy, and A. Schmidt, “What to Put on the User: Sensing Technologies for Studies and Physiology Aware Systems,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 145:1-145:14. doi: 10.1145/3173574.3173719.
    36. D. Sacha et al., “SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, [Online]. Available: https://ieeexplore.ieee.org/document/8019867
    37. J. Görtler, R. Kehlbeck, and O. Deussen, “A Visual Exploration of Gaussian Processes,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI). 2018. [Online]. Available: https://distill.pub/2019/visual-exploration-gaussian-processes/
    38. H. Ben Lahmar, M. Herschel, M. Blumenschein, and D. A. Keim, “Provenance-based Visual Data Exploration with EVLIN,” in Proceedings of the Conference on Extending Database Technology (EDBT), in Proceedings of the Conference on Extending Database Technology (EDBT). 2018, pp. 686–689. doi: 10.5441/002/edbt.2018.85.
    39. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, [Online]. Available: https://ieeexplore.ieee.org/document/8022891
    40. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, Art. no. 5, 2018, [Online]. Available: https://www.computer.org/csdl/journal/tg/2018/05/07920403/13rRUEgs2M7
    41. A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of Vection Latencies in the Full-Body Illusion,” PLoS ONE, vol. 13, no. 12, Art. no. 12, 2018, doi: 10.1371/journal.pone.0209189.
    42. S. S. Borojeni, S. C. J. Boll, W. Heuten, H. H. Bülthoff, and L. L. Chuang, “Feel the Movement: Real Motion Influences Responses to Take-Over Requests in Highly Automated Vehicles,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 246:1-246:13. doi: 10.1145/3173574.3173820.
    43. T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 419:1-419:12. doi: 10.1145/3173574.3173993.
    44. J. Zagermann, U. Pfeil, and H. Reiterer, “Studying Eye Movements as a Basis for Measuring Cognitive Load,” Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), pp. LBW095:1-LBW095:6, 2018, doi: 10.1145/3170427.3188628.
    45. T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly,” Proceedings of the ACM on Human-Computer Interaction (ACMHCI), vol. 2, pp. 11:1-11:20, 2018, doi: 10.1145/3229093.
    46. L. J. Debbeler, M. Gamp, M. Blumenschein, D. A. Keim, and B. Renner, “Polarized But Illusory Beliefs About Tap and Bottled Water: A Product- and Consumer-Oriented Survey and Blind Tasting Experiment,” Science of the Total Environment, vol. 643, pp. 1400–1410, 2018, doi: 10.1016/j.scitotenv.2018.06.190.
    47. M. de Ridder, K. Klein, and J. Kim, “A Review and Outlook on Visual Analytics for Uncertainties in Functional Magnetic Resonance Imaging,” Brain Informatics, vol. 5, no. 2, Art. no. 2, 2018, doi: 10.1186/s40708-018-0083-0.
    48. K. Marriott et al., Immersive Analytics, vol. 11190. in Lecture Notes in Computer Science (LNCS), vol. 11190. Springer International Publishing, 2018. doi: 10.1007/978-3-030-01388-2.
    49. S. Oppold and M. Herschel, “Provenance for Entity Resolution,” in Provenance and Annotation of Data and Processes. IPAW 2018. Lecture Notes in Computer Science, vol. 11017, K. Belhajjame, A. Gehani, and P. Alper, Eds., in Provenance and Annotation of Data and Processes. IPAW 2018. Lecture Notes in Computer Science, vol. 11017. Springer International Publishing, 2018, pp. 226–230. doi: 10.1007/978-3-319-98379-0_25.
    50. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2018, pp. 106:1-106:13. [Online]. Available: http://bmvc2018.org/contents/papers/0377.pdf
    51. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision, vol. 126, no. 12, Art. no. 12, 2018, doi: 10.1007/s11263-018-1079-1.
    52. J. Karolus, H. Schuff, T. Kosch, P. W. Woźniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Proceedings of the Designing Interactive Systems Conference (DIS), I. Koskinen, Y.-K. Lim, T. C. Pargman, K. K. N. Chow, and W. Odom, Eds., in Proceedings of the Designing Interactive Systems Conference (DIS). ACM, 2018, pp. 651–655. doi: 10.1145/3196709.3196803.
    53. H. Men, H. Lin, and D. Saupe, “Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2018, pp. 1–3. [Online]. Available: https://ieeexplore.ieee.org/document/8463426
  8. 2017

    1. O. Deussen, M. Spicker, and Q. Zheng, “Weighted Linde-Buzo-Gray Stippling,” ACM Transactions on Graphics, vol. 36, no. 6, Art. no. 6, Nov. 2017, doi: 10.1145/3130800.3130819.
    2. C. Schulz, N. Rodrigues, K. Damarla, A. Henicke, and D. Weiskopf, “Visual Exploration of Mainframe Workloads,” in Proceedings of the SIGGRAPH Asia Symposium on Visualization, in Proceedings of the SIGGRAPH Asia Symposium on Visualization. ACM, 2017, pp. 4:1-4:7. doi: 10.1145/3139295.3139312.
    3. D. Sacha et al., “Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017.
    4. D. Sacha et al., “What You See Is What You Can Change: Human-Centered Machine Learning by Interactive Visualization,” Neurocomputing, vol. 268, pp. 164–175, 2017.
    5. P. Gralka, C. Schulz, G. Reina, D. Weiskopf, and T. Ertl, “Visual Exploration of Memory Traces and Call Stacks,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT). IEEE, 2017, pp. 54–63. doi: 10.1109/VISSOFT.2017.15.
    6. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP). 2017, pp. 1–13. doi: 10.5555/3183865.3183883.
    7. D. Maurer, A. Bruhn, and M. Stoll, “Order-adaptive and Illumination-aware Variational Optical Flow Refinement,” in Proceedings of the British Machine Vision Conference (BMVC), in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2017, pp. 150:1-150:13. doi: 10.5244/C.31.150.
    8. M. Stoll, D. Maurer, and A. Bruhn, “Variational Large Displacement Optical Flow Without Feature Matches,” in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, M. Pelillo and E. R. Hancock, Eds., in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, vol. 10746. Springer International Publishing, 2017, pp. 79–92. doi: 10.1007/978-3-319-78199-0_6.
    9. J. Kratt, F. Eisenkeil, M. Spicker, Y. Wang, D. Weiskopf, and O. Deussen, “Structure-aware Stylization of Mountainous Terrains,” in Vision, Modeling & Visualization, M. Hullin, R. Klein, T. Schultz, and A. Yao, Eds., in Vision, Modeling & Visualization. The Eurographics Association, 2017. doi: 10.2312/vmv.20171255.
    10. C. Schulz, A. Nocaj, J. Görtler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598919.
    11. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” in Proceedings of the Eurographics Conference on Visualization (EuroVis) - Poster Track, E. Association, Ed., in Proceedings of the Eurographics Conference on Visualization (EuroVis) - Poster Track. 2017. doi: 10.2312/eurp.20171166.
    12. G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” in Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), E. Association, Ed., in Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization (EGPGV). Eurographics Association, 2017, pp. 11–20. doi: 10.2312/pgv.20171089.
    13. U. Gadiraju et al., “Crowdsourcing Versus the Laboratory: Towards Human-centered Experiments Using the Crowd,” in Information Systems and Applications, incl. Internet/Web, and HCI, D. Archambault, H. Purchase, and T. Hossfeld, Eds., in Information Systems and Applications, incl. Internet/Web, and HCI. Springer International Publishing, 2017, pp. 6–26.
    14. X. Zhang, Y. Sugano, and A. Bulling, “Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery,” in Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), in Proceedings of the ACM Symposium on User Interface Software and Technology (UIST). 2017, pp. 193–203. doi: 10.1145/3126594.3126614.
    15. S. Funke, T. Mendel, A. Miller, S. Storandt, and M. Wiebe, “Map Simplification with Topology Constraints: Exactly and in Practice,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), S. P. Fekete and V. Ramachandran, Eds., in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX). SIAM, 2017, pp. 185–196. doi: 10.1137/1.9781611974768.15.
    16. O. Johannsen et al., “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Workshops, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Workshops. IEEE, 2017, pp. 1795–1812. [Online]. Available: https://ieeexplore.ieee.org/document/8014960
    17. J. Allsop, R. Gray, H. H. Bülthoff, and L. L. Chuang, “Eye Movement Planning on Single-Sensor-Single-Indicator Displays is Vulnerable to User Anxiety and Cognitive Load,” Journal of Eye Movement Research, vol. 10, no. 5, Art. no. 5, 2017, doi: 10.16910/jemr.10.5.8.
    18. N. Rodrigues, M. Burch, L. Di Silvestro, and D. Weiskopf, “A Visual Analytics Approach for Word Relevances in Multiple Texts,” in Proceedings of the International Conference on Information Visualisation (IV), in Proceedings of the International Conference on Information Visualisation (IV). IEEE, 2017, pp. 1–7. [Online]. Available: https://ieeexplore.ieee.org/document/8107940
    19. M. Stoll, D. Maurer, S. Volz, and A. Bruhn, “Illumination-aware Large Displacement Optical Flow,” in Proceedings of International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR). Lecture Notes in Computer Science, M. Pelillo and E. R. Hancock, Eds., in Proceedings of International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR). Lecture Notes in Computer Science, vol. 10746. Springer International Publishing, 2017, pp. 139–154. doi: 10.1007/978-3-319-78199-0_10.
    20. M. Herschel, R. Diestelkämper, and H. Ben Lahmar, “A Survey on Provenance - What for? What form? What from?,” The VLDB Journal, vol. 26, pp. 881–906, 2017, doi: 10.1007/s00778-017-0486-1.
    21. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum, vol. 36, no. 8, Art. no. 8, 2017, doi: 10.1111/cgf.13070.
    22. J. Iseringhausen et al., “4D Imaging through Spray-On Optics,” in ACM Transactions on Graphics, in ACM Transactions on Graphics, vol. 36. 2017, pp. 35:1-35:11. doi: 10.1145/3072959.3073589.
    23. K. Kurzhals, E. Çetinkaya, Y. Hu, W. Wang, and D. Weiskopf, “Close to the Action: Eye-Tracking Evaluation of Speaker-Following Subtitles,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, ACM, Ed., in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, pp. 6559–6568. doi: 10.1145/3025453.3025772.
    24. K. Kurzhals, M. Hlawatsch, C. Seeger, and D. Weiskopf, “Visual Analytics for Mobile Eye Tracking,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598695.
    25. K. Srulijes et al., “Visualization of Eye-Head Coordination While Walking in Healthy Subjects and Patients with Neurodegenerative Diseases,” in Poster (reviewed) presented at the Symposium of the International Society of Posture and Gait Research (ISPGR), in Poster (reviewed) presented at the Symposium of the International Society of Posture and Gait Research (ISPGR). 2017.
    26. A. Nesti, K. de Winkel, and H. H. Bülthoff, “Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation,” PloS ONE, vol. 12, no. 1, Art. no. 1, 2017, doi: 10.1371/journal.pone.0170497.
    27. V. Schwind, K. Wolf, and N. Henze, “FaceMaker - A Procedural Face Generator to Foster Character Design Research,” O. Korn and N. Lee, Eds., Springer International Publishing, 2017, pp. 95–113. doi: 10.1007/978-3-319-53088-8_6.
    28. D. Fritsch and M. Klein, “3D and 4D Modeling for AR and VR App Developments,” in Proceedings of the International Conference on Virtual System & Multimedia (VSMM), in Proceedings of the International Conference on Virtual System & Multimedia (VSMM). 2017, pp. 1–8. [Online]. Available: https://ieeexplore.ieee.org/document/8346270
    29. P. Tutzauer and N. Haala, “Processing of Crawled Urban Imagery for Building Use Classification,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, pp. 143–149, 2017, doi: 10.5194/isprs-archives-XLII-1-W1-143-2017.
    30. A. Barth, B. Harrach, N. Hyvönen, and L. Mustonen, “Detecting Stochastic Inclusions in Electrical Impedance Tomography,” Inverse Problems, vol. 33, no. 11, Art. no. 11, 2017, doi: 10.1088/1361-6420/aa8f5c.
    31. C. Schätzle, “Genitiv als Stilmittel in der Novelle,” Scalable Reading. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), vol. 47, pp. 125–140, 2017, doi: 10.1007/s41244-017-0043-9.
    32. D. Jäckle, M. Hund, M. Behrisch, D. A. Keim, and T. Schreck, “Pattern Trails: Visual Analysis of Pattern Transitions in Subspaces,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), B. Fisher, S. Liu, and T. Schreck, Eds., in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST). IEEE, 2017, pp. 1–12. [Online]. Available: https://ieeexplore.ieee.org/document/8585613
    33. M. Behrisch et al., “Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, [Online]. Available: https://ieeexplore.ieee.org/document/7534849
    34. M. A. Baazizi, H. Ben Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Proceedings of the Conference on Extending Database Technology (EDBT), in Proceedings of the Conference on Extending Database Technology (EDBT). 2017, pp. 222–233. doi: 10.5441/002/edbt.2017.21.
    35. R. Netzel, J. Vuong, U. Engelke, S. I. O’Donoghue, D. Weiskopf, and J. Heinrich, “Comparative Eye-tracking Evaluation of Scatterplots and Parallel Coordinates,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.11.001.
    36. M. Krone et al., “Molecular Surface Maps,” IEEE Transactions on Visualization and Computer Graphics (Proceedings of the Scientific Visualization 2016), vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598824.
    37. R. Netzel, M. Hlawatsch, M. Burch, S. Balakrishnan, H. Schmauder, and D. Weiskopf, “An Evaluation of Visual Search Support in Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598898.
    38. V. Schwind, P. Knierim, L. L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY), B. A. M. Schouten, P. Markopoulos, Z. O. Toups, P. A. Cairns, and T. Bekker, Eds., in Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY). ACM, 2017, pp. 507–515. doi: 10.1145/3116595.3116596.
    39. K. de Winkel, A. Nesti, H. Ayaz, and H. H. Bülthoff, “Neural Correlates of Decision Making on Whole Body Yaw Rotation: an fNIRS Study,” Neuroscience Letters, vol. 654, pp. 56–62, 2017, doi: 10.1016/j.neulet.2017.04.053.
    40. D. Fritsch, “Photogrammetrische Auswertung digitaler Bilder – Neue Methoden der Kamerakalibration, dichten Bildzuordnung und Interpretation von Punktwolken,” in Photogrammetrie und Fernerkundung, C. Heipke, Ed., in Photogrammetrie und Fernerkundung. Springer Spektrum, 2017, pp. 157–196. doi: 10.1007/978-3-662-47094-7_41.
    41. X. Zhang, Y. Sugano, M. Fritz, and A. Bulling, “MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 1, Art. no. 1, 2017, [Online]. Available: https://ieeexplore.ieee.org/abstract/document/8122058
    42. V. Hosu et al., “The Konstanz natural video database (KoNViD-1k),” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2017, pp. 1–6. [Online]. Available: https://ieeexplore.ieee.org/document/7965673
    43. S. Egger-Lampl et al., “Crowdsourcing Quality of Experience Experiments,” in Information Systems and Applications, incl. Internet/Web, and HCI, D. Archambault, H. Purchase, and T. Hossfeld, Eds., in Information Systems and Applications, incl. Internet/Web, and HCI. Springer International Publishing, 2017, pp. 154–190.
    44. J. Zagermann, U. Pfeil, C. Acevedo, and H. Reiterer, “Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-device Workspace,” in Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM), in Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM). 2017, pp. 249–259. doi: 10.1145/3152832.3152855.
    45. M. de Ridder, K. Klein, and J. Kim, “Temporaltracks: Visual Analytics for Exploration of 4D fMRI Time-series Coactivation,” in Proceedings of the Computer Graphics International Conference (CGI), X. Mao, D. Thalmann, and M. L. Gavrilova, Eds., in Proceedings of the Computer Graphics International Conference (CGI). ACM, 2017, pp. 13:1-13:6. doi: 10.1145/3095140.3095153.
    46. J. Zagermann, U. Pfeil, D. I. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-based Input Modalities on Spatial Memory,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2017, pp. 1899–1910. doi: 10.1145/3025453.3026001.
    47. L. L. Chuang, C. Glatz, and S. S. Krupenia, “Using EEG to Understand why Behavior to Auditory In-vehicle Notifications Differs Across Test Environments,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), S. Boll, B. Pfleging, B. Donmez, I. Politis, and D. R. Large, Eds., in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI). ACM, 2017, pp. 123–133. doi: 10.1145/3122986.3123017.
    48. H. T. Nim et al., “Design Considerations for Immersive Analytics of Bird Movements Obtained by Miniaturised GPS Sensors,” in Proceedings of the Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM), in Proceedings of the Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM). Eurographics Association, 2017. doi: 10.2312/vcbm.20171234.
    49. M. Correll and J. Heer, “Surprise! Bayesian Weighting for De-Biasing Thematic Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg23.html#CorrellH17
    50. D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), vol. 3. 2017, pp. 164–175. [Online]. Available: https://bib.dbvis.de/publications/details/697
    51. N. Rodrigues et al., “Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI). 2017, pp. 37–44. doi: 10.1145/3105971.3105982.
    52. D. Bahrdt et al., “Growing Balls in ℝᵈ,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), S. P. Fekete and V. Ramachandran, Eds., in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX). SIAM, 2017, pp. 247–258. doi: 10.1137/1.9781611974768.20.
    53. V. Bruder, S. Frey, and T. Ertl, “Prediction-Based Load Balancing and Resolution Tuning for Interactive Volume Raycasting,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.09.001.
    54. L. Merino et al., “On the Impact of the Medium in the Effectiveness of 3D Software Visualizations,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT). IEEE, 2017, pp. 11–21. [Online]. Available: https://ieeexplore.ieee.org/document/8091182
    55. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications. ETVIS 2015, M. Burch, L. L. Chuang, B. Fisher, A. Schmidt, and D. Weiskopf, Eds., in Eye Tracking and Visualization: Foundations, Techniques, and Applications. ETVIS 2015. Springer International Publishing, 2017, pp. 199–216. doi: 10.1007/978-3-319-47024-5_12.
    56. K. Kurzhals, M. Stoll, A. Bruhn, and D. Weiskopf, “FlowBrush: Optical Flow Art,” in Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (EXPRESSIVE, co-located with SIGGRAPH), in Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (EXPRESSIVE, co-located with SIGGRAPH). 2017, pp. 1:1-1:9. doi: 10.1145/3092912.3092914.
    57. P. Knierim et al., “Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA). ACM, 2017, pp. 433–436. doi: 10.1145/3027063.3050426.
    58. P. Tutzauer, S. Becker, and N. Haala, “Perceptual Rules for Building Enhancements in 3d Virtual Worlds,” i-com, vol. 16, no. 3, Art. no. 3, 2017, doi: 10.1515/icom-2017-0022.
    59. H. Sattar, A. Bulling, and M. Fritz, “Predicting the Category and Attributes of Visual Search Targets Using Deep Gaze Pooling,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), in Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW). 2017, pp. 2740–2748. [Online]. Available: https://ieeexplore.ieee.org/document/8265534
    60. M. Tonsen, J. Steil, Y. Sugano, and A. Bulling, “InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation,” in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1. 2017, pp. 106:1-106:21. doi: 10.1145/3130971.
    61. S. Funke, N. Schnelle, and S. Storandt, “URAN: A Unified Data Structure for Rendering and Navigation,” in Web and Wireless Geographical Information Systems. W2GIS 2017. Lecture Notes in Computer Science, vol. 10181, D. Brosset, C. Claramunt, X. Li, and T. Wang, Eds., in Web and Wireless Geographical Information Systems. W2GIS 2017. Lecture Notes in Computer Science, vol. 10181. 2017, pp. 66–82. doi: 10.1007/978-3-319-55998-8_5.
    62. N. Marniok, O. Johannsen, and B. Goldluecke, “An Efficient Octree Design for Local Variational Range Image Fusion,” in Pattern Recognition. GCPR 2017. Lecture Notes in Computer Science, vol. 10496, V. Roth and T. Vetter, Eds., in Pattern Recognition. GCPR 2017. Lecture Notes in Computer Science, vol. 10496. Springer International Publishing, 2017, pp. 401–412. doi: 10.1007/978-3-319-66709-6_32.
    63. M. Stein et al., “Bring it to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis,” in IEEE Transactions on Visualization and Computer Graphics, in IEEE Transactions on Visualization and Computer Graphics, vol. 24. 2017, pp. 13–22. [Online]. Available: https://ieeexplore.ieee.org/document/8019849
    64. H. Ben Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP). 2017, pp. 1–7.
    65. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive Regularisation for Variational Optical Flow: Global, Local and in Between,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, F. Lauze, Y. Dong, and A. B. Dahl, Eds., in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol. 10302. Springer International Publishing, 2017, pp. 550–562. doi: 10.1007/978-3-319-58771-4_44.
    66. D. Maurer, M. Stoll, S. Volz, P. Gairing, and A. Bruhn, “A Comparison of Isotropic and Anisotropic Second Order Regularisers for Optical Flow,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol. 10302, F. Lauze, Y. Dong, and A. B. Dahl, Eds., in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol. 10302. Springer International Publishing, 2017, pp. 537–549. doi: 10.1007/978-3-319-58771-4_43.
    67. M. van Garderen, B. Pampel, A. Nocaj, and U. Brandes, “Minimum-Displacement Overlap Removal for Geo-referenced Data Visualization,” Computer Graphics Forum, vol. 36, no. 3, Art. no. 3, 2017.
    68. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, [Online]. Available: https://ieeexplore.ieee.org/document/7539644
    69. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” in Informatics, in Informatics, vol. 4. Multidisciplinary Digital Publishing Institute (MDPI), 2017, p. 27. doi: 10.3390/informatics4030027.
    70. H. V. Le, V. Schwind, P. Göttlich, and N. Henze, “PredicTouch: A System to Reduce Touchscreen Latency using Neural Networks and Inertial Measurement Units,” in Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS), ACM, Ed., in Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS), vol. 17. ACM, 2017, pp. 230–239. doi: 10.1145/3132272.3134138.
    71. V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze, “‘These are not my hands!’: Effect of Gender on the Perception of Avatar Hands in Virtual Reality,” Proceedings of the 2017 Conference on Human Factors in Computing Systems (CHI’17), pp. 1577–1582, 2017, doi: 10.1145/3025453.3025602.
    72. M. Burch, M. Hlawatsch, and D. Weiskopf, “Visualizing a Sequence of a Thousand Graphs (or Even More),” Computer Graphics Forum, vol. 36, no. 3, Art. no. 3, 2017, doi: 10.1111/cgf.13185.
    73. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of the Symposium on Non-Photorealistic Animation and Rendering (NPAR), ACM, Ed., in Proceedings of the Symposium on Non-Photorealistic Animation and Rendering (NPAR). Association for Computing Machinery, 2017, pp. 8:1-8:10. doi: 10.1145/3092919.3092923.
    74. J. Karolus, P. W. Woźniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2017, pp. 2998–3010. doi: 10.1145/3025453.3025601.
    75. H. Booth, C. Schätzle, K. Börjars, and M. Butt, “Dative Subjects and the Rise of Positional Licensing in Icelandic,” in Proceedings of the LFG’17 Conference, in Proceedings of the LFG’17 Conference. 2017, pp. 104–124. [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2017/lfg2017-bsbb.pdf
    76. C. Schätzle, M. Hund, F. L. Dennig, M. Butt, and D. A. Keim, “HistoBankVis: Detecting Language Change via Data Visualization,” in Proceedings of the NoDaLiDa 2017 Workshop Processing Historical Language, G. Bouma and Y. Adesam, Eds., in Proceedings of the NoDaLiDa 2017 Workshop Processing Historical Language. Linköping University Electronic Press, 2017, pp. 32–39. [Online]. Available: https://www.aclweb.org/anthology/W17-0507
    77. T.-K. Machulla, L. L. Chuang, F. Kiss, M. O. Ernst, and A. Schmidt, “Sensory Amplification Through Crossmodal Stimulation,” in Proceedings of the CHI Workshop on Amplification and Augmentation of Human Perception, in Proceedings of the CHI Workshop on Amplification and Augmentation of Human Perception. 2017.
    78. Y. Abdelrahman, P. Knierim, P. W. Woźniak, N. Henze, and A. Schmidt, “See Through the Fire: Evaluating the Augmentation of Visual Perception of Firefighters Using Depth and Thermal Cameras,” in Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and Symposium on Wearable Computers (UbiComp/ISWC), S. C. Lee, L. Takayama, and K. N. Truong, Eds., in Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and Symposium on Wearable Computers (UbiComp/ISWC). ACM, 2017, pp. 693–696. doi: 10.1145/3123024.3129269.
    79. T. Dingler, A. Schmidt, and T. Machulla, “Building Cognition-Aware Systems: A Mobile Toolkit for Extracting Time-of-Day Fluctuations of Cognitive Performance,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1, no. 3, Art. no. 3, 2017, doi: 10.1145/3132025.
  9. 2016

    1. D. Weiskopf, M. Burch, L. L. Chuang, B. Fisher, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016. [Online]. Available: https://www.springer.com/de/book/9783319470238
    2. M. Herschel and M. Hlawatsch, “Provenance: On and Behind the Screens,” in Proceedings of the ACM International Conference on the Management of Data (SIGMOD), F. Özcan, G. Koutrika, and S. Madden, Eds., in Proceedings of the ACM International Conference on the Management of Data (SIGMOD). ACM, 2016, pp. 2213–2217. doi: 10.1145/2882903.2912568.
    3. V. Bruder, S. Frey, and T. Ertl, “Real-Time Performance Prediction and Tuning for Interactive Volume Raycasting,” in Proceedings of the SIGGRAPH Asia Symposium on Visualization, ACM, Ed., in Proceedings of the SIGGRAPH Asia Symposium on Visualization, vol. 2016. ACM, 2016, pp. 1–8. doi: 10.1145/3002151.3002156.
    4. M. Aupetit and M. Sedlmair, “SepMe: 2002 New Visual Separation Measures,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), C. Hansen, I. Viola, and X. Yuan, Eds., in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis). IEEE, 2016, pp. 1–8. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/7465244
    5. M. Hund et al., “Visual Analytics for Concept Exploration in Subspaces of Patient Groups,” Brain Informatics, vol. 3, no. 4, Art. no. 4, 2016, doi: 10.1007/s40708-016-0043-5.
    6. A. Voit, T. Machulla, D. Weber, V. Schwind, S. Schneegaß, and N. Henze, “Exploring Notifications in Smart Home Environments,” in Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI), ACM, Ed., in Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI). 2016, pp. 942–947. doi: 10.1145/2957265.2962661.
    7. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2016, doi: 10.5220/0005679601950202.
    8. V. Schwind and S. Jäger, “The Uncanny Valley and the Importance of Eye Contact,” in Mensch und Computer 2015 - Tagungsband, in Mensch und Computer 2015 - Tagungsband, vol. 2015. Oldenbourg Wissenschaftsverlag, 2016, pp. 153–162. doi: 10.1515/icom-2016-0001.
    9. T. Dingler, R. Rzayev, V. Schwind, and N. Henze, “RSVP on the go - Implicit Reading Support on Smart Watches Through Eye Tracking,” in Proceedings of the ACM International Symposium on Wearable Computers (ISWC), ACM, Ed., in Proceedings of the ACM International Symposium on Wearable Computers (ISWC). 2016, pp. 116–119. doi: 10.1145/2971763.2971794.
    10. P. Xu, Y. Sugano, and A. Bulling, “Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, in Proceedings of the CHI Conference on Human Factors in Computing Systems. 2016, pp. 3299–3310.
    11. A. Barth and A. Stein, “Approximation and simulation of infinite-dimensional Lévy processes,” Stochastics and Partial Differential Equations: Analysis and Computations, vol. 6, no. 2, Art. no. 2, 2016, doi: 10.1007/s40072-017-0109-2.
    12. J. Zagermann, U. Pfeil, R. Rädle, H.-C. Jetter, C. N. Klokmose, and H. Reiterer, “When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 5470–5481. doi: 10.1145/2858036.2858224.
    13. T. Waltemate et al., “The Impact of Latency on Perceptual Judgments and Motor Performance in Closed-loop Interaction in Virtual Reality,” in Proceedings of the ACM Conference on Virtual Reality Software and Technology (VRST), D. Kranzlmüller and G. Klinker, Eds., in Proceedings of the ACM Conference on Virtual Reality Software and Technology (VRST). ACM, 2016, pp. 27–35. doi: 10.1145/2993369.2993381.
    14. K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), ACM, Ed., in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), vol. 1. ACM, 2016, pp. 11–18. doi: 10.1145/2857491.2857507.
    15. C. Schulz et al., “Generative Data Models for Validation and Evaluation of Visualization Techniques,” in Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), in Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV). ACM, 2016, pp. 112–124. doi: 10.1145/2993901.2993907.
    16. R. Netzel and D. Weiskopf, “Hilbert Attention Maps for Visualizing Spatiotemporal Gaze Data,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS). 2016, pp. 21–25. [Online]. Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7851160
    17. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual analysis and coding of data-rich user behavior,” in IEEE Conference on Visual Analytics Science and Technology, in IEEE Conference on Visual Analytics Science and Technology. IEEE, 2016, pp. 141–150. doi: 10.1109/vast.2016.7883520.
    18. R. Netzel, M. Burch, and D. Weiskopf, “User Performance and Reading Strategies for Metro Maps: An Eye Tracking Study,” Special Issue on Eye Tracking for Spatial Research in Spatial Cognition and Computation: An Interdisciplinary Journal, 2016, doi: 10.1080/13875868.2016.1226839.
    19. L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen arrangements and interaction areas for large display work places,” in PerDis ’16 Proceedings of the 5th ACM International Symposium on Pervasive Displays, ACM, Ed., in PerDis ’16 Proceedings of the 5th ACM International Symposium on Pervasive Displays, vol. 5. ACM, 2016, pp. 228–234. doi: 10.1145/2914920.2915027.
    20. L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze, “MAGIC-Pointing on Large High-Resolution Displays,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), ACM, Ed., in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA). ACM, 2016, pp. 1706–1712. doi: 10.1145/2851581.2892479.
    21. S. Butscher and H. Reiterer, “Applying Guidelines for the Design of Distortions on Focus+Context Interfaces,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), P. Buono, R. Lanzilotti, M. Matera, and M. F. Costabile, Eds., in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI). ACM, 2016, pp. 244–247. doi: 10.1145/2909132.2909284.
    22. E. Wood, T. Baltrusaitis, L.-P. Morency, P. Robinson, and A. Bulling, “A 3D Morphable Eye Region Model for Gaze Estimation,” in Proceedings of the European Conference on Computer Vision (ECCV), in Proceedings of the European Conference on Computer Vision (ECCV). 2016, pp. 297–313. [Online]. Available: https://link.springer.com/chapter/10.1007%2F978-3-319-46448-0_18
    23. E. Wood, T. Baltrusaitis, L.-P. Morency, P. Robinson, and A. Bulling, “Learning an Appearance-Based Gaze Estimator from One Million Synthesised Images,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). 2016, pp. 131–138. doi: 10.1145/2857491.2857492.
    24. P. Tutzauer, S. Becker, T. Niese, O. Deussen, and D. Fritsch, “Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS), pp. 683–687, 2016, [Online]. Available: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B2/683/2016/isprs-archives-XLI-B2-683-2016.pdf
    25. A. Nocaj, M. Ortmann, and U. Brandes, “Adaptive Disentanglement Based on Local Clustering in Small-World Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 6, Art. no. 6, 2016, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg22.html#NocajOB16
    26. S. Funke, A. Nusser, and S. Storandt, “On k-Path Covers and their Applications,” VLDB Journal, vol. 25, no. 1, Art. no. 1, 2016, doi: 10.1007/s00778-015-0392-3.
    27. A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller, “Multi-Similarity Matrices of Eye Movement Data,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS). 2016, pp. 26–30. [Online]. Available: https://ieeexplore.ieee.org/document/7851161
    28. D. Maurer, Y.-C. Ju, M. Breuß, and A. Bruhn, “Combining shape from shading and stereo: a variational approach for the joint estimation of depth, illumination and albedo,” in Proceedings of the British Machine Vision Conference (BMVC), in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2016.
    29. K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf, “Gaze Stripes: Image-Based Visualization of Eye Tracking Data,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, doi: 10.1109/TVCG.2015.2468091.
    30. J. Karolus, P. W. Woźniak, and L. L. Chuang, “Towards Using Gaze Properties to Detect Language Proficiency,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI), in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI). New York, NY, USA: ACM, 2016, pp. 118:1-118:6. doi: 10.1145/2971485.2996753.
    31. X. Zhang, Y. Sugano, M. Fritz, and A. Bulling, “It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016, pp. 2299–2308. [Online]. Available: https://ieeexplore.ieee.org/document/8015018
    32. V. Hosu, F. Hahn, I. Zingman, and D. Saupe, “Reported Attention as a Promising Alternative to Gaze in IQA Tasks,” in Proceedings of the 5th ISCA/DEGA Workshop on Perceptual Quality of Systems (PQS 2016), in Proceedings of the 5th ISCA/DEGA Workshop on Perceptual Quality of Systems (PQS 2016). 2016, pp. 117–121. [Online]. Available: https://www.isca-speech.org/archive/PQS_2016/abstracts/25.html
    33. V. Hosu, F. Hahn, O. Wiedemann, S.-H. Jung, and D. Saupe, “Saliency-driven Image Coding Improves Overall Perceived JPEG Quality,” in Proceedings of the Picture Coding Symposium (PCS), in Proceedings of the Picture Coding Symposium (PCS). IEEE, 2016, pp. 1–5. [Online]. Available: https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/HoHaWi16.pdf
    34. J. Müller, R. Rädle, and H. Reiterer, “Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 1245–1249. doi: 10.1145/2858036.2858043.
    35. J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring Cognitive Load using Eye Tracking Technology in Visual Computing,” in Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), M. Sedlmair, P. Isenberg, T. Isenberg, N. Mahyar, and H. Lam, Eds., in Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV). ACM, 2016, pp. 78–85. doi: 10.1145/2993901.2993908.
    36. M. Greis, P. El Agroudy, H. Schuff, T. Machulla, and A. Schmidt, “Decision-Making under Uncertainty: How the Amount of Presented Uncertainty Influences User Behavior,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI), ACM, Ed., in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI), vol. 2016. 2016. doi: 10.1145/2971485.2971535.
    37. S. Frey and T. Ertl, “Auto-Tuning Intermediate Representations for In Situ Visualization,” in Proceedings of the New York Scientific Data Summit (NYSDS), in Proceedings of the New York Scientific Data Summit (NYSDS). IEEE, 2016, pp. 1–10. [Online]. Available: https://ieeexplore.ieee.org/document/7747807
    38. S. Cheng and K. Mueller, “The Data Context Map: Fusing Data and Attributes into a Unified Display,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg22.html#ChengM16
    39. M. Correll and J. Heer, “Black Hat Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, [Online]. Available: http://idl.cs.washington.edu/files/2017-BlackHatVis-DECISIVe.pdf
    40. A. Barth and F. G. Fuchs, “Uncertainty Quantification for Hyperbolic Conservation Laws with Flux Coefficients Given by Spatiotemporal Random Fields,” SIAM Journal on Scientific Computing, vol. 38, no. 4, Art. no. 4, 2016, doi: 10.1137/15M1027723.
    41. J. Hildenbrand, A. Nocaj, and U. Brandes, “Flexible Level-of-Detail Rendering for Large Graphs,” no. 9801, Y. Hu and M. Nöllenburg, Eds., 2016. [Online]. Available: https://link.springer.com/content/pdf/bbm%3A978-3-319-50106-2%2F1.pdf
    42. O. Johannsen, A. Sulc, N. Marniok, and B. Goldluecke, “Layered Scene Reconstruction from Multiple Light Field Camera Views,” in Computer Vision – ACCV 2016. ACCV 2016. Lecture Notes in Computer Science, vol. 10113, S.-H. Lai, V. Lepetit, K. Nishino, and Y. Sato, Eds., in Computer Vision – ACCV 2016. ACCV 2016. Lecture Notes in Computer Science, vol. 10113. Springer International Publishing, 2016, pp. 3–18. doi: 10.1007/978-3-319-54187-7_1.
    43. B. Pfleging, D. K. Fekety, A. Schmidt, and A. L. Kun, “A Model Relating Pupil Diameter to Mental Workload and Lighting Conditions,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 5776–5788. doi: 10.1145/2858036.2858117.
    44. C. Schätzle and D. Sacha, “Visualizing Language Change: Dative Subjects in Icelandic,” in Proceedings of the LREC 2016 Workshop VisLRII: Visualization as Added Value in the Development, Use and Evaluation of Language Resources, in Proceedings of the LREC 2016 Workshop VisLRII: Visualization as Added Value in the Development, Use and Evaluation of Language Resources. 2016, pp. 8–15. [Online]. Available: http://www.lrec-conf.org/proceedings/lrec2016/workshops/LREC2016Workshop-VisLR%20II_Proceedings.pdf
    45. D. Sacha et al., “Human-Centered Machine Learning Through Interactive Visualization: Review and Open Challenges,” in Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), in Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN). 2016. [Online]. Available: http://dblp.uni-trier.de/db/conf/esann/esann2016.html#SachaSZLWNK16
    46. K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,” Information Visualization, vol. 15, no. 4, Art. no. 4, 2016, doi: 10.1177/1473871615609787.
    47. R. Netzel, M. Burch, and D. Weiskopf, “Interactive Scanpath-Oriented Annotation of Fixations,” Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, pp. 183–187, 2016, doi: 10.1145/2857491.2857498.
    48. N. Flad, J. C. Ditz, A. Schmidt, H. H. Bülthoff, and L. L. Chuang, “Data-Driven Approaches to Unrestricted Gaze-Tracking Benefit from Saccade Filtering,” in Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS), M. Burch, L. L. Chuang, and A. T. Duchowski, Eds., in Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS). IEEE, 2016, pp. 1–5. [Online]. Available: https://ieeexplore.ieee.org/document/7851156
    49. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Steering Demands Diminish the Early-P3, Late-P3 and RON Components of the Event-Related Potential of Task-Irrelevant Environmental Sounds,” Frontiers in Human Neuroscience, vol. 10, pp. 73:1-73:15, 2016, doi: 10.3389/fnhum.2016.00073.
    50. P. Tutzauer, S. Becker, D. Fritsch, T. Niese, and O. Deussen, “A Study of the Human Comprehension of Building Categories Based on Different 3D Building Representations,” Photogrammetrie - Fernerkundung - Geoinformation, vol. 2016, pp. 319–333, 2016, doi: 10.1127/pfg/2016/0302.
    51. I. Zingman, D. Saupe, O. A. B. Penatti, and K. Lambers, “Detection of Fragmented Rectangular Enclosures in Very High Resolution Remote Sensing Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 8, Art. no. 8, 2016, [Online]. Available: https://ieeexplore.ieee.org/document/7452408
    52. D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li, “Crowd Workers Proven Useful: A Comparative Study of Subjective Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). 2016, pp. 1–2. [Online]. Available: https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/SaHaHo16.pdf
    53. A. Barth, R. Bürger, I. Kröker, and C. Rohde, “Computational Uncertainty Quantification for a Clarifier-thickener Model with Several Random Perturbations: A Hybrid Stochastic Galerkin Approach,” Computers & Chemical Engineering, vol. 89, pp. 11–26, 2016, doi: 10.1016/j.compchemeng.2016.02.016.
    54. S. Funke, F. Krumpe, and S. Storandt, “Crushing Disks Efficiently,” in Combinatorial Algorithms. IWOCA 2016. Lecture Notes in Computer Science, vol. 9843, V. Mäkinen, S. J. Puglisi, and L. Salmela, Eds., in Combinatorial Algorithms. IWOCA 2016. Lecture Notes in Computer Science, vol. 9843. Springer International Publishing, 2016, pp. 43–54. doi: 10.1007/978-3-319-44543-4_4.
  10. 2015

    1. L. L. Chuang and H. H. Bülthoff, “Towards a Better Understanding of Gaze Behavior in the Automobile,” in Position papers of the workshops at AutomotiveUI’15, in Position papers of the workshops at AutomotiveUI’15. Sep. 2015. [Online]. Available: https://www.auto-ui.org/15/p/workshops/2/8_Towards%20a%20Better%20Understanding%20of%20Gaze%20Behavior%20in%20the%20Automobile_Chuang.pdf
    2. T. Chandler et al., “Immersive Analytics,” in Proceedings of Big Data Visual Analytics (BDVA), IEEE, Sep. 2015. doi: 10.1109/bdva.2015.7314296.
    3. N. Flad, T. Fomina, H. H. Bülthoff, and L. L. Chuang, “Unsupervised Clustering of EOG as a Viable Substitute for Optical Eye Tracking,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications, M. Burch, L. L. Chuang, B. D. Fisher, A. Schmidt, and D. Weiskopf, Eds., in Eye Tracking and Visualization: Foundations, Techniques, and Applications. Springer International Publishing, 2015, pp. 151–167. doi: 10.1007/978-3-319-47024-5_9.
    4. S. Frey, F. Sadlo, and T. Ertl, “Balanced Sampling and Compression for Remote Visualization,” in Proceedings of the SIGGRAPH Asia Symposium on High Performance Computing, in Proceedings of the SIGGRAPH Asia Symposium on High Performance Computing. ACM, 2015, pp. 1–4. doi: 10.1145/2818517.2818529.
    5. L. Lischke et al., “Using Space: Effect of Display Size on Users’ Search Performance,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), B. Begole, J. Kim, K. Inkpen, and W. Woo, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA). ACM, 2015, pp. 1845–1850. doi: 10.1145/2702613.2732845.
    6. M. Sedlmair and M. Aupetit, “Data-driven Evaluation of Visual Quality Measures,” Computer Graphics Forum, vol. 34, no. 3, Art. no. 3, 2015, doi: 10.5555/2858877.2858899.
    7. C. Schulz, M. Burch, and D. Weiskopf, “Visual Data Cleansing of Eye Tracking Data,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS). 2015. [Online]. Available: http://etvis.visus.uni-stuttgart.de/etvis2015/papers/etvis15_schulz.pdf
    8. L. Lischke, J. Grüninger, K. Klouche, A. Schmidt, P. Slusallek, and G. Jacucci, “Interaction Techniques for Wall-Sized Screens,” Proceedings of the International Conference on Interactive Tabletops & Surfaces (ITS), pp. 501–504, 2015, doi: 10.1145/2817721.2835071.
    9. L. Lischke, P. Knierim, and H. Klinke, “Mid-Air Gestures for Window Management on Large Displays,” in Mensch und Computer 2015 – Tagungsband (MuC), D. G. Oldenbourg, Ed., in Mensch und Computer 2015 – Tagungsband (MuC). De Gruyter, 2015, pp. 439–442. doi: 10.1515/9783110443929-072.
    10. L. L. Chuang, “Error Visualization and Information-Seeking Behavior for Air-Vehicle Control,” in Foundations of Augmented Cognition. AC 2015. Lecture Notes in Computer Science, vol. 9183, D. Schmorrow and C. M. Fidopiastis, Eds., in Foundations of Augmented Cognition. AC 2015. Lecture Notes in Computer Science, vol. 9183. Springer, 2015, pp. 3–11. doi: 10.1007/978-3-319-20816-9_1.
    11. K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-based Visualization,” Computing in Science & Engineering, vol. 17, no. 5, Art. no. 5, 2015, doi: 10.1109/MCSE.2015.93.
    12. M. Hund et al., “Subspace Nearest Neighbor Search - Problem Statement, Approaches, and Discussion,” in Similarity Search and Applications. International Conference on Similarity Search and Applications (SISAP). Lecture Notes in Computer Science, vol. 9371, G. Amato, R. Connor, F. Falchi, and C. Gennaro, Eds., in Similarity Search and Applications. International Conference on Similarity Search and Applications (SISAP). Lecture Notes in Computer Science, vol. 9371. Springer, Cham, 2015, pp. 307–313. [Online]. Available: https://link.springer.com/chapter/10.1007%2F978-3-319-25087-8_29
    13. M. Spicker, J. Kratt, D. Arellano, and O. Deussen, “Depth-aware Coherent Line Drawings,” in Proceedings of the SIGGRAPH Asia Symposium on Computer Graphics and Interactive Techniques, Technical Briefs, in Proceedings of the SIGGRAPH Asia Symposium on Computer Graphics and Interactive Techniques, Technical Briefs. ACM, 2015, pp. 1:1-1:5. doi: 10.1145/2820903.2820909.

Project Group A
Models and Measures (Completed)

Project Group B
Adaptive Algorithms (Completed)

Project Group C
Interaction (Completed)

Project Group D
Applications (Completed)