Selected Paper Awards & Personal Awards

For more awards, please browse our news section.

All Publications

  1. 2023

    1. K.-T. Chen et al., “Reading Strategies for Graph Visualizations That Wrap Around in Torus Topology,” in Proceedings of the 2023 Symposium on Eye Tracking Research and Applications. Tübingen, Germany: Association for Computing Machinery, 2023. doi: 10.1145/3588015.3589841.
    2. N. Doerr, K. Angerbauer, M. Reinelt, and M. Sedlmair, “Bees, Birds and Butterflies: Investigating the Influence of Distractors on Visual Attention Guidance Techniques,” in Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. Hamburg, Germany: Association for Computing Machinery, 2023. doi: 10.1145/3544549.3585816.
    3. T. Ge et al., “Optimally Ordered Orthogonal Neighbor Joining Trees for Hierarchical Cluster Analysis,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–13, 2023, doi: 10.1109/TVCG.2023.3284499.
    4. S. Hubenschmid, J. Zagermann, D. Leicht, H. Reiterer, and T. Feuchtner, “ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). New York, NY, USA: ACM, 2023. doi: 10.1145/3544548.3581438.
    5. D. Hägele, T. Krake, and D. Weiskopf, “Uncertainty-Aware Multidimensional Scaling,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Art. no. 1, 2023, doi: 10.1109/TVCG.2022.3209420.
    6. T. Kosch, J. Karolus, J. Zagermann, H. Reiterer, A. Schmidt, and P. W. Woźniak, “A Survey on Measuring Cognitive Workload in Human-Computer Interaction,” ACM Comput. Surv., Jan. 2023, doi: 10.1145/3582272.
    7. L. Mehl, A. Jahedi, J. Schmalfuss, and A. Bruhn, “M-FUSE: Multi-frame Fusion for Scene Flow Estimation,” in Proc. Winter Conference on Applications of Computer Vision (WACV), Jan. 2023. doi: 10.48550/arXiv.2207.05704.
    8. C. Morariu, A. Bibal, R. Cutura, B. Frénay, and M. Sedlmair, “Predicting User Preferences of Dimensionality Reduction Embedding Quality,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Art. no. 1, 2023, doi: 10.1109/TVCG.2022.3209449.
    9. P. Paetzold, R. Kehlbeck, H. Strobelt, Y. Xue, S. Storandt, and O. Deussen, “RectEuler: Visualizing Intersecting Sets using Rectangles,” Computer Graphics Forum, vol. 42, no. 3, Art. no. 3, 2023, doi: 10.1111/cgf.14814.
    10. N. Rodrigues, C. Schulz, S. Döring, D. Baumgartner, T. Krake, and D. Weiskopf, “Relaxed Dot Plots: Faithful Visualization of Samples and Their Distribution,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 1, Art. no. 1, Jan. 2023, doi: 10.1109/TVCG.2022.3209429.
    11. J. Schmalfuss, E. Scheurer, H. Zhao, N. Karantzas, A. Bruhn, and D. Labate, “Blind image inpainting with sparse directional filter dictionaries for lightweight CNNs,” Journal of Mathematical Imaging and Vision (JMIV), vol. 65, pp. 323–339, 2023, doi: 10.1007/s10851-022-01119-6.
    12. E. Sood, L. Shi, M. Bortoletto, Y. Wang, P. Müller, and A. Bulling, “Improving Neural Saliency Prediction with a Cognitive Model of Human Visual Attention,” in Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci), Jul. 2023, pp. 3639–3646. [Online]. Available: https://escholarship.org/uc/item/5968p71m
    13. Y. Wang, M. Bâce, and A. Bulling, “Scanpath Prediction on Information Visualisations,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–15, Feb. 2023, doi: 10.1109/TVCG.2023.3242293.
  2. 2022

    1. M. Abdelaal, N. D. Schiele, K. Angerbauer, K. Kurzhals, M. Sedlmair, and D. Weiskopf, “Comparative Evaluation of Bipartite, Node-Link, and Matrix-Based Network Representations,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–11, 2022, doi: 10.1109/TVCG.2022.3209427.
    2. K. Angerbauer et al., “Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. New Orleans, LA, USA: Association for Computing Machinery, 2022. doi: 10.1145/3491102.3502133.
    3. K. Angerbauer and M. Sedlmair, “Toward Inclusion and Accessibility in Visualization Research: Speculations on Challenges, Solution Strategies, and Calls for Action (Position Paper),” in 2022 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), Oct. 2022, pp. 20–27. doi: 10.1109/BELIV57783.2022.00007.
    4. P. Balestrucci, D. Wiebusch, and M. O. Ernst, “ReActLab: A Custom Framework for Sensorimotor Experiments ‘in-the-wild,’” Frontiers in Psychology, vol. 13, Jun. 2022, doi: 10.3389/fpsyg.2022.906643.
    5. M. Becher et al., “Situated Visual Analysis and Live Monitoring for Manufacturing,” IEEE Computer Graphics and Applications, pp. 1–1, 2022, doi: 10.1109/MCG.2022.3157961.
    6. D. Bienroth et al., “Spatially resolved transcriptomics in immersive environments,” Visual Computing for Industry, Biomedicine, and Art, vol. 5, no. 1, Art. no. 1, 2022, doi: 10.1186/s42492-021-00098-6.
    7. V. Bruder, M. Larsen, T. Ertl, H. Childs, and S. Frey, “A Hybrid In Situ Approach for Cost Efficient Image Database Generation,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2022, doi: 10.1109/TVCG.2022.3169590.
    8. F. Chiossi, R. Welsch, S. Villa, L. Chuang, and S. Mayer, “Virtual Reality Adaptation Using Electrodermal Activity to Support the User Experience,” Big Data and Cognitive Computing, vol. 6, no. 2, Art. no. 2, 2022, doi: 10.3390/bdcc6020055.
    9. F. Chiossi et al., “Adapting visualizations and interfaces to the user,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0035.
    10. D. Dietz et al., “Walk This Beam: Impact of Different Balance Assistance Strategies and Height Exposure on Performance and Physiological Arousal in VR,” in 28th ACM Symposium on Virtual Reality Software and Technology, 2022, pp. 1–12. doi: 10.1145/3562939.3567818.
    11. S. Dosdall, K. Angerbauer, L. Merino, M. Sedlmair, and D. Weiskopf, “Toward In-Situ Authoring of Situated Visualization with Chorded Keyboards,” in 15th International Symposium on Visual Information Communication and Interaction, VINCI 2022, Chur, Switzerland, August 16-18, 2022, M. Burch, G. Wallner, and D. Limberger, Eds., ACM, 2022, pp. 1–5. doi: 10.1145/3554944.3554970.
    12. D. I. Fink, J. Zagermann, H. Reiterer, and H.-C. Jetter, “Re-Locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces,” Proc. ACM Hum.-Comput. Interact., vol. 6, no. ISS, Art. no. ISS, Nov. 2022, doi: 10.1145/3567709.
    13. P. Fleck, A. Sousa Calepso, S. Hubenschmid, M. Sedlmair, and D. Schmalstieg, “RagRug: A Toolkit for Situated Analytics,” IEEE Transactions on Visualization and Computer Graphics, 2022, doi: 10.1109/TVCG.2022.3157058.
    14. S. Frey et al., “Parameter Adaptation In Situ: Design Impacts and Trade-Offs,” in In Situ Visualization for Computational Science, H. Childs, J. C. Bennett, and C. Garth, Eds., Cham: Springer International Publishing, 2022, pp. 159–182. doi: 10.1007/978-3-030-81627-8_8.
    15. D. Garkov, C. Müller, M. Braun, D. Weiskopf, and F. Schreiber, “Research Data Curation in Visualization: Position Paper,” in 2022 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), M. Sedlmair, Ed., 2022, pp. 56–65. doi: 10.1109/BELIV57783.2022.00011.
    16. J. Görtler et al., “Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. New Orleans, LA, USA: Association for Computing Machinery, 2022, pp. 1–13. doi: 10.1145/3491102.3501823.
    17. F. Götz-Hahn, V. Hosu, and D. Saupe, “Critical Analysis on the Reproducibility of Visual Quality Assessment Using Deep Features,” PLoS ONE, vol. 17, no. 8, Art. no. 8, 2022, doi: 10.1371/journal.pone.0269715.
    18. A. Huang, P. Knierim, F. Chiossi, L. L. Chuang, and R. Welsch, “Proxemics for Human-Agent Interaction in Augmented Reality,” in CHI Conference on Human Factors in Computing Systems, 2022, pp. 1–13. doi: 10.1145/3491102.3517593.
    19. S. Hubenschmid et al., “ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies,” in CHI Conference on Human Factors in Computing Systems (CHI ’22). New York, NY: ACM, 2022, pp. 1–20. doi: 10.1145/3491102.3517550.
    20. D. Hägele et al., “Uncertainty Visualization: Fundamentals and Recent Developments,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0033.
    21. A. Jahedi, L. Mehl, M. Rivinius, and A. Bruhn, “Multi-Scale RAFT: combining hierarchical concepts for learning-based optical flow estimation,” Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 1236–1240, Oct. 2022, doi: 10.1109/ICIP46576.2022.9898048.
    22. L. Joos, S. Jaeger-Honz, F. Schreiber, D. A. Keim, and K. Klein, “Visual Comparison of Networks in VR,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 11, Art. no. 11, 2022, doi: 10.1109/TVCG.2022.3203001.
    23. R. Kehlbeck, J. Görtler, Y. Wang, and O. Deussen, “SPEULER: Semantics-preserving Euler Diagrams,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 1, Art. no. 1, 2022, doi: 10.1109/TVCG.2021.3114834.
    24. K. Klein, M. Sedlmair, and F. Schreiber, “Immersive Analytics: An Overview,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0037.
    25. M. Koch, D. Weiskopf, and K. Kurzhals, “A Spiral into the Mind: Gaze Spiral Visualization for Mobile Eye Tracking,” Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 5, no. 2, Art. no. 2, May 2022, doi: 10.1145/3530795.
    26. T. Kosch, R. Welsch, L. Chuang, and A. Schmidt, “The Placebo Effect of Artificial Intelligence in Human-Computer Interaction,” ACM Transactions on Computer-Human Interaction, 2022, doi: 10.1145/3529225.
    27. T. Krake, A. Bruhn, B. Eberhardt, and D. Weiskopf, “Efficient and Robust Background Modeling with Dynamic Mode Decomposition,” Journal of Mathematical Imaging and Vision, 2022, doi: 10.1007/s10851-022-01068-0.
    28. T. Krake, D. Klötzl, B. Eberhardt, and D. Weiskopf, “Constrained Dynamic Mode Decomposition,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–11, 2022, doi: 10.1109/TVCG.2022.3209437.
    29. T. Krake, M. von Scheven, J. Gade, M. Abdelaal, D. Weiskopf, and M. Bischoff, “Efficient Update of Redundancy Matrices for Truss and Frame Structures,” Journal of Theoretical, Computational and Applied Mechanics, 2022, doi: 10.46298/jtcam.9615.
    30. H. Lin et al., “Large-Scale Crowdsourced Subjective Assessment of Picturewise Just Noticeable Difference,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 9, Art. no. 9, 2022, doi: 10.1109/TCSVT.2022.3163860.
    31. H. Lin, H. Men, Y. Yan, J. Ren, and D. Saupe, “Crowdsourced Quality Assessment of Enhanced Underwater Images - a Pilot Study,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, Sep. 2022, pp. 1–4. doi: 10.1109/QoMEX55416.2022.9900904.
    32. J. Lou, H. Lin, D. Marshall, D. Saupe, and H. Liu, “TranSalNet: Towards perceptually relevant visual saliency prediction,” Neurocomputing, vol. 494, pp. 455–467, 2022, doi: 10.1016/j.neucom.2022.04.080.
    33. C. Müller, M. Heinemann, D. Weiskopf, and T. Ertl, “Power Overwhelming: Quantifying the Energy Cost of Visualisation,” in Proceedings of the 2022 IEEE Workshop on Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), Oct. 2022, pp. 38–46. doi: 10.1109/BELIV57783.2022.00009.
    34. Q. Q. Ngo, F. L. Dennig, D. A. Keim, and M. Sedlmair, “Machine Learning Meets Visualization – Experiences and Lessons Learned,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0034.
    35. A. Niarakis et al., “Addressing barriers in comprehensiveness, accessibility, reusability, interoperability and reproducibility of computational models in systems biology,” Briefings in Bioinformatics, vol. 23, no. 4, Art. no. 4, 2022, doi: 10.1093/bib/bbac212.
    36. F. Petersen, B. Goldluecke, C. Borgelt, and O. Deussen, “GenDR: A Generalized Differentiable Renderer,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 3992–4001. doi: 10.1109/CVPR52688.2022.00397.
    37. F. Petersen, B. Goldluecke, O. Deussen, and H. Kuehne, “Style Agnostic 3D Reconstruction via Adversarial Style Transfer,” in 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, Jan. 2022, pp. 2273–2282. doi: 10.1109/WACV51458.2022.00233.
    38. M. Philipp, N. Bacher, S. Sauer, F. Mathis-Ullrich, and A. Bruhn, “From Chairs To Brains: Customizing Optical Flow For Surgical Activity Localization,” in Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI). IEEE, Mar. 2022, pp. 1–5. doi: 10.1109/ISBI52829.2022.9761704.
    39. G. Richer, A. Pister, M. Abdelaal, J.-D. Fekete, M. Sedlmair, and D. Weiskopf, “Scalability in Visualization,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–15, 2022, doi: 10.1109/TVCG.2022.3231230.
    40. N. Rodrigues, L. Shao, J. J. Yan, T. Schreck, and D. Weiskopf, “Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection,” in 2022 Symposium on Eye Tracking Research and Applications. Seattle, WA, USA: Association for Computing Machinery, 2022, pp. 59:1-59:7. doi: 10.1145/3517031.3531165.
    41. J. Schmalfuss, L. Mehl, and A. Bruhn, “Attacking Motion Estimation with Adversarial Snow,” in Proc. ECCV Workshop on Adversarial Robustness in the Real World (AROW), 2022. doi: 10.48550/arXiv.2210.11242.
    42. J. Schmalfuss, P. Scholze, and A. Bruhn, “A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow,” Proceedings of the European Conference on Computer Vision (ECCV), Oct. 2022.
    43. C. Schneegass, V. Füseschi, V. Konevych, and F. Draxler, “Investigating the Use of Task Resumption Cues to Support Learning in Interruption-Prone Environments,” Multimodal Technologies and Interaction, vol. 6, no. 1, Art. no. 1, 2022, doi: 10.3390/mti6010002.
    44. F. Schreiber and D. Weiskopf, “Quantitative Visual Computing,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0048.
    45. P. Schäfer, N. Rodrigues, D. Weiskopf, and S. Storandt, “Group Diagrams for Simplified Representation of Scanpaths,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI). ACM, Aug. 2022. doi: 10.1145/3554944.3554971.
    46. S. Su et al., “Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model,” CoRR, 2022, doi: 10.48550/ARXIV.2207.04904.
    47. H. Tarner, V. Bruder, T. Ertl, S. Frey, and F. Beck, “Visually Comparing Rendering Performance from Multiple Perspectives,” in Vision, Modeling, and Visualization, J. Bender, M. Botsch, and D. Keim, Eds., The Eurographics Association, 2022. doi: 10.2312/vmv.20221211.
    48. Y. Wang, C. Jiao, M. Bâce, and A. Bulling, “VisRecall: Quantifying Information Visualisation Recallability Via Question Answering,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 12, Art. no. 12, 2022, doi: 10.1109/TVCG.2022.3198163.
    49. Y. Wang, M. Koch, M. Bâce, D. Weiskopf, and A. Bulling, “Impact of Gaze Uncertainty on AOIs in Information Visualisations,” in 2022 Symposium on Eye Tracking Research and Applications. ACM, Jun. 2022, pp. 1–6. doi: 10.1145/3517031.3531166.
    50. D. Weiskopf, “Uncertainty Visualization: Concepts, Methods, and Applications in Biological Data Visualization,” Frontiers in Bioinformatics, vol. 2, 2022, doi: 10.3389/fbinf.2022.793819.
    51. J. Zagermann et al., “Complementary Interfaces for Visual Computing,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0031.
    52. M. Zameshina et al., “Fairness in generative modeling: do it unsupervised!,” in Proceedings of the Genetic and Evolutionary Computation Conference Companion. ACM, Jul. 2022, pp. 320–323. doi: 10.1145/3520304.3528992.
    53. Y. Zhang, K. Klein, O. Deussen, T. Gutschlag, and S. Storandt, “Robust Visualization of Trajectory Data,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0036.
  3. 2021

    1. M. Aichem et al., “Visual exploration of large metabolic models,” Bioinformatics, vol. 37, no. 23, Art. no. 23, May 2021, doi: 10.1093/bioinformatics/btab335.
    2. P. Balestrucci, V. Maffei, F. Lacquaniti, and A. Moscatelli, “The Effects of Visual Parabolic Motion on the Subjective Vertical and on Interception,” Neuroscience, vol. 453, pp. 124–137, Jan. 2021, doi: 10.1016/j.neuroscience.2020.09.052.
    3. H. Ben Lahmar and M. Herschel, “Collaborative filtering over evolution provenance data for interactive visual data exploration,” Information Systems, vol. 95, p. 101620, 2021, doi: 10.1016/j.is.2020.101620.
    4. J. Bernard, M. Hutter, M. Sedlmair, M. Zeppelzauer, and T. Munzner, “A Taxonomy of Property Measures to Unify Active Learning and Human-centered Approaches to Data Labeling,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 11, no. 3–4, Art. no. 3–4, 2021, doi: 10.1145/3439333.
    5. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “ProSeCo: Visual analysis of class separation measures and dataset characteristics,” Computers & Graphics, vol. 96, pp. 48–60, 2021, doi: 10.1016/j.cag.2021.03.004.
    6. D. Bethge et al., “VEmotion: Using Driving Context for Indirect Emotion Prediction in Real-Time,” in The 34th Annual ACM Symposium on User Interface Software and Technology. New York, NY, USA: Association for Computing Machinery, 2021, pp. 638–651. doi: 10.1145/3472749.3474775.
    7. R. Bian et al., “Implicit Multidimensional Projection of Local Subspaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030368.
    8. H. Booth and C. Beck, “Verb-second and Verb-first in the History of Icelandic,” Journal of Historical Syntax, vol. 5, no. 27, Art. no. 27, 2021, doi: 10.18148/hs/2021.v5i28.112.
    9. C. Bu et al., “SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030404.
    10. M. Burch, W. Huang, M. Wakefield, H. C. Purchase, D. Weiskopf, and J. Hua, “The State of the Art in Empirical User Evaluation of Graph Visualizations,” IEEE Access, vol. 9, pp. 4173–4198, 2021, doi: 10.1109/ACCESS.2020.3047616.
    11. Y. Chen, K. C. Kwan, L.-Y. Wei, and H. Fu, “Autocomplete Repetitive Stroking with Image Guidance,” in SIGGRAPH Asia 2021 Technical Communications. Tokyo, Japan: Association for Computing Machinery, 2021. doi: 10.1145/3478512.3488595.
    12. R. Cutura, K. Angerbauer, F. Heyen, N. Hube, and M. Sedlmair, “DaRt: Generative Art using Dimensionality Reduction Algorithms,” in 2021 IEEE VIS Arts Program (VISAP). IEEE, 2021, pp. 59–72. doi: 10.1109/VISAP52981.2021.00013.
    13. R. Cutura, C. Morariu, Z. Cheng, Y. Wang, D. Weiskopf, and M. Sedlmair, “Hagrid — Gridify Scatterplots with Hilbert and Gosper Curves,” in The 14th International Symposium on Visual Information Communication and Interaction. Potsdam, Germany: Association for Computing Machinery, 2021, pp. 1:1–1:8. doi: 10.1145/3481549.3481569.
    14. F. L. Dennig, M. T. Fischer, M. Blumenschein, J. Fuchs, D. A. Keim, and E. Dimara, “ParSetgnostics: Quality Metrics for Parallel Sets,” Computer Graphics Forum, vol. 40, no. 3, Art. no. 3, 2021, doi: 10.1111/cgf.14314.
    15. F. Draxler, C. Schneegass, J. Safranek, and H. Hussmann, “Why Did You Stop? - Investigating Origins and Effects of Interruptions during Mobile Language Learning,” in Mensch und Computer 2021. Ingolstadt, Germany: Association for Computing Machinery, 2021, pp. 21–33. doi: 10.1145/3473856.3473881.
    16. F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030445.
    17. F. Frieß, M. Becher, G. Reina, and T. Ertl, “Amortised Encoding for Large High-Resolution Displays,” in 2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV), 2021, pp. 53–62. doi: 10.1109/LDAV53230.2021.00013.
    18. K. Gadhave et al., “Predicting intent behind selections in scatterplot visualizations,” Information Visualization, vol. 20, no. 4, Art. no. 4, 2021, doi: 10.1177/14738716211038604.
    19. S. Giebenhain and B. Goldlücke, “AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations,” in 2021 International Conference on 3D Vision (3DV), 2021, pp. 1054–1064. doi: 10.1109/3DV53792.2021.00113.
    20. N. Grossmann, J. Bernard, M. Sedlmair, and M. Waldner, “Does the Layout Really Matter? A Study on Visual Model Accuracy Estimation,” in IEEE Visualization Conference (VIS, Short Paper), 2021, pp. 61–65. doi: 10.1109/VIS49827.2021.9623326.
    21. F. Götz-Hahn, V. Hosu, H. Lin, and D. Saupe, “KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild,” IEEE Access, vol. 9, pp. 72139–72160, 2021, doi: 10.1109/ACCESS.2021.3077642.
    22. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery, 2021. doi: 10.1145/3411764.3445298.
    23. S. Hubenschmid, J. Zagermann, D. Fink, J. Wieland, T. Feuchtner, and H. Reiterer, “Towards Asynchronous Hybrid User Interfaces for Cross-Reality Interaction,” in ISS’21 Workshop Proceedings: “Transitional Interfaces in Mixed and Cross-Reality: A new frontier?,” H.-C. Jetter, J.-H. Schröder, J. Gugenheimer, M. Billinghurst, C. Anthes, M. Khamis, and T. Feuchtner, Eds., 2021. doi: 10.18148/kops/352-2-84mm0sggczq02.
    24. K. Klein, M. Aichem, Y. Zhang, S. Erk, B. Sommer, and F. Schreiber, “TEAMwISE: synchronised immersive environments for exploration and analysis of animal behaviour,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00746-2.
    25. K. Klein, D. Garkov, S. Rütschlin, T. Böttcher, and F. Schreiber, “QSDB—a graphical Quorum Sensing Database,” Database, vol. 2021, no. 2021, Art. no. 2021, Nov. 2021, doi: 10.1093/database/baab058.
    26. K. Klein et al., “Visual analytics of sensor movement data for cheetah behaviour analysis,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00742-6.
    27. T. Krake, S. Reinhardt, M. Hlawatsch, B. Eberhardt, and D. Weiskopf, “Visualization and Selection of Dynamic Mode Decomposition Components for Unsteady Flow,” Visual Informatics, vol. 5, no. 3, Art. no. 3, 2021, doi: 10.1016/j.visinf.2021.06.003.
    28. M. Kraus et al., “Immersive Analytics with Abstract 3D Visualizations: A Survey,” Computer Graphics Forum, 2021, doi: 10.1111/cgf.14430.
    29. M. Kraus, K. Klein, J. Fuchs, D. A. Keim, F. Schreiber, and M. Sedlmair, “The Value of Immersive Visualization,” IEEE Computer Graphics and Applications (CG&A), vol. 41, no. 4, Art. no. 4, 2021, doi: 10.1109/MCG.2021.3075258.
    30. C. Krauter, J. Vogelsang, A. S. Calepso, K. Angerbauer, and M. Sedlmair, “Don’t Catch It: An Interactive Virtual-Reality Environment to Learn About COVID-19 Measures Using Gamification Elements,” in Mensch und Computer. ACM, 2021, pp. 593–596. doi: 10.1145/3473856.3474031.
    31. K. C. Kwan and H. Fu, “Automatic Image Checkpoint Selection for Guider-Follower Pedestrian Navigation,” Computer Graphics Forum, vol. 40, no. 1, Art. no. 1, 2021, doi: 10.1111/cgf.14192.
    32. H. Lin, G. Chen, and F. W. Siebert, “Positional Encoding: Improving Class-Imbalanced Motorcycle Helmet use Classification,” in 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 1194–1198. doi: 10.1109/ICIP42928.2021.9506178.
    33. K. Lu et al., “Palettailor: Discriminable Colorization for Categorical Data,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030406.
    34. L. Mehl, C. Beschle, A. Barth, and A. Bruhn, “An Anisotropic Selection Scheme for Variational Optical Flow Methods with Order-Adaptive Regularisation,” in Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision (SSVM). Springer, 2021, pp. 140–152. doi: 10.1007/978-3-030-75549-2_12.
    35. H. Men, H. Lin, M. Jenadeleh, and D. Saupe, “Subjective Image Quality Assessment with Boosted Triplet Comparisons,” IEEE Access, vol. 9, pp. 138939–138975, 2021, doi: 10.1109/ACCESS.2021.3118295.
    36. C. Morariu, A. Bibal, R. Cutura, B. Frenay, and M. Sedlmair, “DumbleDR: Predicting User Preferences of Dimensionality Reduction Projection Quality,” arXiv preprint, Technical Report arXiv:2105.09275, 2021. [Online]. Available: https://arxiv.org/abs/2105.09275
    37. T. Müller, C. Schulz, and D. Weiskopf, “Adaptive Polygon Rendering for Interactive Visualization in the Schwarzschild Spacetime,” European Journal of Physics, vol. 43, no. 1, Art. no. 1, 2021, doi: 10.1088/1361-6404/ac2b36.
    38. G. J. Rijken et al., “Illegible Semantics: Exploring the Design Space of Metal Logos,” in IEEE VIS alt.VIS Workshop, 2021. [Online]. Available: https://arxiv.org/abs/2109.01688
    39. B. Roziere et al., “Tarsier: Evolving Noise Injection in Super-Resolution GANs,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 7028–7035. doi: 10.1109/ICPR48806.2021.9413318.
    40. B. Roziere et al., “EvolGAN: Evolutionary Generative Adversarial Networks,” in Computer Vision – ACCV 2020. Cham: Springer International Publishing, Nov. 2021, pp. 679–694. doi: 10.1007/978-3-030-69538-5_41.
    41. K. Schatz et al., “2019 IEEE Scientific Visualization Contest Winner: Visual Analysis of Structure Formation in Cosmic Evolution,” IEEE Computer Graphics and Applications, vol. 41, no. 6, Art. no. 6, 2021, doi: 10.1109/MCG.2020.3004613.
    42. C. Schulz et al., “Multi-Class Inverted Stippling,” ACM Trans. Graph., vol. 40, no. 6, Art. no. 6, Dec. 2021, doi: 10.1145/3478513.3480534.
    43. R. Sevastjanova, A.-L. Kalouli, C. Beck, H. Schäfer, and M. El-Assady, “Explaining Contextualization in Language Models using Visual Analytics,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Online: Association for Computational Linguistics, Aug. 2021, pp. 464–476. doi: 10.18653/v1/2021.acl-long.39.
    44. S. Su, V. Hosu, H. Lin, Y. Zhang, and D. Saupe, “KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects,” in 32nd British Machine Vision Conference, 2021, pp. 1–12. [Online]. Available: https://www.bmvc2021-virtualconference.com/assets/papers/0868.pdf
    45. K. Vock, S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “IDIAR: Augmented Reality Dashboards to Supervise Mobile Intervention Studies,” in Mensch und Computer 2021 (MuC ’21). New York, NY: ACM, 2021. doi: 10.1145/3473856.3473876.
    46. J. Wieland, J. Zagermann, J. Müller, and H. Reiterer, “Separation, Composition, or Hybrid?: Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality,” in 2021 IEEE International Symposium on Mixed and Augmented Reality. Piscataway, NJ: IEEE, 2021, pp. 403–412. doi: 10.1109/ISMAR52148.2021.00057.
    47. L. Zhou, C. R. Johnson, and D. Weiskopf, “Data-Driven Space-Filling Curves,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030473.
  4. 2020

    1. P. Angelini, S. Chaplick, S. Cornelsen, and G. Da Lozzo, “Planar L-Drawings of Bimodal Graphs,” in Graph Drawing and Network Visualization, D. Auber and P. Valtr, Eds., Cham: Springer International Publishing, 2020, pp. 205–219. doi: 10.1007/978-3-030-68766-3_17.
    2. P. Balestrucci et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in 2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV). IEEE, 2020, pp. 11–18. doi: 10.1109/BELIV51497.2020.00009.
    3. H. Bast, P. Brosi, and S. Storandt, “Metro Maps on Octilinear Grid Graphs,” in Computer Graphics Forum. Hoboken, New Jersey: Wiley-Blackwell - STM, 2020, pp. 357–367. doi: 10.1111/cgf.13986.
    4. C. Beck, “DiaSense at SemEval-2020 Task 1: Modeling Sense Change via Pre-trained BERT Embeddings,” in Proceedings of the Fourteenth Workshop on Semantic Evaluation. Barcelona (online): International Committee for Computational Linguistics, Dec. 2020, pp. 50–58. [Online]. Available: https://www.aclweb.org/anthology/2020.semeval-1.4
    5. C. Beck, H. Booth, M. El-Assady, and M. Butt, “Representation Problems in Linguistic Annotations: Ambiguity, Variation, Uncertainty, Error and Bias,” in Proceedings of the 14th Linguistic Annotation Workshop. Barcelona, Spain: Association for Computational Linguistics, Dec. 2020, pp. 60–73. [Online]. Available: https://www.aclweb.org/anthology/2020.law-1.6
    6. M. Beck and S. Storandt, “Puzzling Grid Embeddings,” in Proceedings of the Symposium on Algorithm Engineering and Experiments, ALENEX 2020, Salt Lake City, UT, USA, January 5-6, 2020, pp. 94–105. doi: 10.1137/1.9781611976007.8.
    7. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “SepEx: Visual Analysis of Class Separation Measures,” in Proceedings of the International Workshop on Visual Analytics (EuroVA), C. Turkay and K. Vrotsou, Eds., The Eurographics Association, 2020, pp. 1–5. doi: 10.2312/eurova.20201079.
    8. F. Bishop, J. Zagermann, U. Pfeil, G. Sanderson, H. Reiterer, and U. Hinrichs, “Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2020, doi: 10.1109/TVCG.2019.2934804.
    9. M. Blumenschein, “Pattern-Driven Design of Visualizations for High-Dimensional Data,” Universität Konstanz, Konstanz, 2020. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-18wp9dhmhapww8
    10. M. Blumenschein, L. J. Debbeler, N. C. Lages, B. Renner, D. A. Keim, and M. El-Assady, “v-plots: Designing Hybrid Charts for the Comparative Analysis of Data Distributions,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14002.
    11. M. Blumenschein, X. Zhang, D. Pomerenke, D. A. Keim, and J. Fuchs, “Evaluating Reordering Strategies for Cluster Identification in Parallel Coordinates,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14000.
    12. M. Borowski, J. Zagermann, C. N. Klokmose, H. Reiterer, and R. Rädle, “Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments,” in Proceedings of the ACM Technical Symposium on Computer Science Education (SIGCSE), 2020, pp. 468–474. doi: 10.1145/3328778.3366887.
    13. N. Brich et al., “Visual Analysis of Multivariate Intensive Care Surveillance Data,” in Eurographics Workshop on Visual Computing for Biology and Medicine, B. Kozlíková, M. Krone, N. Smit, K. Nieselt, and R. G. Raidou, Eds., The Eurographics Association, 2020. doi: 10.2312/vcbm.20201174.
    14. V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, pp. 2848–2862, Sep. 2020, doi: 10.1109/TVCG.2019.2898435.
    15. N. Chotisarn et al., “A Systematic Literature Review of Modern Software Visualization,” Journal of Visualization, vol. 23, no. 4, Art. no. 4, 2020, doi: 10.1007/s12650-020-00647-w.
    16. S. Cornelsen et al., “Drawing Shortest Paths in Geodetic Graphs,” in Graph Drawing and Network Visualization, D. Auber and P. Valtr, Eds., Cham: Springer International Publishing, 2020, pp. 333–340. doi: 10.1007/978-3-030-68766-3_26.
    17. M. Dias, D. Orellana, S. Vidal, L. Merino, and A. Bergel, “Evaluating a Visual Approach for Understanding JavaScript Source Code,” in Proceedings of the 28th International Conference on Program Comprehension. ACM, Jul. 2020, pp. 128–138. doi: 10.1145/3387904.3389275.
    18. F. Draxler, A. Labrie, A. Schmidt, and L. L. Chuang, “Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions,” in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2020, pp. 410:1-410:12. doi: 10.1145/3313831.3376537.
    19. F. Frieß, C. Müller, and T. Ertl, “Real-Time High-Resolution Visualisation,” in Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV), J. Krüger, M. Niessner, and J. Stückler, Eds., The Eurographics Association, 2020, pp. 127–135. doi: 10.2312/vmv.20201195.
    20. R. Garcia and D. Weiskopf, “Inner-Process Visualization of Hidden States in Recurrent Neural Networks,” in Proceedings of the 13th International Symposium on Visual Information Communication and Interaction. Eindhoven, Netherlands: ACM, 2020, pp. 20:1-20:5. doi: 10.1145/3430036.3430047.
    21. T. Guha et al., “ATQAM/MAST’20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends,” in Proceedings of the 28th ACM International Conference on Multimedia. Seattle, WA, USA: Association for Computing Machinery, 2020, pp. 4758–4760. doi: 10.1145/3394171.3421895.
    22. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces. ACM, 2020, pp. 9:1-9:9. doi: 10.1145/3399715.3399814.
    23. V. Hosu, H. Lin, T. Sziranyi, and D. Saupe, “KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment,” IEEE Transactions on Image Processing, vol. 29, pp. 4041–4056, 2020, doi: 10.1109/TIP.2020.2967829.
    24. V. Hosu et al., “From Technical to Aesthetics Quality Assessment and Beyond: Challenges and Potential,” in Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends. Seattle, WA, USA: Association for Computing Machinery, 2020, pp. 19–20. doi: 10.1145/3423268.3423589.
    25. M. Jenadeleh, M. Pedersen, and D. Saupe, “Blind Quality Assessment of Iris Images Acquired in Visible Light for Biometric Recognition,” Sensors, vol. 20, no. 5, Art. no. 5, 2020, doi: 10.3390/s20051308.
    26. U. Ju, L. L. Chuang, and C. Wallraven, “Acoustic Cues Increase Situational Awareness in Accident Situations: A VR Car-Driving Study,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–11, 2020, doi: 10.1109/TITS.2020.3035374.
    27. T. Kosch, A. Schmidt, S. Thanheiser, and L. L. Chuang, “One Does Not Simply RSVP: Mental Workload to Select Speed Reading Parameters Using Electroencephalography,” in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2020, pp. 637:1-637:13. doi: 10.1145/3313831.3376766.
    28. M. Kraus et al., “A Comparative Study of Orientation Support Tools in Virtual Reality Environments with Virtual Teleportation,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020, pp. 227–238. doi: 10.1109/ISMAR50242.2020.00046.
    29. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 546:1–546:14. doi: 10.1145/3313831.3376675.
    30. A. Kumar, P. Howlader, R. Garcia, D. Weiskopf, and K. Mueller, “Challenges in Interpretability of Neural Networks for Eye Movement Data,” in ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany: Association for Computing Machinery, 2020. doi: 10.1145/3379156.3391361.
    31. A. Kumar, D. Mohanty, K. Kurzhals, F. Beck, D. Weiskopf, and K. Mueller, “Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data,” in ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany: Association for Computing Machinery, 2020. doi: 10.1145/3379157.3391988.
    32. K. Kurzhals, M. Burch, and D. Weiskopf, “What We See and What We Get from Visualization: Eye Tracking Beyond Gaze Distributions and Scanpaths,” CoRR, vol. abs/2009.14515, 2020, [Online]. Available: https://arxiv.org/abs/2009.14515
    33. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12. doi: 10.1145/3313831.3376266.
    34. K. Kurzhals et al., “Visual Analytics and Annotation of Pervasive Eye Tracking Video,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). Stuttgart, Germany: ACM, 2020, pp. 16:1-16:9. doi: 10.1145/3379155.3391326.
    35. M. Lan Ha, V. Hosu, and V. Blanz, “Color Composition Similarity and Its Application in Fine-grained Similarity,” in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV). Piscataway, NJ: IEEE, 2020, pp. 2548–2557. doi: 10.1109/WACV45572.2020.9093522.
    36. H. Lin, M. Jenadeleh, G. Chen, U. Reips, R. Hamzaoui, and D. Saupe, “Subjective Assessment of Global Picture-Wise Just Noticeable Difference,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2020, pp. 1–6. doi: 10.1109/ICMEW46912.2020.9106058.
    37. H. Lin, J. D. Deng, D. Albers, and F. W. Siebert, “Helmet Use Detection of Tracked Motorcycles Using CNN-Based Multi-Task Learning,” IEEE Access, vol. 8, pp. 162073–162084, 2020, doi: 10.1109/ACCESS.2020.3021357.
    38. H. Lin et al., “SUR-FeatNet: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Feature Learning,” Quality and User Experience, vol. 5, no. 1, Art. no. 1, 2020, doi: 10.1007/s41233-020-00034-1.
    39. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123096.
    40. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Subjective annotation for a frame interpolation benchmark using artefact amplification,” Quality and User Experience, vol. 5, no. 1, Art. no. 1, 2020, doi: 10.1007/s41233-020-00037-y.
    41. L. Merino, M. Lungu, and C. Seidl, “Unleashing the Potentials of Immersive Augmented Reality for Software Engineering,” in 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), 2020, pp. 517–521. doi: 10.1109/SANER48275.2020.9054812.
    42. L. Merino, M. Schwarzl, M. Kraus, M. Sedlmair, D. Schmalstieg, and D. Weiskopf, “Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019),” in IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020. doi: 10.1109/ISMAR50242.2020.00069.
    43. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), 2020, pp. LBW087:1–LBW087:7. doi: 10.1145/3334480.3383017.
    44. D. Okanovic et al., “Can a Chatbot Support Software Engineers with Load Testing? Approach and Experiences,” in Proceedings of the ACM/SPEC International Conference on Performance Engineering (ICPE), 2020, pp. 120–129. doi: 10.1145/3358960.3375792.
    45. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). Stuttgart, Germany: ACM, 2020, pp. 50:1-50:5. doi: 10.1145/3379156.3391829.
    46. N. Patkar, L. Merino, and O. Nierstrasz, “Towards Requirements Engineering with Immersive Augmented Reality,” in Conference Companion of the 4th International Conference on Art, Science, and Engineering of Programming. Porto, Portugal: ACM, 2020, pp. 55–60. doi: 10.1145/3397537.3398472.
    47. N. Rodrigues, C. Schulz, A. Lhuillier, and D. Weiskopf, “Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces,” in Proceedings of Graphics Interface 2020. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2020, pp. 382–392. doi: 10.20380/GI2020.38.
    48. B. Roziere et al., “Evolutionary Super-Resolution,” in Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion. Cancún, Mexico: Association for Computing Machinery, 2020, pp. 151–152. doi: 10.1145/3377929.3389959.
    49. D. Schubring, M. Kraus, C. Stolz, N. Weiler, D. A. Keim, and H. Schupp, “Virtual Reality Potentiates Emotion and Task Effects of Alpha/Beta Brain Oscillations,” Brain Sciences, vol. 10, no. 8, Art. no. 8, 2020, doi: 10.3390/brainsci10080537.
    50. C. Schätzle and M. Butt, “Visual Analytics for Historical Linguistics: Opportunities and Challenges,” Journal of Data Mining and Digital Humanities, 2020, doi: 10.46298/jdmdh.6707.
    51. M. Sondag, W. Meulemans, C. Schulz, K. Verbeek, D. Weiskopf, and B. Speckmann, “Uncertainty Treemaps,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2020, pp. 111–120. doi: 10.1109/PacificVis48177.2020.7614.
    52. J. Spoerhase, S. Storandt, and J. Zink, “Simplification of Polyline Bundles,” in 17th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2020, June 22-24, 2020, Tórshavn, Faroe Islands, 2020, pp. 35:1–35:20. doi: 10.4230/LIPIcs.SWAT.2020.35.
    53. T. Stankov and S. Storandt, “Maximum Gap Minimization in Polylines,” in Web and Wireless Geographical Information Systems - 18th International Symposium, W2GIS 2020, Wuhan, China, November 13-14, 2020, Proceedings, 2020, pp. 181–196. doi: 10.1007/978-3-030-60952-8_19.
    54. A. Streichert, K. Angerbauer, M. Schwarzl, and M. Sedlmair, “Comparing Input Modalities for Shape Drawing Tasks,” in Proceedings of the Symposium on Eye Tracking Research & Applications - Short Papers (ETRA-SP). ACM, 2020, pp. 1–5. doi: 10.1145/3379156.3391830.
    55. D. R. Wahl et al., “Why We Eat What We Eat: Assessing Dispositional and In-the-Moment Eating Motives by Using Ecological Momentary Assessment,” JMIR mHealth and uHealth, vol. 8, no. 1, Art. no. 1, 2020, doi: 10.2196/13191.
    56. D. Weiskopf, “Vis4Vis: Visualization for (Empirical) Visualization Research,” in Foundations of Data Visualization, M. Chen, H. Hauser, P. Rheingans, and G. Scheuermann, Eds., Springer International Publishing, 2020, pp. 209–224. doi: 10.1007/978-3-030-34444-3_10.
    57. O. Wiedemann, V. Hosu, H. Lin, and D. Saupe, “Foveated Video Coding for Real-Time Streaming Applications,” in 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123080.
    58. O. Wiedemann and D. Saupe, “Gaze Data for Quality Assessment of Foveated Video,” in ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany: Association for Computing Machinery, 2020. doi: 10.1145/3379157.3391656.
    59. J. Zagermann, U. Pfeil, P. von Bauer, D. Fink, and H. Reiterer, “‘It’s in my other hand!’: Studying the Interplay of Interaction Techniques and Multi-Tablet Activities,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 1–13. [Online]. Available: https://kops.uni-konstanz.de/bitstream/handle/123456789/48393/CHI_2020_Camera_Ready%20%281%29.pdf?sequence=1&isAllowed=y
    60. X. Zhao, H. Lin, P. Guo, D. Saupe, and H. Liu, “Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images,” in 2020 IEEE International Conference on Image Processing (ICIP), 2020, pp. 156–160. doi: 10.1109/ICIP40778.2020.9191203.
    61. L. Zhou, M. Rivinius, C. R. Johnson, and D. Weiskopf, “Photographic High-Dynamic-Range Scalar Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 6, Art. no. 6, 2020, doi: 10.1109/TVCG.2020.2970522.
    62. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications - Short Paper (ETRA-SP). ACM, 2020, pp. 49:1-49:5. doi: 10.1145/3379156.3391835.
  5. 2019

    1. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS). IEEE, 2019, pp. 141–145. doi: 10.1109/VISUAL.2019.8933620.
    2. P. Balestrucci and M. Ernst, “Visuo-motor adaptation during interaction with a user-adaptive system,” Journal of Vision, vol. 19, p. 187a, Sep. 2019, doi: 10.1167/19.10.187a.
    3. T. M. Benz, B. Riedl, and L. L. Chuang, “Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), C. P. Janssen, S. F. Donker, L. L. Chuang, and W. Ju, Eds., ACM, 2019, pp. 379–387. doi: 10.1145/3342197.3344515.
    4. H. Booth and C. Schätzle, “The Syntactic Encoding of Information Structure in the History of Icelandic,” in Proceedings of the LFG’19 Conference, M. Butt, T. H. King, and I. Toivonen, Eds., CSLI Publications, 2019, pp. 69–89. [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2019/lfg2019-booth-schaetzle.pdf
    5. V. Bruder, K. Kurzhals, S. Frey, D. Weiskopf, and T. Ertl, “Space-Time Volume Visualization of Gaze and Stimulus,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), K. Krejtz and B. Sharif, Eds., ACM, 2019, pp. 12:1-12:9. doi: 10.1145/3314111.3319812.
    6. V. Bruder et al., “Volume-Based Large Dynamic Graph Analysis Supported by Evolution Provenance,” Multimedia Tools and Applications, vol. 78, no. 23, Art. no. 23, 2019, doi: 10.1007/s11042-019-07878-6.
    7. V. Bruder, C. Schulz, R. Bauer, S. Frey, D. Weiskopf, and T. Ertl, “Voronoi-Based Foveated Volume Rendering,” in Proceedings of the Eurographics Conference on Visualization - Short Papers (EuroVis), J. Johansson, F. Sadlo, and G. E. Marai, Eds., Eurographics Association, 2019, pp. 67–71. doi: 10.2312/evs.20191172.
    8. T. Castermans, M. van Garderen, W. Meulemans, M. Nöllenburg, and X. Yuan, “Short Plane Supports for Spatial Hypergraphs,” in Graph Drawing and Network Visualization. GD 2018. Lecture Notes in Computer Science, T. Biedl and A. Kerren, Eds., vol. 11282. Springer International Publishing, 2019, pp. 53–66. doi: 10.1007/978-3-030-04414-5_4.
    9. F. L. Dennig, T. Polk, Z. Lin, T. Schreck, H. Pfister, and M. Behrisch, “FDive: Learning Relevance Models using Pattern-based Similarity Measures,” Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2019, doi: 10.1109/VAST47406.2019.8986940.
    10. C. Fan et al., “SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743204.
    11. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D Scalar Fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, Art. no. 6, 2019, doi: 10.1109/TVCG.2019.2903945.
    12. V. Hosu, B. Goldlücke, and D. Saupe, “Effective Aesthetics Prediction with Multi-level Spatially Pooled Features,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9367–9375, 2019, doi: 10.1109/CVPR.2019.00960.
    13. S. Jaeger et al., “Challenges for Brain Data Analysis in VR Environments,” in 2019 IEEE Pacific Visualization Symposium (PacificVis), 2019, pp. 42–46. doi: 10.1109/PacificVis.2019.00013.
    14. K. Klein, M. Aichem, B. Sommer, S. Erk, Y. Zhang, and F. Schreiber, “TEAMwISE: Synchronised Immersive Environments for Exploration and Analysis of Movement Data,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI). ACM, 2019, pp. 9:1-9:5. doi: 10.1145/3356422.3356450.
    15. K. Klein et al., “Visual Analytics for Cheetah Behaviour Analysis,” in VINCI, in VINCI. ACM, 2019, pp. 16:1-16:8. [Online]. Available: http://dblp.uni-trier.de/db/conf/vinci/vinci2019.html#0001JMWHBS19
    16. K. Klein et al., “Fly with the flock: immersive solutions for animal movement visualization and analytics,” Journal of the Royal Society Interface, vol. 16, no. 153, Art. no. 153, 2019, doi: 10.1098/rsif.2018.0794.
    17. H. Lin, V. Hosu, and D. Saupe, “KADID-10k: A Large-scale Artificially Distorted IQA Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019, pp. 1–3. doi: 10.1109/QoMEX.2019.8743252.
    18. H. Men, H. Lin, V. Hosu, D. Maurer, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Motion Compensated Frame Interpolation,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743221.
    19. M. Miller, X. Zhang, J. Fuchs, and M. Blumenschein, “Evaluating Ordering Strategies of Star Glyph Axes,” in Proceedings of the IEEE Visualization Conference (VIS), in Proceedings of the IEEE Visualization Conference (VIS). IEEE, 2019, pp. 91–95. doi: 10.1109/VISUAL.2019.8933656.
    20. T. Munz, L. L. Chuang, S. Pannasch, and D. Weiskopf, “VisME: Visual microsaccades explorer,” Journal of Eye Movement Research, vol. 12, no. 6, Art. no. 6, Dec. 2019, doi: 10.16910/jemr.12.6.5.
    21. C. Müller, M. Braun, and T. Ertl, “Optimised Molecular Graphics on the HoloLens,” in IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019, Osaka, Japan, March 23-27, 2019, in IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019, Osaka, Japan, March 23-27, 2019. IEEE, 2019, pp. 97–102. doi: 10.1109/VR.2019.8798111.
    22. J. Müller, J. Zagermann, J. Wieland, U. Pfeil, and H. Reiterer, “A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices,” in Mensch und Computer 2019 – Tagungsband (MuC), F. Alt, A. Bulling, and T. Döring, Eds., in Mensch und Computer 2019 – Tagungsband (MuC). GI, ACM, 2019, pp. 399–410. doi: 10.1145/3340764.3340773.
    23. R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of Simultaneous Orientation Contrast in Superimposed Textures,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), A. Kerren, C. Hurter, and J. Braz, Eds., in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), vol. 3: IVAPP. SciTePress, 2019, pp. 48–57. doi: 10.5220/0007356800480057.
    24. D. Pomerenke, F. L. Dennig, D. A. Keim, J. Fuchs, and M. Blumenschein, “Slope-Dependent Rendering of Parallel Coordinates to Reduce Density Distortion and Ghost Clusters,” in Proceedings of the IEEE Visualization Conference (VIS), in Proceedings of the IEEE Visualization Conference (VIS). IEEE, 2019, pp. 86–90. doi: 10.1109/VISUAL.2019.8933706.
    25. K. Schatz et al., “Visual Analysis of Structure Formation in Cosmic Evolution,” in Proceedings of the IEEE Scientific Visualization Conference (SciVis), in Proceedings of the IEEE Scientific Visualization Conference (SciVis). 2019, pp. 33–41. doi: 10.1109/scivis47405.2019.8968855.
    26. C. Schätzle and H. Booth, “DiaHClust: an Iterative Hierarchical Clustering Approach for Identifying Stages in Language Change,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, in Proceedings of the International Workshop on Computational Approaches to Historical Language Change. Association for Computational Linguistics, 2019, pp. 126–135. doi: 10.18653/v1/W19-4716.
    27. C. Schätzle, F. L. Dennig, M. Blumenschein, D. A. Keim, and M. Butt, “Visualizing Linguistic Change as Dimension Interactions,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, in Proceedings of the International Workshop on Computational Approaches to Historical Language Change. 2019, pp. 272–278. doi: 10.18653/v1/W19-4734.
    28. N. Silva et al., “Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), K. Krejtz and B. Sharif, Eds., in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). ACM, 2019, pp. 11:1-11:9. doi: 10.1145/3314111.3319919.
    29. B. Sommer et al., “Tiled Stereoscopic 3D Display Wall - Concept, Applications and Evaluation,” Electronic Imaging, vol. 2019, no. 3, Art. no. 3, 2019, doi: 10.2352/ISSN.2470-1173.2019.3.SDA-641.
    30. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-Based Aspect Ratio Selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2865266.
    31. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2019.2934796.
    32. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2864506.
    33. L. Zhou, R. Netzel, D. Weiskopf, and C. R. Johnson, “Spectral Visualization Sharpening,” in Proceedings of the ACM Symposium on Applied Perception (SAP), S. Neyret, E. Kokkinara, M. González-Franco, L. Hoyet, D. W. Cunningham, and J. Swidrak, Eds., in Proceedings of the ACM Symposium on Applied Perception (SAP). ACM, 2019, pp. 18:1-18:9. doi: 10.1145/3343036.3343133.
  6. 2018

    1. H. Bast, P. Brosi, and S. Storandt, “Efficient Generation of Geographically Accurate Transit Maps,” in Proceedings of the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL), F. B. Kashani, E. G. Hoel, R. H. Güting, R. Tamassia, and L. Xiong, Eds., in Proceedings of the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL). ACM, 2018, pp. 13–22. doi: 10.1145/3274895.3274955.
    2. M. Behrisch et al., “Quality Metrics for Information Visualization,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13446.
    3. H. Ben Lahmar, M. Herschel, M. Blumenschein, and D. A. Keim, “Provenance-based Visual Data Exploration with EVLIN,” in Proceedings of the Conference on Extending Database Technology (EDBT), in Proceedings of the Conference on Extending Database Technology (EDBT). 2018, pp. 686–689. doi: 10.5441/002/edbt.2018.85.
    4. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), R. Chang, H. Qu, and T. Schreck, Eds., in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST). IEEE, 2018, pp. 36–47. doi: 10.1109/VAST.2018.8802486.
    5. S. S. Borojeni, S. C. J. Boll, W. Heuten, H. H. Bülthoff, and L. L. Chuang, “Feel the Movement: Real Motion Influences Responses to Take-Over Requests in Highly Automated Vehicles,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 246:1-246:13. doi: 10.1145/3173574.3173820.
    6. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-Based Large Dynamic Graph Analytics,” in Proceedings of the International Conference Information Visualisation (IV), E. Banissi, R. Francese, M. W. McK. Bannatyne, T. G. Wyeld, M. Sarfraz, J. M. Pires, A. Ursyn, F. Bouali, N. Datia, G. Venturini, G. Polese, V. Deufemia, T. D. Mascio, M. Temperini, F. Sciarrone, D. Malandrino, R. Zaccagnino, P. Díaz, F. Papadopoulo, A. F. Anta, A. Cuzzocrea, M. Risi, U. Erra, and V. Rossano, Eds., in Proceedings of the International Conference Information Visualisation (IV). IEEE, 2018, pp. 210–219. doi: 10.1109/iV.2018.00045.
    7. L. L. Chuang and U. Pfeil, “Transparency and Openness Promotion Guidelines for HCI,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA). ACM, 2018, pp. SIG04:1-SIG04:4. doi: 10.1145/3170427.3185377.
    8. M. de Ridder, K. Klein, and J. Kim, “A Review and Outlook on Visual Analytics for Uncertainties in Functional Magnetic Resonance Imaging,” Brain Informatics, vol. 5, no. 2, Art. no. 2, 2018, doi: 10.1186/s40708-018-0083-0.
    9. L. J. Debbeler, M. Gamp, M. Blumenschein, D. A. Keim, and B. Renner, “Polarized But Illusory Beliefs About Tap and Bottled Water: A Product- and Consumer-Oriented Survey and Blind Tasting Experiment,” Science of the Total Environment, vol. 643, pp. 1400–1410, 2018, doi: 10.1016/j.scitotenv.2018.06.190.
    10. T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 419:1–419:12. doi: 10.1145/3173574.3173993.
    11. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13438.
    12. F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV). IEEE, 2018, pp. 87–91. doi: 10.1109/LDAV.2018.8739215.
    13. M. Ghaffar et al., “3D Modelling and Visualisation of Heterogeneous Cell Membranes in Blender,” in Proceedings of the 11th International Symposium on Visual Information Communication and Interaction, in Proceedings of the 11th International Symposium on Visual Information Communication and Interaction. Växjö, Sweden: Association for Computing Machinery, 2018, pp. 64–71. doi: 10.1145/3231622.3231639.
    14. C. Glatz and L. L. Chuang, “The Time Course of Auditory Looming Cues in Redirecting Visuo-Spatial Attention,” Nature - Scientific Reports, vol. 9, pp. 743:1-743:10, 2018, doi: 10.1038/s41598-018-36033-8.
    15. C. Glatz, S. S. Krupenia, H. H. Bülthoff, and L. L. Chuang, “Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 472:1-472:13. doi: 10.1145/3173574.3174046.
    16. J. Görtler, R. Kehlbeck, and O. Deussen, “A Visual Exploration of Gaussian Processes,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI). 2018. doi: 10.23915/distill.00017.
    17. J. Görtler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble Treemaps for Uncertainty Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2743959.
    18. A. Hautli-Janisz, C. Rohrdantz, C. Schätzle, A. Stoffel, M. Butt, and D. A. Keim, “Visual Analytics in Diachronic Linguistic Investigations,” Linguistic Visualizations, 2018.
    19. V. Hosu, H. Lin, and D. Saupe, “Expertise Screening in Crowdsourcing Image Quality,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2018, pp. 276–281. doi: 10.1109/QoMEX.2018.8463427.
    20. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “Employing Tangible Visualisations in Augmented Reality with Mobile Devices,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI). 2018, pp. 1–4. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1iooenfo4fofm8
    21. K. Hänsel, R. Poguntke, H. Haddadi, A. Alomainy, and A. Schmidt, “What to Put on the User: Sensing Technologies for Studies and Physiology Aware Systems,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 145:1-145:14. doi: 10.1145/3173574.3173719.
    22. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime Quality Assessment of Iris Biometrics Under Visible Light,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPRW), CVPR Workshops, in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPRW), CVPR Workshops. IEEE, 2018, pp. 443–452. doi: 10.1109/CVPRW.2018.00085.
    23. J. Karolus, H. Schuff, T. Kosch, P. W. Wozniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Proceedings of the Designing Interactive Systems Conference (DIS), I. Koskinen, Y.-K. Lim, T. C. Pargman, K. K. N. Chow, and W. Odom, Eds., in Proceedings of the Designing Interactive Systems Conference (DIS). ACM, 2018, pp. 651–655. doi: 10.1145/3196709.3196803.
    24. M. Klapperstueck et al., “Contextuwall: Multi-site Collaboration Using Display Walls,” Journal of Visual Languages & Computing, vol. 46, pp. 35–42, 2018, doi: 10.1016/j.jvlc.2017.10.002.
    25. P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds., in Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 2018, pp. 345:1–345:9. doi: 10.1145/3173574.3173919.
    26. T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly,” Proceedings of the ACM on Human-Computer Interaction (ACMHCI), vol. 2, pp. 11:1-11:20, 2018, doi: 10.1145/3229093.
    27. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural Networks for the Classification of Building Use from Street-view Imagery,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV–2, pp. 177–184, 2018, doi: 10.5194/isprs-annals-IV-2-177-2018.
    28. N. Marniok and B. Goldluecke, “Real-time Variational Range Image Fusion and Visualization for Large-scale Scenes using GPU Hash Tables,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). 2018, pp. 912–920. doi: 10.1109/WACV.2018.00105.
    29. K. Marriott et al., Immersive Analytics, vol. 11190. in Lecture Notes in Computer Science (LNCS), vol. 11190. Springer International Publishing, 2018. doi: 10.1007/978-3-030-01388-2.
    30. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2018, pp. 86:1-86:13. doi: 10.48550/arXiv.1806.00800.
    31. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision, vol. 126, no. 12, Art. no. 12, 2018, doi: 10.1007/s11263-018-1079-1.
    32. D. Maurer, N. Marniok, B. Goldluecke, and A. Bruhn, “Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimation,” in Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds., in Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11212. Springer International Publishing, 2018, pp. 575–592. doi: 10.1007/978-3-030-01237-3_35.
    33. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2018, pp. 106:1-106:13. [Online]. Available: http://bmvc2018.org/contents/papers/0377.pdf
    34. H. Men, H. Lin, and D. Saupe, “Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2018, pp. 1–3. doi: 10.1109/QoMEX.2018.8463426.
    35. A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of Vection Latencies in the Full-Body Illusion,” PLoS ONE, vol. 13, no. 12, Art. no. 12, 2018, doi: 10.1371/journal.pone.0209189.
    36. S. Oppold and M. Herschel, “Provenance for Entity Resolution,” in Provenance and Annotation of Data and Processes. IPAW 2018. Lecture Notes in Computer Science, K. Belhajjame, A. Gehani, and P. Alper, Eds., in Provenance and Annotation of Data and Processes. IPAW 2018. Lecture Notes in Computer Science, vol. 11017. Springer International Publishing, 2018, pp. 226–230. doi: 10.1007/978-3-319-98379-0_25.
    37. N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale Scanpath Visualization and Filtering,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), L. L. Chuang, M. Burch, and K. Kurzhals, Eds., in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS). ACM, 2018, pp. 2:1-2:5. doi: 10.1145/3205929.3205931.
    38. N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744018.
    39. D. Sacha et al., “SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744805.
    40. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory Task Irrelevance: A Basis for Inattentional Deafness,” Human Factors, vol. 60, no. 3, Art. no. 3, 2018, doi: 10.1177/0018720818760919.
    41. C. Schulz, K. Schatz, M. Krone, M. Braun, T. Ertl, and D. Weiskopf, “Uncertainty Visualization for Secondary Structures of Proteins,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis). IEEE, 2018, pp. 96–105. doi: 10.1109/PacificVis.2018.00020.
    42. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT). IEEE, 2018, pp. 87–95. doi: 10.1109/VISSOFT.2018.00017.
    43. V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an Uncanny Valley of Virtual Animals? A Quantitative and Qualitative Investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018, doi: 10.1016/j.ijhcs.2017.11.003.
    44. C. Schätzle, “Dative Subjects: Historical Change Visualized,” PhD diss., Universität Konstanz, Konstanz, 2018. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1d917i4avuz1a2
    45. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an Interpretable Latent Space: An Intuitive Comparison of Autoencoders with Variational Autoencoders,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI). IEEE VIS, 2018. [Online]. Available: https://thilospinner.com/towards-an-interpretable-latent-space/
    46. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), J. Johansson, F. Sadlo, and T. Schreck, Eds., in Proceedings of the Eurographics Conference on Visualization (EuroVis). Eurographics Association, 2018, pp. 119–123. doi: 10.2312/eurovisshort.20181089.
    47. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744138.
    48. D. Varga, D. Saupe, and T. Szirányi, “DeepRN: A Content Preserving Deep Architecture for Blind Image Quality Assessment,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2018, pp. 1–6. doi: 10.1109/ICME.2018.8486528.
    49. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, Art. no. 5, 2018, doi: 10.1109/TVCG.2017.2701829.
    50. V. Yoghourdjian, T. Dwyer, K. Klein, K. Marriott, and M. Wybrow, “Graph Thumbnails: Identifying and Comparing Multiple Graphs at a Glance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 12, Art. no. 12, 2018, doi: 10.1109/TVCG.2018.2790961.
    51. J. Zagermann, U. Pfeil, and H. Reiterer, “Studying Eye Movements as a Basis for Measuring Cognitive Load,” Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), pp. LBW095:1-LBW095:6, 2018, doi: 10.1145/3170427.3188628.
    52. Y. Zhu et al., “Genome-scale Metabolic Modeling of Responses to Polymyxins in Pseudomonas Aeruginosa,” GigaScience, vol. 7, no. 4, Art. no. 4, 2018, doi: 10.1093/gigascience/giy021.