Paper Awards & Personal Awards

To view more awards, please browse our news section.

All Publications

  1. 2022

    1. K. Angerbauer et al., “Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures,” New Orleans, LA, USA, 2022. doi: 10.1145/3491102.3502133.
    2. P. Balestrucci, D. Wiebusch, and M. O. Ernst, “ReActLab: A Custom Framework for Sensorimotor Experiments ‘in-the-wild,’” Frontiers in Psychology, vol. 13, Jun. 2022, doi: 10.3389/fpsyg.2022.906643.
    3. D. Bienroth et al., “Spatially resolved transcriptomics in immersive environments,” Visual Computing for Industry, Biomedicine, and Art, vol. 5, no. 1, Art. no. 1, 2022, doi: 10.1186/s42492-021-00098-6.
    4. F. Chiossi et al., “Adapting visualizations and interfaces to the user,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0035.
    5. P. Fleck, A. Sousa Calepso, S. Hubenschmid, M. Sedlmair, and D. Schmalstieg, “RagRug: A Toolkit for Situated Analytics,” IEEE Transactions on Visualization and Computer Graphics, 2022, doi: 10.1109/TVCG.2022.3157058.
    6. S. Hubenschmid et al., “ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies,” in CHI Conference on Human Factors in Computing Systems (CHI ’22), New York, NY, 2022, pp. 1–20. doi: 10.1145/3491102.3517550.
    7. D. Hägele et al., “Uncertainty Visualization: Fundamentals and Recent Developments,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0033.
    8. A. Jahedi, L. Mehl, M. Rivinius, and A. Bruhn, “Multi-Scale RAFT: combining hierarchical concepts for learning-based optical flow estimation,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), Oct. 2022.
    9. R. Kehlbeck, J. Görtler, Y. Wang, and O. Deussen, “SPEULER: Semantics-preserving Euler Diagrams,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 1, Art. no. 1, 2022, doi: 10.1109/TVCG.2021.3114834.
    10. K. Klein, M. Sedlmair, and F. Schreiber, “Immersive Analytics: An Overview,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0037.
    11. M. Koch, D. Weiskopf, and K. Kurzhals, “A Spiral into the Mind: Gaze Spiral Visualization for Mobile Eye Tracking,” Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 5, no. 2, Art. no. 2, May 2022, doi: 10.1145/3530795.
    12. T. Krake, A. Bruhn, B. Eberhardt, and D. Weiskopf, “Efficient and Robust Background Modeling with Dynamic Mode Decomposition,” Journal of Mathematical Imaging and Vision, 2022, doi: 10.1007/s10851-022-01068-0.
    13. Q. Q. Ngo, F. L. Dennig, D. A. Keim, and M. Sedlmair, “Machine Learning Meets Visualization – Experiences and Lessons Learned,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0034.
    14. F. Petersen, B. Goldluecke, C. Borgelt, and O. Deussen, “GenDR: A Generalized Differentiable Renderer,” 2022. doi: 10.48550/ARXIV.2204.13845.
    15. F. Petersen, B. Goldluecke, O. Deussen, and H. Kuehne, “Style Agnostic 3D Reconstruction via Adversarial Style Transfer,” in 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Jan. 2022, pp. 2273–2282. doi: 10.1109/WACV51458.2022.00233.
    16. M. Philipp, N. Bacher, S. Sauer, F. Mathis-Ullrich, and A. Bruhn, “From Chairs To Brains: Customizing Optical Flow For Surgical Activity Localization,” in Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Mar. 2022, pp. 1–5. doi: 10.1109/ISBI52829.2022.9761704.
    17. J. Schmalfuss, P. Scholze, and A. Bruhn, “A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow,” in Proceedings of the European Conference on Computer Vision (ECCV), Oct. 2022.
    18. C. Schneegass, V. Füseschi, V. Konevych, and F. Draxler, “Investigating the Use of Task Resumption Cues to Support Learning in Interruption-Prone Environments,” Multimodal Technologies and Interaction, vol. 6, no. 1, Art. no. 1, 2022, doi: 10.3390/mti6010002.
    19. F. Schreiber and D. Weiskopf, “Quantitative Visual Computing,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0048.
    20. P. Schäfer, N. Rodrigues, D. Weiskopf, and S. Storandt, “Group Diagrams for Simplified Representation of Scanpaths,” Aug. 2022. doi: 10.1145/3554944.3554971.
    21. Y. Wang, M. Koch, M. Bâce, D. Weiskopf, and A. Bulling, “Impact of Gaze Uncertainty on AOIs in Information Visualisations,” in 2022 Symposium on Eye Tracking Research and Applications, Jun. 2022, pp. 1–6. doi: 10.1145/3517031.3531166.
    22. Y. Wang, C. Jiao, M. Bâce, and A. Bulling, “VisRecall: Quantifying Information Visualisation Recallability Via Question Answering,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–12, 2022, doi: 10.1109/TVCG.2022.3198163.
    23. D. Weiskopf, “Uncertainty Visualization: Concepts, Methods, and Applications in Biological Data Visualization,” Frontiers in Bioinformatics, vol. 2, 2022, doi: 10.3389/fbinf.2022.793819.
    24. J. Zagermann et al., “Complementary Interfaces for Visual Computing,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0031.
    25. Y. Zhang, K. Klein, O. Deussen, T. Gutschlag, and S. Storandt, “Robust Visualization of Trajectory Data,” it - Information Technology, vol. 64, no. 4–5, Art. no. 4–5, 2022, doi: 10.1515/itit-2022-0036.
  2. 2021

    1. M. Aichem et al., “Visual exploration of large metabolic models,” Bioinformatics, vol. 37, no. 23, Art. no. 23, May 2021, doi: 10.1093/bioinformatics/btab335.
    2. P. Balestrucci, V. Maffei, F. Lacquaniti, and A. Moscatelli, “The Effects of Visual Parabolic Motion on the Subjective Vertical and on Interception,” Neuroscience, vol. 453, pp. 124–137, Jan. 2021, doi: 10.1016/j.neuroscience.2020.09.052.
    3. H. Ben Lahmar and M. Herschel, “Collaborative filtering over evolution provenance data for interactive visual data exploration,” Information Systems, vol. 95, p. 101620, 2021, doi: 10.1016/j.is.2020.101620.
    4. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “ProSeCo: Visual analysis of class separation measures and dataset characteristics,” Computers & Graphics, vol. 96, pp. 48–60, 2021, doi: 10.1016/j.cag.2021.03.004.
    5. J. Bernard, M. Hutter, M. Sedlmair, M. Zeppelzauer, and T. Munzner, “A Taxonomy of Property Measures to Unify Active Learning and Human-centered Approaches to Data Labeling,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 11, no. 3–4, Art. no. 3–4, 2021, doi: 10.1145/3439333.
    6. D. Bethge et al., “VEmotion: Using Driving Context for Indirect Emotion Prediction in Real-Time,” in The 34th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA: Association for Computing Machinery, 2021, pp. 638–651. doi: 10.1145/3472749.3474775.
    7. R. Bian et al., “Implicit Multidimensional Projection of Local Subspaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030368.
    8. H. Booth and C. Beck, “Verb-second and Verb-first in the History of Icelandic,” Journal of Historical Syntax, vol. 5, no. 27, Art. no. 27, 2021, doi: 10.18148/hs/2021.v5i28.112.
    9. C. Bu et al., “SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030404.
    10. M. Burch, W. Huang, M. Wakefield, H. C. Purchase, D. Weiskopf, and J. Hua, “The State of the Art in Empirical User Evaluation of Graph Visualizations,” IEEE Access, vol. 9, pp. 4173–4198, 2021, doi: 10.1109/ACCESS.2020.3047616.
    11. Y. Chen, K. C. Kwan, L.-Y. Wei, and H. Fu, “Autocomplete Repetitive Stroking with Image Guidance,” Tokyo, Japan, 2021. doi: 10.1145/3478512.3488595.
    12. R. Cutura, K. Angerbauer, F. Heyen, N. Hube, and M. Sedlmair, “DaRt: Generative Art using Dimensionality Reduction Algorithms,” in 2021 IEEE VIS Arts Program (VISAP), 2021, pp. 59–72. doi: 10.1109/VISAP52981.2021.00013.
    13. R. Cutura, C. Morariu, Z. Cheng, Y. Wang, D. Weiskopf, and M. Sedlmair, “Hagrid — Gridify Scatterplots with Hilbert and Gosper Curves,” in The 14th International Symposium on Visual Information Communication and Interaction, Potsdam, Germany, 2021, pp. 1:1–1:8. doi: 10.1145/3481549.3481569.
    14. F. L. Dennig, M. T. Fischer, M. Blumenschein, J. Fuchs, D. A. Keim, and E. Dimara, “ParSetgnostics: Quality Metrics for Parallel Sets,” Computer Graphics Forum, vol. 40, no. 3, Art. no. 3, 2021, doi: 10.1111/cgf.14314.
    15. F. Draxler, C. Schneegass, J. Safranek, and H. Hussmann, “Why Did You Stop? - Investigating Origins and Effects of Interruptions during Mobile Language Learning,” in Mensch Und Computer 2021, Ingolstadt, Germany, 2021, pp. 21–33. doi: 10.1145/3473856.3473881.
    16. F. Frieß, M. Becher, G. Reina, and T. Ertl, “Amortised Encoding for Large High-Resolution Displays,” in 2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV), 2021, pp. 53–62. doi: 10.1109/LDAV53230.2021.00013.
    17. K. Gadhave et al., “Predicting intent behind selections in scatterplot visualizations,” Information Visualization, vol. 20, no. 4, Art. no. 4, 2021, doi: 10.1177/14738716211038604.
    18. S. Giebenhain and B. Goldlücke, “AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations,” in 2021 International Conference on 3D Vision (3DV), 2021, pp. 1054–1064. doi: 10.1109/3DV53792.2021.00113.
    19. N. Grossmann, J. Bernard, M. Sedlmair, and M. Waldner, “Does the Layout Really Matter? A Study on Visual Model Accuracy Estimation,” in IEEE Visualization Conference (VIS, Short Paper), 2021, pp. 61–65. doi: 10.1109/VIS49827.2021.9623326.
    20. F. Götz-Hahn, V. Hosu, H. Lin, and D. Saupe, “KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild,” IEEE Access, vol. 9, pp. 72139–72160, 2021, doi: 10.1109/ACCESS.2021.3077642.
    21. S. Hubenschmid, J. Zagermann, D. Fink, J. Wieland, T. Feuchtner, and H. Reiterer, “Towards Asynchronous Hybrid User Interfaces for Cross-Reality Interaction,” in ISS’21 Workshop Proceedings: “Transitional Interfaces in Mixed and Cross-Reality: A new frontier?,” 2021. doi: 10.18148/kops/352-2-84mm0sggczq02.
    22. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery, 2021. doi: 10.1145/3411764.3445298.
    23. K. Klein, D. Garkov, S. Rütschlin, T. Böttcher, and F. Schreiber, “QSDB—a graphical Quorum Sensing Database,” Database, vol. 2021, no. 2021, Art. no. 2021, Nov. 2021, doi: 10.1093/database/baab058.
    24. K. Klein, M. Aichem, Y. Zhang, S. Erk, B. Sommer, and F. Schreiber, “TEAMwISE: synchronised immersive environments for exploration and analysis of animal behaviour,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00746-2.
    25. K. Klein et al., “Visual analytics of sensor movement data for cheetah behaviour analysis,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00742-6.
    26. T. Krake, S. Reinhardt, M. Hlawatsch, B. Eberhardt, and D. Weiskopf, “Visualization and Selection of Dynamic Mode Decomposition Components for Unsteady Flow,” Visual Informatics, vol. 5, no. 3, Art. no. 3, 2021, doi: 10.1016/j.visinf.2021.06.003.
    27. M. Kraus et al., “Immersive Analytics with Abstract 3D Visualizations: A Survey,” Computer Graphics Forum, 2021, doi: 10.1111/cgf.14430.
    28. M. Kraus, K. Klein, J. Fuchs, D. A. Keim, F. Schreiber, and M. Sedlmair, “The Value of Immersive Visualization,” IEEE Computer Graphics and Applications (CG&A), vol. 41, no. 4, Art. no. 4, 2021, doi: 10.1109/MCG.2021.3075258.
    29. C. Krauter, J. Vogelsang, A. S. Calepso, K. Angerbauer, and M. Sedlmair, “Don’t Catch It: An Interactive Virtual-Reality Environment to Learn About COVID-19 Measures Using Gamification Elements,” in Mensch und Computer, 2021, pp. 593–596. doi: 10.1145/3473856.3474031.
    30. K. C. Kwan and H. Fu, “Automatic Image Checkpoint Selection for Guider-Follower Pedestrian Navigation,” Computer Graphics Forum, vol. 40, no. 1, Art. no. 1, 2021, doi: 10.1111/cgf.14192.
    31. H. Lin, G. Chen, and F. W. Siebert, “Positional Encoding: Improving Class-Imbalanced Motorcycle Helmet use Classification,” in 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 1194–1198. doi: 10.1109/ICIP42928.2021.9506178.
    32. K. Lu et al., “Palettailor: Discriminable Colorization for Categorical Data,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030406.
    33. L. Mehl, C. Beschle, A. Barth, and A. Bruhn, “An Anisotropic Selection Scheme for Variational Optical Flow Methods with Order-Adaptive Regularisation,” in Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), 2021, pp. 140–152. doi: 10.1007/978-3-030-75549-2_12.
    34. H. Men, H. Lin, M. Jenadeleh, and D. Saupe, “Subjective Image Quality Assessment with Boosted Triplet Comparisons,” IEEE Access, vol. 9, pp. 138939–138975, 2021, doi: 10.1109/ACCESS.2021.3118295.
    35. C. Morariu, A. Bibal, R. Cutura, B. Frenay, and M. Sedlmair, “DumbleDR: Predicting User Preferences of Dimensionality Reduction Projection Quality,” arXiv preprint, Technical Report arXiv:2105.09275, 2021. [Online]. Available: https://arxiv.org/abs/2105.09275
    36. T. Müller, C. Schulz, and D. Weiskopf, “Adaptive Polygon Rendering for Interactive Visualization in the Schwarzschild Spacetime,” European Journal of Physics, vol. 43, no. 1, Art. no. 1, 2021, doi: 10.1088/1361-6404/ac2b36.
    37. G. J. Rijken et al., “Illegible Semantics: Exploring the Design Space of Metal Logos,” 2021. [Online]. Available: https://arxiv.org/abs/2109.01688
    38. B. Roziere et al., “EvolGAN: Evolutionary Generative Adversarial Networks,” in Computer Vision – ACCV 2020, Cham, Nov. 2021, pp. 679–694. doi: 10.1007/978-3-030-69538-5_41.
    39. B. Roziere et al., “Tarsier: Evolving Noise Injection in Super-Resolution GANs,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 7028–7035. doi: 10.1109/ICPR48806.2021.9413318.
    40. C. Schulz et al., “Multi-Class Inverted Stippling,” ACM Trans. Graph., vol. 40, no. 6, Art. no. 6, Dec. 2021, doi: 10.1145/3478513.3480534.
    41. R. Sevastjanova, A.-L. Kalouli, C. Beck, H. Schäfer, and M. El-Assady, “Explaining Contextualization in Language Models using Visual Analytics,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, 2021, pp. 464–476. doi: 10.18653/v1/2021.acl-long.39.
    42. S. Su, V. Hosu, H. Lin, Y. Zhang, and D. Saupe, “KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects,” in 32nd British Machine Vision Conference, 2021, pp. 1–12. [Online]. Available: https://www.bmvc2021-virtualconference.com/assets/papers/0868.pdf
    43. K. Vock, S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “IDIAR: Augmented Reality Dashboards to Supervise Mobile Intervention Studies,” New York, NY, 2021. doi: 10.1145/3473856.3473876.
    44. J. Wieland, J. Zagermann, J. Müller, and H. Reiterer, “Separation, Composition, or Hybrid? – Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality,” in 2021 IEEE International Symposium on Mixed and Augmented Reality, Piscataway, NJ, 2021, pp. 403–412. doi: 10.1109/ISMAR52148.2021.00057.
    45. L. Zhou, C. R. Johnson, and D. Weiskopf, “Data-Driven Space-Filling Curves,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030473.
  3. 2020

    1. P. Angelini, S. Chaplick, S. Cornelsen, and G. Da Lozzo, “Planar L-Drawings of Bimodal Graphs,” in Graph Drawing and Network Visualization, Cham, 2020, pp. 205–219. doi: 10.1007/978-3-030-68766-3_17.
    2. P. Balestrucci et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in 2020 IEEE Workshop on Evaluation and Beyond – Methodological Approaches to Visualization (BELIV), 2020, pp. 11–18. doi: 10.1109/BELIV51497.2020.00009.
    3. H. Bast, P. Brosi, and S. Storandt, “Metro Maps on Octilinear Grid Graphs,” Computer Graphics Forum, vol. 39, pp. 357–367, 2020, doi: 10.1111/cgf.13986.
    4. C. Beck, H. Booth, M. El-Assady, and M. Butt, “Representation Problems in Linguistic Annotations: Ambiguity, Variation, Uncertainty, Error and Bias,” in Proceedings of the 14th Linguistic Annotation Workshop, Barcelona, Spain, 2020, pp. 60–73. [Online]. Available: https://www.aclweb.org/anthology/2020.law-1.6
    5. C. Beck, “DiaSense at SemEval-2020 Task 1: Modeling Sense Change via Pre-trained BERT Embeddings,” in Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona (online), 2020, pp. 50–58. [Online]. Available: https://www.aclweb.org/anthology/2020.semeval-1.4
    6. M. Beck and S. Storandt, “Puzzling Grid Embeddings,” in Proceedings of the Symposium on Algorithm Engineering and Experiments, ALENEX 2020, Salt Lake City, UT, USA, January 5-6, 2020, 2020, pp. 94–105. doi: 10.1137/1.9781611976007.8.
    7. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “SepEx: Visual Analysis of Class Separation Measures,” in Proceedings of the International Workshop on Visual Analytics (EuroVA), 2020, pp. 1–5. doi: 10.2312/eurova.20201079.
    8. F. Bishop, J. Zagermann, U. Pfeil, G. Sanderson, H. Reiterer, and U. Hinrichs, “Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2020, doi: 10.1109/TVCG.2019.2934804.
    9. M. Blumenschein, L. J. Debbeler, N. C. Lages, B. Renner, D. A. Keim, and M. El-Assady, “v-plots: Designing Hybrid Charts for the Comparative Analysis of Data Distributions,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14002.
    10. M. Blumenschein, X. Zhang, D. Pomerenke, D. A. Keim, and J. Fuchs, “Evaluating Reordering Strategies for Cluster Identification in Parallel Coordinates,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14000.
    11. M. Blumenschein, “Pattern-Driven Design of Visualizations for High-Dimensional Data,” Universität Konstanz, Konstanz, 2020. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-18wp9dhmhapww8
    12. M. Borowski, J. Zagermann, C. N. Klokmose, H. Reiterer, and R. Rädle, “Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments,” in Proceedings of the ACM Technical Symposium on Computer Science Education (SIGCSE), 2020, pp. 468–474. doi: 10.1145/3328778.3366887.
    13. N. Brich et al., “Visual Analysis of Multivariate Intensive Care Surveillance Data,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2020. doi: 10.2312/vcbm.20201174.
    14. V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, pp. 2848–2862, Sep. 2020, doi: 10.1109/TVCG.2019.2898435.
    15. N. Chotisarn et al., “A Systematic Literature Review of Modern Software Visualization,” Journal of Visualization, vol. 23, no. 4, Art. no. 4, 2020, doi: 10.1007/s12650-020-00647-w.
    16. S. Cornelsen et al., “Drawing Shortest Paths in Geodetic Graphs,” in Graph Drawing and Network Visualization, Cham, 2020, pp. 333–340. doi: 10.1007/978-3-030-68766-3_26.
    17. M. Dias, D. Orellana, S. Vidal, L. Merino, and A. Bergel, “Evaluating a Visual Approach for Understanding JavaScript Source Code,” in Proceedings of the 28th International Conference on Program Comprehension, Jul. 2020, pp. 128–138. doi: 10.1145/3387904.3389275.
    18. F. Draxler, A. Labrie, A. Schmidt, and L. L. Chuang, “Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 410:1-410:12. doi: 10.1145/3313831.3376537.
    19. F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2020, doi: 10.1109/TVCG.2020.3030445.
    20. F. Frieß, C. Müller, and T. Ertl, “Real-Time High-Resolution Visualisation,” in Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV), 2020, pp. 127–135. doi: 10.2312/vmv.20201195.
    21. R. Garcia and D. Weiskopf, “Inner-Process Visualization of Hidden States in Recurrent Neural Networks,” in Proceedings of the 13th International Symposium on Visual Information Communication and Interaction, Eindhoven, Netherlands, 2020, pp. 20:1-20:5. doi: 10.1145/3430036.3430047.
    22. T. Guha et al., “ATQAM/MAST’20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends,” in Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 2020, pp. 4758–4760. doi: 10.1145/3394171.3421895.
    23. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces, 2020, pp. 9:1-9:9. doi: 10.1145/3399715.3399814.
    24. V. Hosu et al., “From Technical to Aesthetics Quality Assessment and Beyond: Challenges and Potential,” in Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, Seattle, WA, USA, 2020, pp. 19–20. doi: 10.1145/3423268.3423589.
    25. M. Jenadeleh, M. Pedersen, and D. Saupe, “Blind Quality Assessment of Iris Images Acquired in Visible Light for Biometric Recognition,” Sensors, vol. 20, no. 5, Art. no. 5, 2020, doi: 10.3390/s20051308.
    26. U. Ju, L. L. Chuang, and C. Wallraven, “Acoustic Cues Increase Situational Awareness in Accident Situations: A VR Car-Driving Study,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–11, 2020, doi: 10.1109/TITS.2020.3035374.
    27. T. Kosch, A. Schmidt, S. Thanheiser, and L. L. Chuang, “One Does Not Simply RSVP: Mental Workload to Select Speed Reading Parameters Using Electroencephalography,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 637:1-637:13. doi: 10.1145/3313831.3376766.
    28. M. Kraus et al., “A Comparative Study of Orientation Support Tools in Virtual Reality Environments with Virtual Teleportation,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020, pp. 227–238. doi: 10.1109/ISMAR50242.2020.00046.
    29. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 546:1–546:14. doi: 10.1145/3313831.3376675.
    30. A. Kumar, D. Mohanty, K. Kurzhals, F. Beck, D. Weiskopf, and K. Mueller, “Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data,” Stuttgart, Germany, 2020. doi: 10.1145/3379157.3391988.
    31. A. Kumar, P. Howlader, R. Garcia, D. Weiskopf, and K. Mueller, “Challenges in Interpretability of Neural Networks for Eye Movement Data,” Stuttgart, Germany, 2020. doi: 10.1145/3379156.3391361.
    32. K. Kurzhals, M. Burch, and D. Weiskopf, “What We See and What We Get from Visualization: Eye Tracking Beyond Gaze Distributions and Scanpaths,” CoRR, vol. abs/2009.14515, 2020, [Online]. Available: https://arxiv.org/abs/2009.14515
    33. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12. doi: 10.1145/3313831.3376266.
    34. K. Kurzhals et al., “Visual Analytics and Annotation of Pervasive Eye Tracking Video,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 16:1-16:9. doi: 10.1145/3379155.3391326.
    35. M. Lan Ha, V. Hosu, and V. Blanz, “Color Composition Similarity and Its Application in Fine-grained Similarity,” in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Piscataway, NJ, 2020, pp. 2548–2557. doi: 10.1109/WACV45572.2020.9093522.
    36. H. Lin, M. Jenadeleh, G. Chen, U. Reips, R. Hamzaoui, and D. Saupe, “Subjective Assessment of Global Picture-Wise Just Noticeable Difference,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2020, pp. 1–6. doi: 10.1109/ICMEW46912.2020.9106058.
    37. H. Lin, J. D. Deng, D. Albers, and F. W. Siebert, “Helmet Use Detection of Tracked Motorcycles Using CNN-Based Multi-Task Learning,” IEEE Access, vol. 8, pp. 162073–162084, 2020, doi: 10.1109/ACCESS.2020.3021357.
    38. H. Lin et al., “SUR-FeatNet: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Feature Learning,” CoRR, vol. abs/2001.02002, 2020, doi: 10.1007/s41233-020-00034-1.
    39. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123096.
    40. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Subjective annotation for a frame interpolation benchmark using artefact amplification,” Quality and User Experience, vol. 5, no. 1, Art. no. 1, 2020, doi: 10.1007/s41233-020-00037-y.
    41. L. Merino, M. Lungu, and C. Seidl, “Unleashing the Potentials of Immersive Augmented Reality for Software Engineering,” in 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), 2020, pp. 517–521. doi: 10.1109/SANER48275.2020.9054812.
    42. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems – Extended Abstracts (CHI-EA), 2020, pp. LBW087:1–LBW087:7. doi: 10.1145/3334480.3383017.
    43. L. Merino, M. Schwarzl, M. Kraus, M. Sedlmair, D. Schmalstieg, and D. Weiskopf, “Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019),” 2020. doi: 10.1109/ISMAR50242.2020.00069.
    44. D. Okanovic et al., “Can a Chatbot Support Software Engineers with Load Testing? Approach and Experiences,” in Proceedings of the ACM/SPEC International Conference on Performance Engineering (ICPE), 2020, pp. 120–129. doi: 10.1145/3358960.3375792.
    45. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 50:1-50:5. doi: 10.1145/3379156.3391829.
    46. N. Patkar, L. Merino, and O. Nierstrasz, “Towards Requirements Engineering with Immersive Augmented Reality,” in Conference Companion of the 4th International Conference on Art, Science, and Engineering of Programming, Porto, Portugal, 2020, pp. 55–60. doi: 10.1145/3397537.3398472.
    47. N. Rodrigues, C. Schulz, A. Lhuillier, and D. Weiskopf, “Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces,” in Proceedings of Graphics Interface 2020, 2020, pp. 382–392. doi: 10.20380/GI2020.38.
    48. B. Roziere et al., “Evolutionary Super-Resolution,” in Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 2020, pp. 151–152. doi: 10.1145/3377929.3389959.
    49. D. Schubring, M. Kraus, C. Stolz, N. Weiler, D. A. Keim, and H. Schupp, “Virtual Reality Potentiates Emotion and Task Effects of Alpha/Beta Brain Oscillations,” Brain Sciences, vol. 10, no. 8, Art. no. 8, 2020, doi: 10.3390/brainsci10080537.
    50. C. Schätzle and M. Butt, “Visual Analytics for Historical Linguistics: Opportunities and Challenges,” Journal of Data Mining and Digital Humanities, 2020, doi: 10.46298/jdmdh.6707.
    51. M. Sondag, W. Meulemans, C. Schulz, K. Verbeek, D. Weiskopf, and B. Speckmann, “Uncertainty Treemaps,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2020, pp. 111–120. doi: 10.1109/PacificVis48177.2020.7614.
    52. J. Spoerhase, S. Storandt, and J. Zink, “Simplification of Polyline Bundles,” in 17th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2020, June 22-24, 2020, Tórshavn, Faroe Islands, 2020, pp. 35:1–35:20. doi: 10.4230/LIPIcs.SWAT.2020.35.
    53. T. Stankov and S. Storandt, “Maximum Gap Minimization in Polylines,” in Web and Wireless Geographical Information Systems - 18th International Symposium, W2GIS 2020, Wuhan, China, November 13-14, 2020, Proceedings, 2020, pp. 181–196. doi: 10.1007/978-3-030-60952-8_19.
    54. A. Streichert, K. Angerbauer, M. Schwarzl, and M. Sedlmair, “Comparing Input Modalities for Shape Drawing Tasks,” in Proceedings of the Symposium on Eye Tracking Research & Applications – Short Papers (ETRA-SP), 2020, no. 51, pp. 1–5. doi: 10.1145/3379156.3391830.
    55. D. R. Wahl et al., “Why We Eat What We Eat: Assessing Dispositional and In-the-Moment Eating Motives by Using Ecological Momentary Assessment,” JMIR mHealth and uHealth, vol. 8, no. 1, Art. no. 1, 2020, doi: 10.2196/13191.
    56. D. Weiskopf, “Vis4Vis: Visualization for (Empirical) Visualization Research,” in Foundations of Data Visualization, M. Chen, H. Hauser, P. Rheingans, and G. Scheuermann, Eds. Springer International Publishing, 2020, pp. 209–224. doi: 10.1007/978-3-030-34444-3_10.
    57. O. Wiedemann, V. Hosu, H. Lin, and D. Saupe, “Foveated Video Coding for Real-Time Streaming Applications,” in 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123080.
    58. O. Wiedemann and D. Saupe, “Gaze Data for Quality Assessment of Foveated Video,” Stuttgart, Germany, 2020. doi: 10.1145/3379157.3391656.
    59. J. Zagermann, U. Pfeil, P. von Bauer, D. Fink, and H. Reiterer, “‘It’s in my other hand!’: Studying the Interplay of Interaction Techniques and Multi-Tablet Activities,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 413:1-413:13. doi: 10.1145/3313831.3376540.
    60. X. Zhao, H. Lin, P. Guo, D. Saupe, and H. Liu, “Deep Learning vs. Traditional Algorithms for Saliency Prediction of Distorted Images,” in 2020 IEEE International Conference on Image Processing (ICIP), 2020, pp. 156–160. doi: 10.1109/ICIP40778.2020.9191203.
    61. L. Zhou, M. Rivinius, C. R. Johnson, and D. Weiskopf, “Photographic High-Dynamic-Range Scalar Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 6, Art. no. 6, 2020, doi: 10.1109/TVCG.2020.2970522.
    62. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications – Short Papers (ETRA-SP), 2020, pp. 49:1-49:5. doi: 10.1145/3379156.3391835.
  4. 2019

    1. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 141–145. doi: 10.1109/VISUAL.2019.8933620.
    2. P. Balestrucci and M. Ernst, “Visuo-motor adaptation during interaction with a user-adaptive system,” Journal of Vision, vol. 19, p. 187a, Sep. 2019, doi: 10.1167/19.10.187a.
    3. T. M. Benz, B. Riedl, and L. L. Chuang, “Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), 2019, pp. 379–387. doi: 10.1145/3342197.3344515.
    4. H. Booth and C. Schätzle, “The Syntactic Encoding of Information Structure in the History of Icelandic,” in Proceedings of the LFG’19 Conference, 2019, pp. 69–89. [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2019/lfg2019-booth-schaetzle.pdf
    5. V. Bruder, C. Schulz, R. Bauer, S. Frey, D. Weiskopf, and T. Ertl, “Voronoi-Based Foveated Volume Rendering,” in Proceedings of the Eurographics Conference on Visualization - Short Papers (EuroVis), 2019, pp. 67–71. doi: 10.2312/evs.20191172.
    6. V. Bruder, K. Kurzhals, S. Frey, D. Weiskopf, and T. Ertl, “Space-Time Volume Visualization of Gaze and Stimulus,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2019, pp. 12:1-12:9. doi: 10.1145/3314111.3319812.
    7. V. Bruder et al., “Volume-Based Large Dynamic Graph Analysis Supported by Evolution Provenance,” Multimedia Tools and Applications, vol. 78, no. 23, Art. no. 23, 2019, doi: 10.1007/s11042-019-07878-6.
    8. T. Castermans, M. van Garderen, W. Meulemans, M. Nöllenburg, and X. Yuan, “Short Plane Supports for Spatial Hypergraphs,” in Graph Drawing and Network Visualization. GD 2018. Lecture Notes in Computer Science, vol. 11282, T. Biedl and A. Kerren, Eds. Springer International Publishing, 2019, pp. 53–66. doi: 10.1007/978-3-030-04414-5_4.
    9. F. L. Dennig, T. Polk, Z. Lin, T. Schreck, H. Pfister, and M. Behrisch, “FDive: Learning Relevance Models using Pattern-based Similarity Measures,” Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2019, doi: 10.1109/VAST47406.2019.8986940.
    10. C. Fan et al., “SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743204.
    11. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D Scalar Fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, Art. no. 6, 2019, doi: 10.1109/TVCG.2019.2903945.
    12. V. Hosu, H. Lin, T. Sziranyi, and D. Saupe, “KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment,” CoRR, vol. abs/1910.06180, 2019, doi: 10.1109/TIP.2020.2967829.
    13. V. Hosu, B. Goldlücke, and D. Saupe, “Effective Aesthetics Prediction with Multi-level Spatially Pooled Features,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9367–9375, 2019, doi: 10.1109/CVPR.2019.00960.
    14. S. Jaeger et al., “Challenges for Brain Data Analysis in VR Environments,” in 2019 IEEE Pacific Visualization Symposium (PacificVis), 2019, pp. 42–46. doi: 10.1109/PacificVis.2019.00013.
    15. K. Klein, M. Aichem, B. Sommer, S. Erk, Y. Zhang, and F. Schreiber, “TEAMwISE: Synchronised Immersive Environments for Exploration and Analysis of Movement Data,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), 2019, pp. 9:1-9:5. doi: 10.1145/3356422.3356450.
    16. K. Klein et al., “Fly with the flock: immersive solutions for animal movement visualization and analytics,” Journal of the Royal Society Interface, vol. 16, no. 153, Art. no. 153, 2019, doi: 10.1098/rsif.2018.0794.
    17. K. Klein et al., “Visual Analytics for Cheetah Behaviour Analysis,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), 2019, pp. 16:1-16:8. [Online]. Available: http://dblp.uni-trier.de/db/conf/vinci/vinci2019.html#0001JMWHBS19
    18. H. Lin, V. Hosu, and D. Saupe, “KADID-10k: A Large-scale Artificially Distorted IQA Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–3. doi: 10.1109/QoMEX.2019.8743252.
    19. H. Men, H. Lin, V. Hosu, D. Maurer, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Motion Compensated Frame Interpolation,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743221.
    20. M. Miller, X. Zhang, J. Fuchs, and M. Blumenschein, “Evaluating Ordering Strategies of Star Glyph Axes,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 91–95. doi: 10.1109/VISUAL.2019.8933656.
    21. T. Munz, L. L. Chuang, S. Pannasch, and D. Weiskopf, “VisME: Visual microsaccades explorer,” Journal of Eye Movement Research, vol. 12, no. 6, Art. no. 6, 2019, doi: 10.16910/jemr.12.6.5.
    22. C. Müller, M. Braun, and T. Ertl, “Optimised Molecular Graphics on the HoloLens,” in IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019, Osaka, Japan, March 23-27, 2019, 2019, pp. 97–102. doi: 10.1109/VR.2019.8798111.
    23. J. Müller, J. Zagermann, J. Wieland, U. Pfeil, and H. Reiterer, “A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices,” in Mensch und Computer 2019 – Tagungsband (MuC), 2019, pp. 399–410. doi: 10.1145/3340764.3340773.
    24. R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of Simultaneous Orientation Contrast in Superimposed Textures,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2019, vol. 3: IVAPP, pp. 48–57. doi: 10.5220/0007356800480057.
    25. D. Pomerenke, F. L. Dennig, D. A. Keim, J. Fuchs, and M. Blumenschein, “Slope-Dependent Rendering of Parallel Coordinates to Reduce Density Distortion and Ghost Clusters,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 86–90. doi: 10.1109/VISUAL.2019.8933706.
    26. K. Schatz et al., “Visual Analysis of Structure Formation in Cosmic Evolution,” in Proceedings of the IEEE Scientific Visualization Conference (SciVis), 2019, pp. 33–41. doi: 10.1109/scivis47405.2019.8968855.
    27. C. Schätzle and H. Booth, “DiaHClust: an Iterative Hierarchical Clustering Approach for Identifying Stages in Language Change,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, 2019, pp. 126–135. doi: 10.18653/v1/W19-4716.
    28. C. Schätzle, F. L. Dennig, M. Blumenschein, D. A. Keim, and M. Butt, “Visualizing Linguistic Change as Dimension Interactions,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, 2019, pp. 272–278. doi: 10.18653/v1/W19-4734.
    29. N. Silva et al., “Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2019, pp. 11:1-11:9. doi: 10.1145/3314111.3319919.
    30. B. Sommer et al., “Tiled Stereoscopic 3D Display Wall - Concept, Applications and Evaluation,” Electronic Imaging, vol. 2019, no. 3, Art. no. 3, 2019, doi: 10.2352/ISSN.2470-1173.2019.3.SDA-641.
    31. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-Based Aspect Ratio Selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2865266.
    32. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2019.2934796.
    33. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2864506.
    34. L. Zhou, R. Netzel, D. Weiskopf, and C. R. Johnson, “Spectral Visualization Sharpening,” in Proceedings of the ACM Symposium on Applied Perception (SAP), 2019, pp. 18:1-18:9. doi: 10.1145/3343036.3343133.
  5. 2018

    1. H. Bast, P. Brosi, and S. Storandt, “Efficient Generation of Geographically Accurate Transit Maps,” in Proceedings of the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL), 2018, pp. 13–22. doi: 10.1145/3274895.3274955.
    2. M. Behrisch et al., “Quality Metrics for Information Visualization,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13446.
    3. H. Ben Lahmar, M. Herschel, M. Blumenschein, and D. A. Keim, “Provenance-based Visual Data Exploration with EVLIN,” in Proceedings of the Conference on Extending Database Technology (EDBT), 2018, pp. 686–689. doi: 10.5441/002/edbt.2018.85.
    4. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2018, pp. 36–47. doi: 10.1109/VAST.2018.8802486.
    5. S. S. Borojeni, S. C. J. Boll, W. Heuten, H. H. Bülthoff, and L. L. Chuang, “Feel the Movement: Real Motion Influences Responses to Take-Over Requests in Highly Automated Vehicles,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 246:1-246:13. doi: 10.1145/3173574.3173820.
    6. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-Based Large Dynamic Graph Analytics,” in Proceedings of the International Conference Information Visualisation (IV), 2018, pp. 210–219. doi: 10.1109/iV.2018.00045.
    7. L. L. Chuang and U. Pfeil, “Transparency and Openness Promotion Guidelines for HCI,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2018, pp. SIG04:1-SIG04:4. doi: 10.1145/3170427.3185377.
    8. M. de Ridder, K. Klein, and J. Kim, “A Review and Outlook on Visual Analytics for Uncertainties in Functional Magnetic Resonance Imaging,” Brain Informatics, vol. 5, no. 2, Art. no. 2, 2018, doi: 10.1186/s40708-018-0083-0.
    9. L. J. Debbeler, M. Gamp, M. Blumenschein, D. A. Keim, and B. Renner, “Polarized But Illusory Beliefs About Tap and Bottled Water: A Product- and Consumer-Oriented Survey and Blind Tasting Experiment,” Science of the Total Environment, vol. 643, pp. 1400–1410, 2018, doi: 10.1016/j.scitotenv.2018.06.190.
    10. T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 419:1–419:12. doi: 10.1145/3173574.3173993.
    11. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13438.
    12. F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), 2018, pp. 87–91. doi: 10.1109/LDAV.2018.8739215.
    13. M. Ghaffar et al., “3D Modelling and Visualisation of Heterogeneous Cell Membranes in Blender,” in Proceedings of the 11th International Symposium on Visual Information Communication and Interaction, Växjö, Sweden, 2018, pp. 64–71. doi: 10.1145/3231622.3231639.
    14. C. Glatz and L. L. Chuang, “The Time Course of Auditory Looming Cues in Redirecting Visuo-Spatial Attention,” Scientific Reports, vol. 9, pp. 743:1-743:10, 2018, doi: 10.1038/s41598-018-36033-8.
    15. C. Glatz, S. S. Krupenia, H. H. Bülthoff, and L. L. Chuang, “Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 472:1-472:13. doi: 10.1145/3173574.3174046.
    16. J. Görtler, R. Kehlbeck, and O. Deussen, “A Visual Exploration of Gaussian Processes,” Distill, 2018. doi: 10.23915/distill.00017.
    17. J. Görtler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble Treemaps for Uncertainty Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2743959.
    18. A. Hautli-Janisz, C. Rohrdantz, C. Schätzle, A. Stoffel, M. Butt, and D. A. Keim, “Visual Analytics in Diachronic Linguistic Investigations,” Linguistic Visualizations, 2018.
    19. V. Hosu, H. Lin, and D. Saupe, “Expertise Screening in Crowdsourcing Image Quality,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 276–281. doi: 10.1109/QoMEX.2018.8463427.
    20. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “Employing Tangible Visualisations in Augmented Reality with Mobile Devices,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), 2018, pp. 1–4. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1iooenfo4fofm8
    21. K. Hänsel, R. Poguntke, H. Haddadi, A. Alomainy, and A. Schmidt, “What to Put on the User: Sensing Technologies for Studies and Physiology Aware Systems,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 145:1-145:14. doi: 10.1145/3173574.3173719.
    22. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime Quality Assessment of Iris Biometrics Under Visible Light,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPRW), CVPR Workshops, 2018, pp. 443–452. doi: 10.1109/CVPRW.2018.00085.
    23. J. Karolus, H. Schuff, T. Kosch, P. W. Wozniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Proceedings of the Designing Interactive Systems Conference (DIS), 2018, pp. 651–655. doi: 10.1145/3196709.3196803.
    24. M. Klapperstueck et al., “Contextuwall: Multi-site Collaboration Using Display Walls,” Journal of Visual Languages & Computing, vol. 46, pp. 35–42, 2018, doi: 10.1016/j.jvlc.2017.10.002.
    25. P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 345:1–345:9. doi: 10.1145/3173574.3173919.
    26. T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly.,” Proceedings of the ACM on Human-Computer Interaction (ACMHCI), vol. 2, pp. 11:1-11:20, 2018, doi: 10.1145/3229093.
    27. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural Networks for the Classification of Building Use from Street-view Imagery,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV–2, pp. 177–184, 2018, doi: 10.5194/isprs-annals-IV-2-177-2018.
    28. N. Marniok and B. Goldluecke, “Real-time Variational Range Image Fusion and Visualization for Large-scale Scenes using GPU Hash Tables,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2018, pp. 912–920. doi: 10.1109/WACV.2018.00105.
    29. K. Marriott et al., Immersive Analytics, vol. 11190. Springer International Publishing, 2018. doi: 10.1007/978-3-030-01388-2.
    30. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018, pp. 106:1-106:13. [Online]. Available: http://bmvc2018.org/contents/papers/0377.pdf
    31. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018, pp. 86:1-86:13. arXiv: 1806.00800.
    32. D. Maurer, N. Marniok, B. Goldluecke, and A. Bruhn, “Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimation,” in Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11212, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Springer International Publishing, 2018, pp. 575–592. doi: 10.1007/978-3-030-01237-3_35.
    33. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision, vol. 126, no. 12, Art. no. 12, 2018, doi: 10.1007/s11263-018-1079-1.
    34. H. Men, H. Lin, and D. Saupe, “Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 1–3. doi: 10.1109/QoMEX.2018.8463426.
    35. A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of Vection Latencies in the Full-Body Illusion,” PLoS ONE, vol. 13, no. 12, Art. no. 12, 2018, doi: 10.1371/journal.pone.0209189.
    36. S. Oppold and M. Herschel, “Provenance for Entity Resolution,” in Provenance and Annotation of Data and Processes. IPAW 2018. Lecture Notes in Computer Science, vol. 11017, K. Belhajjame, A. Gehani, and P. Alper, Eds. Springer International Publishing, 2018, pp. 226–230. doi: 10.1007/978-3-319-98379-0_25.
    37. N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744018.
    38. N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale Scanpath Visualization and Filtering,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), 2018, pp. 2:1-2:5. doi: 10.1145/3205929.3205931.
    39. D. Sacha et al., “SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744805.
    40. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory Task Irrelevance: A Basis for Inattentional Deafness,” Human Factors, vol. 60, no. 3, Art. no. 3, 2018, doi: 10.1177/0018720818760919.
    41. C. Schulz, K. Schatz, M. Krone, M. Braun, T. Ertl, and D. Weiskopf, “Uncertainty Visualization for Secondary Structures of Proteins,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2018, pp. 96–105. doi: 10.1109/PacificVis.2018.00020.
    42. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2018, pp. 87–95. doi: 10.1109/VISSOFT.2018.00017.
    43. V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an Uncanny Valley of Virtual Animals? A Quantitative and Qualitative Investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018, doi: 10.1016/j.ijhcs.2017.11.003.
    44. C. Schätzle, “Dative Subjects: Historical Change Visualized,” PhD diss., Universität Konstanz, Konstanz, 2018. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1d917i4avuz1a2
    45. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an Interpretable Latent Space: An Intuitive Comparison of Autoencoders with Variational Autoencoders,” 2018. [Online]. Available: https://thilospinner.com/towards-an-interpretable-latent-space/
    46. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), 2018, pp. 119–123. doi: 10.2312/eurovisshort.20181089.
    47. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744138.
    48. D. Varga, D. Saupe, and T. Szirányi, “DeepRN: A Content Preserving Deep Architecture for Blind Image Quality Assessment,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2018, pp. 1–6. doi: 10.1109/ICME.2018.8486528.
    49. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, Art. no. 5, 2018, doi: 10.1109/TVCG.2017.2701829.
    50. V. Yoghourdjian, T. Dwyer, K. Klein, K. Marriott, and M. Wybrow, “Graph Thumbnails: Identifying and Comparing Multiple Graphs at a Glance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 12, Art. no. 12, 2018, doi: 10.1109/TVCG.2018.2790961.
    51. J. Zagermann, U. Pfeil, and H. Reiterer, “Studying Eye Movements as a Basis for Measuring Cognitive Load,” Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), pp. LBW095:1-LBW095:6, 2018, doi: 10.1145/3170427.3188628.
    52. Y. Zhu et al., “Genome-scale Metabolic Modeling of Responses to Polymyxins in Pseudomonas Aeruginosa,” GigaScience, vol. 7, no. 4, Art. no. 4, 2018, doi: 10.1093/gigascience/giy021.
  6. 2017

    1. Y. Abdelrahman, P. Knierim, P. W. Wozniak, N. Henze, and A. Schmidt, “See Through the Fire: Evaluating the Augmentation of Visual Perception of Firefighters Using Depth and Thermal Cameras,” in Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and Symposium on Wearable Computers (UbiComp/ISWC), 2017, pp. 693–696. doi: 10.1145/3123024.3129269.
    2. J. Allsop, R. Gray, H. Bülthoff, and L. Chuang, “Eye Movement Planning on Single-Sensor-Single-Indicator Displays is Vulnerable to User Anxiety and Cognitive Load,” Journal of Eye Movement Research, vol. 10, no. 5, Art. no. 5, 2017, doi: 10.16910/jemr.10.5.8.
    3. M. A. Baazizi, H. Ben Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Proceedings of the Conference on Extending Database Technology (EDBT), 2017, pp. 222–233. doi: 10.5441/002/edbt.2017.21.
    4. D. Bahrdt et al., “Growing Balls in ℝd,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), 2017, pp. 247–258. doi: 10.1137/1.9781611974768.20.
    5. A. Barth, B. Harrach, N. Hyvönen, and L. Mustonen, “Detecting Stochastic Inclusions in Electrical Impedance Tomography,” Inverse Problems, vol. 33, no. 11, Art. no. 11, 2017, doi: 10.1088/1361-6420/aa8f5c.
    6. M. Behrisch et al., “Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598467.
    7. H. Ben Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), 2017, pp. 1–7.
    8. H. Booth, C. Schätzle, K. Börjars, and M. Butt, “Dative Subjects and the Rise of Positional Licensing in Icelandic,” in Proceedings of the LFG’17 Conference, 2017, pp. 104–124. [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2017/lfg2017-bsbb.pdf
    9. V. Bruder, S. Frey, and T. Ertl, “Prediction-Based Load Balancing and Resolution Tuning for Interactive Volume Raycasting,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.09.001.
    10. M. Burch, M. Hlawatsch, and D. Weiskopf, “Visualizing a Sequence of a Thousand Graphs (or Even More),” Computer Graphics Forum, vol. 36, no. 3, Art. no. 3, 2017, doi: 10.1111/cgf.13185.
    11. L. L. Chuang, C. Glatz, and S. S. Krupenia, “Using EEG to Understand why Behavior to Auditory In-vehicle Notifications Differs Across Test Environments,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), 2017, pp. 123–133. doi: 10.1145/3122986.3123017.
    12. M. Correll and J. Heer, “Surprise! Bayesian Weighting for De-Biasing Thematic Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg23.html#CorrellH17
    13. M. de Ridder, K. Klein, and J. Kim, “Temporaltracks: Visual Analytics for Exploration of 4D fMRI Time-series Coactivation,” in Proceedings of the Computer Graphics International Conference (CGI), 2017, pp. 13:1-13:6. doi: 10.1145/3095140.3095153.
    14. K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural Correlates of Decision Making on Whole Body Yaw Rotation: an fNIRS Study,” Neuroscience Letters, vol. 654, pp. 56–62, 2017, doi: 10.1016/j.neulet.2017.04.053.
    15. O. Deussen, M. Spicker, and Q. Zheng, “Weighted Linde-Buzo-Gray Stippling,” ACM Transactions on Graphics, vol. 36, no. 6, Art. no. 6, 2017, doi: 10.1145/3130800.3130819.
    16. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), 2017, pp. 1–13. [Online]. Available: https://dl.acm.org/doi/abs/10.5555/3183865.3183883
    17. T. Dingler, A. Schmidt, and T. Machulla, “Building Cognition-Aware Systems: A Mobile Toolkit for Extracting Time-of-Day Fluctuations of Cognitive Performance,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1, no. 3, Art. no. 3, 2017, doi: 10.1145/3132025.
    18. S. Egger-Lampl et al., “Crowdsourcing Quality of Experience Experiments,” in Information Systems and Applications, incl. Internet/Web, and HCI, vol. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, Revised Contributions, no. LNCS 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 154–190. doi: 10.1007/978-3-319-66435-4_7.
    19. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” Informatics, vol. 4, no. 3, p. 27, 2017, doi: 10.3390/informatics4030027.
    20. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2599042.
    21. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum, vol. 36, no. 8, Art. no. 8, 2017, doi: 10.1111/cgf.13070.
    22. D. Fritsch, “Photogrammetrische Auswertung digitaler Bilder – Neue Methoden der Kamerakalibration, dichten Bildzuordnung und Interpretation von Punktwolken,” in Photogrammetrie und Fernerkundung, C. Heipke, Ed. Springer Spektrum, 2017, pp. 157–196. doi: 10.1007/978-3-662-47094-7_41.
    23. D. Fritsch and M. Klein, “3D and 4D Modeling for AR and VR App Developments,” in Proceedings of the International Conference on Virtual System & Multimedia (VSMM), 2017, pp. 1–8. doi: 10.1109/VSMM.2017.8346270.
    24. S. Funke, T. Mendel, A. Miller, S. Storandt, and M. Wiebe, “Map Simplification with Topology Constraints: Exactly and in Practice,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), 2017, pp. 185–196. doi: 10.1137/1.9781611974768.15.
    25. S. Funke, N. Schnelle, and S. Storandt, “URAN: A Unified Data Structure for Rendering and Navigation,” in Web and Wireless Geographical Information Systems. W2GIS 2017. Lecture Notes in Computer Science, vol. 10181, D. Brosset, C. Claramunt, X. Li, and T. Wang, Eds. 2017, pp. 66–82. doi: 10.1007/978-3-319-55998-8_5.
    26. U. Gadiraju et al., “Crowdsourcing Versus the Laboratory: Towards Human-centered Experiments Using the Crowd,” in Information Systems and Applications, incl. Internet/Web, and HCI, vol. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, Revised Contributions, no. LNCS 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 6–26. doi: 10.1007/978-3-319-66435-4_2.
    27. P. Gralka, C. Schulz, G. Reina, D. Weiskopf, and T. Ertl, “Visual Exploration of Memory Traces and Call Stacks,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2017, pp. 54–63. doi: 10.1109/VISSOFT.2017.15.
    28. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” 2017. doi: 10.2312/eurp.20171166.
    29. M. Herschel, R. Diestelkämper, and H. Ben Lahmar, “A Survey on Provenance - What for? What form? What from?,” The VLDB Journal, vol. 26, pp. 881–906, 2017, doi: 10.1007/s00778-017-0486-1.
    30. V. Hosu et al., “The Konstanz Natural Video Database (KoNViD-1k),” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2017, pp. 1–6. doi: 10.1109/QoMEX.2017.7965673.