All Publications

  1. 2021

    1. H. Ben Lahmar and M. Herschel, “Collaborative filtering over evolution provenance data for interactive visual data exploration,” Information Systems, vol. 95, p. 101620, 2021, doi: 10.1016/j.is.2020.101620.
    2. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “ProSeCo: Visual analysis of class separation measures and dataset characteristics,” Computers & Graphics, vol. 96, pp. 48–60, 2021, doi: 10.1016/j.cag.2021.03.004.
    3. R. Bian et al., “Implicit Multidimensional Projection of Local Subspaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030368.
    4. H. Booth and C. Beck, “Verb-second and Verb-first in the History of Icelandic,” Journal of Historical Syntax, vol. 5, no. 28, Art. no. 28, 2021, doi: 10.18148/hs/2021.v5i28.112.
    5. C. Bu et al., “SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030404.
    6. M. Burch, W. Huang, M. Wakefield, H. C. Purchase, D. Weiskopf, and J. Hua, “The State of the Art in Empirical User Evaluation of Graph Visualizations,” IEEE Access, vol. 9, pp. 4173–4198, 2021, doi: 10.1109/ACCESS.2020.3047616.
    7. F. Götz-Hahn, V. Hosu, H. Lin, and D. Saupe, “KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild,” IEEE Access, vol. 9, pp. 72139–72160, 2021, doi: 10.1109/ACCESS.2021.3077642.
    8. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery, 2021. doi: 10.1145/3411764.3445298.
    9. K. Klein, M. Aichem, Y. Zhang, S. Erk, B. Sommer, and F. Schreiber, “TEAMwISE: synchronised immersive environments for exploration and analysis of animal behaviour,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00746-2.
    10. K. Klein et al., “Visual analytics of sensor movement data for cheetah behaviour analysis,” Journal of Visualization, 2021, doi: 10.1007/s12650-021-00742-6.
    11. K. C. Kwan and H. Fu, “Automatic Image Checkpoint Selection for Guider-Follower Pedestrian Navigation,” Computer Graphics Forum, vol. 40, no. 1, Art. no. 1, 2021, doi: 10.1111/cgf.14192.
    12. K. Lu et al., “Palettailor: Discriminable Colorization for Categorical Data,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030406.
    13. L. Mehl, C. Beschle, A. Barth, and A. Bruhn, “An Anisotropic Selection Scheme for Variational Optical Flow Methods with Order-Adaptive Regularisation,” in Proc. International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), 2021, pp. 140–152. doi: 10.1007/978-3-030-75549-2_12.
    14. B. Roziere et al., “EvolGAN: Evolutionary Generative Adversarial Networks,” in Computer Vision – ACCV 2020, Cham, Nov. 2021, pp. 679–694. doi: 10.1007/978-3-030-69538-5_41.
    15. B. Roziere et al., “Tarsier: Evolving Noise Injection in Super-Resolution GANs,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 7028–7035. doi: 10.1109/ICPR48806.2021.9413318.
    16. K. Vock, S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “IDIAR: Augmented Reality Dashboards to Supervise Mobile Intervention Studies,” New York, NY, 2021. doi: 10.1145/3473856.3473876.
    17. L. Zhou, C. R. Johnson, and D. Weiskopf, “Data-Driven Space-Filling Curves,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030473.
  2. 2020

    1. P. Angelini, S. Chaplick, S. Cornelsen, and G. Da Lozzo, “Planar L-Drawings of Bimodal Graphs,” in Graph Drawing and Network Visualization, Cham, 2020, pp. 205–219. doi: 10.1007/978-3-030-68766-3_17.
    2. P. Balestrucci et al., “Pipelines Bent, Pipelines Broken: Interdisciplinary Self-Reflection on the Impact of COVID-19 on Current and Future Research (Position Paper),” in 2020 IEEE Workshop on Evaluation and Beyond – Methodological Approaches to Visualization (BELIV), 2020, pp. 11–18. doi: 10.1109/BELIV51497.2020.00009.
    3. H. Bast, P. Brosi, and S. Storandt, “Metro Maps on Octilinear Grid Graphs,” Computer Graphics Forum, vol. 39, no. 3, pp. 357–367, 2020, doi: 10.1111/cgf.13986.
    4. C. Beck, H. Booth, M. El-Assady, and M. Butt, “Representation Problems in Linguistic Annotations: Ambiguity, Variation, Uncertainty, Error and Bias,” in Proceedings of the 14th Linguistic Annotation Workshop, Barcelona, Spain, 2020, pp. 60–73. [Online]. Available: https://www.aclweb.org/anthology/2020.law-1.6
    5. C. Beck, “DiaSense at SemEval-2020 Task 1: Modeling Sense Change via Pre-trained BERT Embeddings,” in Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona (online), 2020, pp. 50–58. [Online]. Available: https://www.aclweb.org/anthology/2020.semeval-1.4
    6. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “SepEx: Visual Analysis of Class Separation Measures,” in Proceedings of the International Workshop on Visual Analytics (EuroVA), 2020, pp. 1–5. doi: 10.2312/eurova.20201079.
    7. F. Bishop, J. Zagermann, U. Pfeil, G. Sanderson, H. Reiterer, and U. Hinrichs, “Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2020, doi: 10.1109/TVCG.2019.2934804.
    8. M. Blumenschein, L. J. Debbeler, N. C. Lages, B. Renner, D. A. Keim, and M. El-Assady, “v-plots: Designing Hybrid Charts for the Comparative Analysis of Data Distributions,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14002.
    9. M. Blumenschein, X. Zhang, D. Pomerenke, D. A. Keim, and J. Fuchs, “Evaluating Reordering Strategies for Cluster Identification in Parallel Coordinates,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14000.
    10. M. Blumenschein, “Pattern-Driven Design of Visualizations for High-Dimensional Data,” PhD dissertation, Universität Konstanz, Konstanz, 2020. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-18wp9dhmhapww8
    11. M. Borowski, J. Zagermann, C. N. Klokmose, H. Reiterer, and R. Rädle, “Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments,” in Proceedings of the ACM Technical Symposium on Computer Science Education (SIGCSE), 2020, pp. 468–474. doi: 10.1145/3328778.3366887.
    12. N. Brich et al., “Visual Analysis of Multivariate Intensive Care Surveillance Data,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2020. doi: 10.2312/vcbm.20201174.
    13. V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, pp. 2848–2862, Sep. 2020, doi: 10.1109/TVCG.2019.2898435.
    14. N. Chotisarn et al., “A Systematic Literature Review of Modern Software Visualization,” Journal of Visualization, vol. 23, no. 4, Art. no. 4, 2020, doi: 10.1007/s12650-020-00647-w.
    15. S. Cornelsen et al., “Drawing Shortest Paths in Geodetic Graphs,” in Graph Drawing and Network Visualization, Cham, 2020, pp. 333–340. doi: 10.1007/978-3-030-68766-3_26.
    16. M. Dias, D. Orellana, S. Vidal, L. Merino, and A. Bergel, “Evaluating a Visual Approach for Understanding JavaScript Source Code,” in Proceedings of the 28th International Conference on Program Comprehension, Jul. 2020, pp. 128–138. doi: 10.1145/3387904.3389275.
    17. F. Draxler, A. Labrie, A. Schmidt, and L. L. Chuang, “Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 410:1-410:12. doi: 10.1145/3313831.3376537.
    18. F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2020, doi: 10.1109/TVCG.2020.3030445.
    19. F. Frieß, C. Müller, and T. Ertl, “Real-Time High-Resolution Visualisation,” in Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV), 2020, pp. 127–135. doi: 10.2312/vmv.20201195.
    20. T. Guha et al., “ATQAM/MAST’20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends,” in Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 2020, pp. 4758–4760. doi: 10.1145/3394171.3421895.
    21. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces, 2020, pp. 9:1-9:9. doi: 10.1145/3399715.3399814.
    22. V. Hosu et al., “From Technical to Aesthetics Quality Assessment and Beyond: Challenges and Potential,” in Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, Seattle, WA, USA, 2020, pp. 19–20. doi: 10.1145/3423268.3423589.
    23. M. Jenadeleh, M. Pedersen, and D. Saupe, “Blind Quality Assessment of Iris Images Acquired in Visible Light for Biometric Recognition,” Sensors, vol. 20, no. 5, Art. no. 5, 2020, doi: 10.3390/s20051308.
    24. U. Ju, L. L. Chuang, and C. Wallraven, “Acoustic Cues Increase Situational Awareness in Accident Situations: A VR Car-Driving Study,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–11, 2020, doi: 10.1109/TITS.2020.3035374.
    25. T. Kosch, A. Schmidt, S. Thanheiser, and L. L. Chuang, “One Does Not Simply RSVP: Mental Workload to Select Speed Reading Parameters Using Electroencephalography,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 637:1-637:13. doi: 10.1145/3313831.3376766.
    26. M. Kraus et al., “A Comparative Study of Orientation Support Tools in Virtual Reality Environments with Virtual Teleportation,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020, pp. 227–238. doi: 10.1109/ISMAR50242.2020.00046.
    27. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 546:1–546:14. doi: 10.1145/3313831.3376675.
    28. A. Kumar, D. Mohanty, K. Kurzhals, F. Beck, D. Weiskopf, and K. Mueller, “Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data,” Stuttgart, Germany, 2020. doi: 10.1145/3379157.3391988.
    29. A. Kumar, P. Howlader, R. Garcia, D. Weiskopf, and K. Mueller, “Challenges in Interpretability of Neural Networks for Eye Movement Data,” Stuttgart, Germany, 2020. doi: 10.1145/3379156.3391361.
    30. K. Kurzhals, M. Burch, and D. Weiskopf, “What We See and What We Get from Visualization: Eye Tracking Beyond Gaze Distributions and Scanpaths,” CoRR, vol. abs/2009.14515, 2020, [Online]. Available: https://arxiv.org/abs/2009.14515
    31. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12. doi: 10.1145/3313831.3376266.
    32. K. Kurzhals et al., “Visual Analytics and Annotation of Pervasive Eye Tracking Video,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 16:1-16:9. doi: 10.1145/3379155.3391326.
    33. M. Lan Ha, V. Hosu, and V. Blanz, “Color Composition Similarity and Its Application in Fine-grained Similarity,” in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Piscataway, NJ, 2020, pp. 2548–2557. doi: 10.1109/WACV45572.2020.9093522.
    34. H. Lin, M. Jenadeleh, G. Chen, U. Reips, R. Hamzaoui, and D. Saupe, “Subjective Assessment of Global Picture-Wise Just Noticeable Difference,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2020, pp. 1–6. doi: 10.1109/ICMEW46912.2020.9106058.
    35. H. Lin, J. D. Deng, D. Albers, and F. W. Siebert, “Helmet Use Detection of Tracked Motorcycles Using CNN-Based Multi-Task Learning,” IEEE Access, vol. 8, pp. 162073–162084, 2020, doi: 10.1109/ACCESS.2020.3021357.
    36. H. Lin et al., “SUR-FeatNet: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Feature Learning,” CoRR, vol. abs/2001.02002, 2020, doi: 10.1007/s41233-020-00034-1.
    37. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123096.
    38. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Subjective annotation for a frame interpolation benchmark using artefact amplification,” Quality and User Experience, vol. 5, no. 1, Art. no. 1, 2020, doi: 10.1007/s41233-020-00037-y.
    39. L. Merino, M. Lungu, and C. Seidl, “Unleashing the Potentials of Immersive Augmented Reality for Software Engineering,” in 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), 2020, pp. 517–521. doi: 10.1109/SANER48275.2020.9054812.
    40. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2020, pp. LBW087:1–LBW087:7. doi: 10.1145/3334480.3383017.
    41. L. Merino, M. Schwarzl, M. Kraus, M. Sedlmair, D. Schmalstieg, and D. Weiskopf, “Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019),” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020. doi: 10.1109/ISMAR50242.2020.00069.
    42. D. Okanovic et al., “Can a Chatbot Support Software Engineers with Load Testing? Approach and Experiences,” in Proceedings of the ACM/SPEC International Conference on Performance Engineering (ICPE), 2020, pp. 120–129. doi: 10.1145/3358960.3375792.
    43. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 50:1-50:5. doi: 10.1145/3379156.3391829.
    44. N. Patkar, L. Merino, and O. Nierstrasz, “Towards Requirements Engineering with Immersive Augmented Reality,” in Conference Companion of the 4th International Conference on Art, Science, and Engineering of Programming, Porto, Portugal, 2020, pp. 55–60. doi: 10.1145/3397537.3398472.
    45. N. Rodrigues, C. Schulz, A. Lhuillier, and D. Weiskopf, “Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces,” in Proceedings of the Graphics Interface Conference (GI), 2020, pp. 0:1-0:11. [Online]. Available: https://openreview.net/forum?id=oVHjlwLkl-
    46. B. Roziere et al., “Evolutionary Super-Resolution,” in Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 2020, pp. 151–152. doi: 10.1145/3377929.3389959.
    47. D. Schubring, M. Kraus, C. Stolz, N. Weiler, D. A. Keim, and H. Schupp, “Virtual Reality Potentiates Emotion and Task Effects of Alpha/Beta Brain Oscillations,” Brain Sciences, vol. 10, no. 8, Art. no. 8, 2020, doi: 10.3390/brainsci10080537.
    48. C. Schätzle and M. Butt, “Visual Analytics for Historical Linguistics: Opportunities and Challenges,” Journal of Data Mining and Digital Humanities, 2020, doi: 10.46298/jdmdh.6707.
    49. M. Sondag, W. Meulemans, C. Schulz, K. Verbeek, D. Weiskopf, and B. Speckmann, “Uncertainty Treemaps,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2020, pp. 111–120. doi: 10.1109/PacificVis48177.2020.7614.
    50. J. Spoerhase, S. Storandt, and J. Zink, “Simplification of Polyline Bundles,” in Proceedings of the 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020), Tórshavn, Faroe Islands, 2020, pp. 35:1-35:20. doi: 10.4230/LIPIcs.SWAT.2020.35.
    51. A. Streichert, K. Angerbauer, M. Schwarzl, and M. Sedlmair, “Comparing Input Modalities for Shape Drawing Tasks,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Papers (ETRA-SP), 2020, no. 51, pp. 1–5. doi: 10.1145/3379156.3391830.
    52. D. R. Wahl et al., “Why We Eat What We Eat: Assessing Dispositional and In-the-Moment Eating Motives by Using Ecological Momentary Assessment,” JMIR mHealth and uHealth, vol. 8, no. 1, Art. no. 1, 2020, doi: 10.2196/13191.
    53. D. Weiskopf, “Vis4Vis: Visualization for (Empirical) Visualization Research,” in Foundations of Data Visualization, M. Chen, H. Hauser, P. Rheingans, and G. Scheuermann, Eds. Springer International Publishing, 2020, pp. 209–224. doi: 10.1007/978-3-030-34444-3_10.
    54. O. Wiedemann, V. Hosu, H. Lin, and D. Saupe, “Foveated Video Coding for Real-Time Streaming Applications,” in 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123080.
    55. O. Wiedemann and D. Saupe, “Gaze Data for Quality Assessment of Foveated Video,” Stuttgart, Germany, 2020. doi: 10.1145/3379157.3391656.
    56. J. Zagermann, U. Pfeil, P. von Bauer, D. Fink, and H. Reiterer, “‘It’s in my other hand!’: Studying the Interplay of Interaction Techniques and Multi-Tablet Activities,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 413:1-413:13. doi: 10.1145/3313831.3376540.
    57. X. Zhao, H. Lin, P. Guo, D. Saupe, and H. Liu, “Deep Learning vs. Traditional Algorithms for Saliency Prediction of Distorted Images,” in 2020 IEEE International Conference on Image Processing (ICIP), 2020, pp. 156–160. doi: 10.1109/ICIP40778.2020.9191203.
    58. L. Zhou, M. Rivinius, C. R. Johnson, and D. Weiskopf, “Photographic High-Dynamic-Range Scalar Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 6, Art. no. 6, 2020, doi: 10.1109/TVCG.2020.2970522.
    59. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Papers (ETRA-SP), 2020, pp. 49:1-49:5. doi: 10.1145/3379156.3391835.
  3. 2019

    1. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 141–145. doi: 10.1109/VISUAL.2019.8933620.
    2. P. Balestrucci and M. Ernst, “Visuo-motor adaptation during interaction with a user-adaptive system,” Journal of Vision, vol. 19, p. 187a, Sep. 2019, doi: 10.1167/19.10.187a.
    3. T. M. Benz, B. Riedl, and L. L. Chuang, “Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), 2019, pp. 379–387. doi: 10.1145/3342197.3344515.
    4. H. Booth and C. Schätzle, “The Syntactic Encoding of Information Structure in the History of Icelandic,” in Proceedings of the LFG’19 Conference, 2019, pp. 69–89. [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2019/lfg2019-booth-schaetzle.pdf
    5. V. Bruder, C. Schulz, R. Bauer, S. Frey, D. Weiskopf, and T. Ertl, “Voronoi-Based Foveated Volume Rendering,” in Proceedings of the Eurographics Conference on Visualization - Short Papers (EuroVis), 2019, pp. 67–71. doi: 10.2312/evs.20191172.
    6. V. Bruder, K. Kurzhals, S. Frey, D. Weiskopf, and T. Ertl, “Space-Time Volume Visualization of Gaze and Stimulus,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2019, pp. 12:1-12:9. doi: 10.1145/3314111.3319812.
    7. V. Bruder et al., “Volume-Based Large Dynamic Graph Analysis Supported by Evolution Provenance,” Multimedia Tools and Applications, vol. 78, no. 23, Art. no. 23, 2019, doi: 10.1007/s11042-019-07878-6.
    8. T. Castermans, M. van Garderen, W. Meulemans, M. Nöllenburg, and X. Yuan, “Short Plane Supports for Spatial Hypergraphs,” in Graph Drawing and Network Visualization. GD 2018. Lecture Notes in Computer Science, vol. 11282, T. Biedl and A. Kerren, Eds. Springer International Publishing, 2019, pp. 53–66. doi: 10.1007/978-3-030-04414-5_4.
    9. F. L. Dennig, T. Polk, Z. Lin, T. Schreck, H. Pfister, and M. Behrisch, “FDive: Learning Relevance Models using Pattern-based Similarity Measures,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2019. doi: 10.1109/VAST47406.2019.8986940.
    10. C. Fan et al., “SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743204.
    11. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D Scalar Fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, Art. no. 6, 2019, doi: 10.1109/TVCG.2019.2903945.
    12. V. Hosu, H. Lin, T. Sziranyi, and D. Saupe, “KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment,” CoRR, vol. abs/1910.06180, 2019, doi: 10.1109/TIP.2020.2967829.
    13. V. Hosu, B. Goldlücke, and D. Saupe, “Effective Aesthetics Prediction with Multi-level Spatially Pooled Features,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9367–9375. doi: 10.1109/CVPR.2019.00960.
    14. S. Jaeger et al., “Challenges for Brain Data Analysis in VR Environments,” in 2019 IEEE Pacific Visualization Symposium (PacificVis), 2019, pp. 42–46. doi: 10.1109/PacificVis.2019.00013.
    15. K. Klein, M. Aichem, B. Sommer, S. Erk, Y. Zhang, and F. Schreiber, “TEAMwISE: Synchronised Immersive Environments for Exploration and Analysis of Movement Data,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), 2019, pp. 9:1-9:5. doi: 10.1145/3356422.3356450.
    16. K. Klein et al., “Fly with the flock: immersive solutions for animal movement visualization and analytics,” Journal of the Royal Society Interface, vol. 16, no. 153, Art. no. 153, 2019, doi: 10.1098/rsif.2018.0794.
    17. K. Klein et al., “Visual Analytics for Cheetah Behaviour Analysis,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), 2019, pp. 16:1-16:8. [Online]. Available: http://dblp.uni-trier.de/db/conf/vinci/vinci2019.html#0001JMWHBS19
    18. H. Lin, V. Hosu, and D. Saupe, “KADID-10k: A Large-scale Artificially Distorted IQA Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–3. doi: 10.1109/QoMEX.2019.8743252.
    19. H. Men, H. Lin, V. Hosu, D. Maurer, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Motion Compensated Frame Interpolation,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743221.
    20. M. Miller, X. Zhang, J. Fuchs, and M. Blumenschein, “Evaluating Ordering Strategies of Star Glyph Axes,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 91–95. doi: 10.1109/VISUAL.2019.8933656.
    21. T. Munz, L. L. Chuang, S. Pannasch, and D. Weiskopf, “VisME: Visual microsaccades explorer,” Journal of Eye Movement Research, vol. 12, no. 6, Art. no. 6, 2019, doi: 10.16910/jemr.12.6.5.
    22. C. Müller, M. Braun, and T. Ertl, “Optimised Molecular Graphics on the HoloLens,” in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 97–102. doi: 10.1109/VR.2019.8798111.
    23. J. Müller, J. Zagermann, J. Wieland, U. Pfeil, and H. Reiterer, “A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices,” in Mensch und Computer 2019 – Tagungsband (MuC), 2019, pp. 399–410. doi: 10.1145/3340764.3340773.
    24. R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of Simultaneous Orientation Contrast in Superimposed Textures,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2019, vol. 3: IVAPP, pp. 48–57. doi: 10.5220/0007356800480057.
    25. D. Pomerenke, F. L. Dennig, D. A. Keim, J. Fuchs, and M. Blumenschein, “Slope-Dependent Rendering of Parallel Coordinates to Reduce Density Distortion and Ghost Clusters,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 86–90. doi: 10.1109/VISUAL.2019.8933706.
    26. K. Schatz et al., “Visual Analysis of Structure Formation in Cosmic Evolution,” in Proceedings of the IEEE Scientific Visualization Conference (SciVis), 2019, pp. 33–41. doi: 10.1109/scivis47405.2019.8968855.
    27. C. Schätzle and H. Booth, “DiaHClust: an Iterative Hierarchical Clustering Approach for Identifying Stages in Language Change,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, 2019, pp. 126–135. doi: 10.18653/v1/W19-4716.
    28. C. Schätzle, F. L. Dennig, M. Blumenschein, D. A. Keim, and M. Butt, “Visualizing Linguistic Change as Dimension Interactions,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, 2019, pp. 272–278. doi: 10.18653/v1/W19-4734.
    29. N. Silva et al., “Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2019, pp. 11:1-11:9. doi: 10.1145/3314111.3319919.
    30. B. Sommer et al., “Tiled Stereoscopic 3D Display Wall - Concept, Applications and Evaluation,” Electronic Imaging, vol. 2019, no. 3, Art. no. 3, 2019, doi: 10.2352/ISSN.2470-1173.2019.3.SDA-641.
    31. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-Based Aspect Ratio Selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2865266.
    32. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2019.2934796.
    33. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2864506.
    34. L. Zhou, R. Netzel, D. Weiskopf, and C. R. Johnson, “Spectral Visualization Sharpening,” in Proceedings of the ACM Symposium on Applied Perception (SAP), 2019, pp. 18:1-18:9. doi: 10.1145/3343036.3343133.
  4. 2018

    1. H. Bast, P. Brosi, and S. Storandt, “Efficient Generation of Geographically Accurate Transit Maps,” in Proceedings of the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL), 2018, pp. 13–22. doi: 10.1145/3274895.3274955.
    2. M. Behrisch et al., “Quality Metrics for Information Visualization,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13446.
    3. H. Ben Lahmar, M. Herschel, M. Blumenschein, and D. A. Keim, “Provenance-based Visual Data Exploration with EVLIN,” in Proceedings of the Conference on Extending Database Technology (EDBT), 2018, pp. 686–689. doi: 10.5441/002/edbt.2018.85.
    4. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2018, pp. 36–47. doi: 10.1109/VAST.2018.8802486.
    5. S. S. Borojeni, S. C. J. Boll, W. Heuten, H. H. Bülthoff, and L. L. Chuang, “Feel the Movement: Real Motion Influences Responses to Take-Over Requests in Highly Automated Vehicles,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 246:1-246:13. doi: 10.1145/3173574.3173820.
    6. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-Based Large Dynamic Graph Analytics,” in Proceedings of the International Conference Information Visualisation (IV), 2018, pp. 210–219. doi: 10.1109/iV.2018.00045.
    7. L. L. Chuang and U. Pfeil, “Transparency and Openness Promotion Guidelines for HCI,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2018, pp. SIG04:1-SIG04:4. doi: 10.1145/3170427.3185377.
    8. M. de Ridder, K. Klein, and J. Kim, “A Review and Outlook on Visual Analytics for Uncertainties in Functional Magnetic Resonance Imaging,” Brain Informatics, vol. 5, no. 2, Art. no. 2, 2018, doi: 10.1186/s40708-018-0083-0.
    9. L. J. Debbeler, M. Gamp, M. Blumenschein, D. A. Keim, and B. Renner, “Polarized But Illusory Beliefs About Tap and Bottled Water: A Product- and Consumer-Oriented Survey and Blind Tasting Experiment,” Science of the Total Environment, vol. 643, pp. 1400–1410, 2018, doi: 10.1016/j.scitotenv.2018.06.190.
    10. T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 419:1–419:12. doi: 10.1145/3173574.3173993.
    11. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13438.
    12. F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), 2018, pp. 87–91. doi: 10.1109/LDAV.2018.8739215.
    13. M. Ghaffar et al., “3D Modelling and Visualisation of Heterogeneous Cell Membranes in Blender,” in Proceedings of the 11th International Symposium on Visual Information Communication and Interaction, Växjö, Sweden, 2018, pp. 64–71. doi: 10.1145/3231622.3231639.
    14. C. Glatz and L. L. Chuang, “The Time Course of Auditory Looming Cues in Redirecting Visuo-Spatial Attention,” Scientific Reports, vol. 9, pp. 743:1-743:10, 2018, doi: 10.1038/s41598-018-36033-8.
    15. C. Glatz, S. S. Krupenia, H. H. Bülthoff, and L. L. Chuang, “Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 472:1-472:13. doi: 10.1145/3173574.3174046.
    16. J. Görtler, R. Kehlbeck, and O. Deussen, “A Visual Exploration of Gaussian Processes,” 2018. doi: 10.23915/distill.00017.
    17. J. Görtler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble Treemaps for Uncertainty Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2743959.
    18. A. Hautli-Janisz, C. Rohrdantz, C. Schätzle, A. Stoffel, M. Butt, and D. A. Keim, “Visual Analytics in Diachronic Linguistic Investigations,” Linguistic Visualizations, 2018.
    19. V. Hosu, H. Lin, and D. Saupe, “Expertise Screening in Crowdsourcing Image Quality,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 276–281. doi: 10.1109/QoMEX.2018.8463427.
    20. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “Employing Tangible Visualisations in Augmented Reality with Mobile Devices,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), 2018, pp. 1–4. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1iooenfo4fofm8
    21. K. Hänsel, R. Poguntke, H. Haddadi, A. Alomainy, and A. Schmidt, “What to Put on the User: Sensing Technologies for Studies and Physiology Aware Systems,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 145:1-145:14. doi: 10.1145/3173574.3173719.
    22. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime Quality Assessment of Iris Biometrics Under Visible Light,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018, pp. 443–452. doi: 10.1109/CVPRW.2018.00085.
    23. J. Karolus, H. Schuff, T. Kosch, P. W. Wozniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Proceedings of the Designing Interactive Systems Conference (DIS), 2018, pp. 651–655. doi: 10.1145/3196709.3196803.
    24. M. Klapperstueck et al., “Contextuwall: Multi-site Collaboration Using Display Walls,” Journal of Visual Languages & Computing, vol. 46, pp. 35–42, 2018, doi: 10.1016/j.jvlc.2017.10.002.
    25. P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 345:1–345:9. doi: 10.1145/3173574.3173919.
    26. T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly,” Proceedings of the ACM on Human-Computer Interaction (ACMHCI), vol. 2, pp. 11:1-11:20, 2018, doi: 10.1145/3229093.
    27. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural Networks for the Classification of Building Use from Street-view Imagery,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV–2, pp. 177–184, 2018, doi: 10.5194/isprs-annals-IV-2-177-2018.
    28. N. Marniok and B. Goldluecke, “Real-time Variational Range Image Fusion and Visualization for Large-scale Scenes using GPU Hash Tables,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2018, pp. 912–920. doi: 10.1109/WACV.2018.00105.
    29. K. Marriott et al., Immersive Analytics, vol. 11190. Springer International Publishing, 2018. doi: 10.1007/978-3-030-01388-2.
    30. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018, pp. 106:1-106:13. [Online]. Available: http://bmvc2018.org/contents/papers/0377.pdf
    31. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018, pp. 86:1-86:13. [Online]. Available: https://arxiv.org/abs/1806.00800
    32. D. Maurer, N. Marniok, B. Goldluecke, and A. Bruhn, “Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimation,” in Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11212, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Springer International Publishing, 2018, pp. 575–592. doi: 10.1007/978-3-030-01237-3_35.
    33. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision, vol. 126, no. 12, Art. no. 12, 2018, doi: 10.1007/s11263-018-1079-1.
    34. H. Men, H. Lin, and D. Saupe, “Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 1–3. doi: 10.1109/QoMEX.2018.8463426.
    35. A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of Vection Latencies in the Full-Body Illusion,” PLoS ONE, vol. 13, no. 12, Art. no. 12, 2018, doi: 10.1371/journal.pone.0209189.
    36. S. Oppold and M. Herschel, “Provenance for Entity Resolution,” in Provenance and Annotation of Data and Processes. IPAW 2018. Lecture Notes in Computer Science, vol. 11017, K. Belhajjame, A. Gehani, and P. Alper, Eds. Springer International Publishing, 2018, pp. 226–230. doi: 10.1007/978-3-319-98379-0_25.
    37. N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744018.
    38. N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale Scanpath Visualization and Filtering,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), 2018, pp. 2:1-2:5. doi: 10.1145/3205929.3205931.
    39. D. Sacha et al., “SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744805.
    40. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory Task Irrelevance: A Basis for Inattentional Deafness,” Human Factors, vol. 60, no. 3, Art. no. 3, 2018, doi: 10.1177/0018720818760919.
    41. C. Schulz, K. Schatz, M. Krone, M. Braun, T. Ertl, and D. Weiskopf, “Uncertainty Visualization for Secondary Structures of Proteins,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2018, pp. 96–105. doi: 10.1109/PacificVis.2018.00020.
    42. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2018, pp. 87–95. doi: 10.1109/VISSOFT.2018.00017.
    43. V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an Uncanny Valley of Virtual Animals? A Quantitative and Qualitative Investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018, doi: 10.1016/j.ijhcs.2017.11.003.
    44. C. Schätzle, “Dative Subjects: Historical Change Visualized,” PhD dissertation, Universität Konstanz, Konstanz, 2018. [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1d917i4avuz1a2
    45. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an Interpretable Latent Space: An Intuitive Comparison of Autoencoders with Variational Autoencoders,” 2018. [Online]. Available: https://thilospinner.com/towards-an-interpretable-latent-space/
    46. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), 2018, pp. 119–123. doi: 10.2312/eurovisshort.20181089.
    47. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744138.
    48. D. Varga, D. Saupe, and T. Szirányi, “DeepRN: A Content Preserving Deep Architecture for Blind Image Quality Assessment,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2018, pp. 1–6. doi: 10.1109/ICME.2018.8486528.
    49. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, Art. no. 5, 2018, doi: 10.1109/TVCG.2017.2701829.
    50. V. Yoghourdjian, T. Dwyer, K. Klein, K. Marriott, and M. Wybrow, “Graph Thumbnails: Identifying and Comparing Multiple Graphs at a Glance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 12, Art. no. 12, 2018, doi: 10.1109/TVCG.2018.2790961.
    51. J. Zagermann, U. Pfeil, and H. Reiterer, “Studying Eye Movements as a Basis for Measuring Cognitive Load,” Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), pp. LBW095:1-LBW095:6, 2018, doi: 10.1145/3170427.3188628.
    52. Y. Zhu et al., “Genome-scale Metabolic Modeling of Responses to Polymyxins in Pseudomonas Aeruginosa,” GigaScience, vol. 7, no. 4, Art. no. 4, 2018, doi: 10.1093/gigascience/giy021.
  5. 2017

    1. Y. Abdelrahman, P. Knierim, P. W. Wozniak, N. Henze, and A. Schmidt, “See Through the Fire: Evaluating the Augmentation of Visual Perception of Firefighters Using Depth and Thermal Cameras,” in Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and Symposium on Wearable Computers (UbiComp/ISWC), 2017, pp. 693–696. doi: 10.1145/3123024.3129269.
    2. J. Allsop, R. Gray, H. Bülthoff, and L. Chuang, “Eye Movement Planning on Single-Sensor-Single-Indicator Displays is Vulnerable to User Anxiety and Cognitive Load,” Journal of Eye Movement Research, vol. 10, no. 5, Art. no. 5, 2017, doi: 10.16910/jemr.10.5.8.
    3. M. A. Baazizi, H. Ben Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Proceedings of the Conference on Extending Database Technology (EDBT), 2017, pp. 222–233. doi: 10.5441/002/edbt.2017.21.
    4. D. Bahrdt et al., “Growing Balls in ℝ^d,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), 2017, pp. 247–258. doi: 10.1137/1.9781611974768.20.
    5. A. Barth, B. Harrach, N. Hyvönen, and L. Mustonen, “Detecting Stochastic Inclusions in Electrical Impedance Tomography,” Inverse Problems, vol. 33, no. 11, Art. no. 11, 2017, doi: 10.1088/1361-6420/aa8f5c.
    6. M. Behrisch et al., “Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598467.
    7. H. Ben Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), 2017, pp. 1–7.
    8. H. Booth, C. Schätzle, K. Börjars, and M. Butt, “Dative Subjects and the Rise of Positional Licensing in Icelandic,” in Proceedings of the LFG’17 Conference, 2017, pp. 104–124. [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2017/lfg2017-bsbb.pdf
    9. V. Bruder, S. Frey, and T. Ertl, “Prediction-Based Load Balancing and Resolution Tuning for Interactive Volume Raycasting,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.09.001.
    10. M. Burch, M. Hlawatsch, and D. Weiskopf, “Visualizing a Sequence of a Thousand Graphs (or Even More),” Computer Graphics Forum, vol. 36, no. 3, Art. no. 3, 2017, doi: 10.1111/cgf.13185.
    11. L. L. Chuang, C. Glatz, and S. S. Krupenia, “Using EEG to Understand why Behavior to Auditory In-vehicle Notifications Differs Across Test Environments,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), 2017, pp. 123–133. doi: 10.1145/3122986.3123017.
    12. M. Correll and J. Heer, “Surprise! Bayesian Weighting for De-Biasing Thematic Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg23.html#CorrellH17
    13. M. de Ridder, K. Klein, and J. Kim, “TemporalTracks: Visual Analytics for Exploration of 4D fMRI Time-series Coactivation,” in Proceedings of the Computer Graphics International Conference (CGI), 2017, pp. 13:1-13:6. doi: 10.1145/3095140.3095153.
    14. K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural Correlates of Decision Making on Whole Body Yaw Rotation: an fNIRS Study,” Neuroscience Letters, vol. 654, pp. 56–62, 2017, doi: 10.1016/j.neulet.2017.04.053.
    15. O. Deussen, M. Spicker, and Q. Zheng, “Weighted Linde-Buzo-Gray Stippling,” ACM Transactions on Graphics, vol. 36, no. 6, Art. no. 6, 2017, doi: 10.1145/3130800.3130819.
    16. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), 2017, pp. 1–13. [Online]. Available: https://dl.acm.org/doi/abs/10.5555/3183865.3183883
    17. T. Dingler, A. Schmidt, and T. Machulla, “Building Cognition-Aware Systems: A Mobile Toolkit for Extracting Time-of-Day Fluctuations of Cognitive Performance,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1, no. 3, Art. no. 3, 2017, doi: 10.1145/3132025.
    18. S. Egger-Lampl et al., “Crowdsourcing Quality of Experience Experiments,” in Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions, LNCS vol. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 154–190. doi: 10.1007/978-3-319-66435-4_7.
    19. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” Informatics, vol. 4, no. 3, p. 27, 2017, doi: 10.3390/informatics4030027.
    20. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2599042.
    21. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum, vol. 36, no. 8, Art. no. 8, 2017, doi: 10.1111/cgf.13070.
    22. D. Fritsch, “Photogrammetrische Auswertung digitaler Bilder – Neue Methoden der Kamerakalibration, dichten Bildzuordnung und Interpretation von Punktwolken,” in Photogrammetrie und Fernerkundung, C. Heipke, Ed. Springer Spektrum, 2017, pp. 157–196. doi: 10.1007/978-3-662-47094-7_41.
    23. D. Fritsch and M. Klein, “3D and 4D Modeling for AR and VR App Developments,” in Proceedings of the International Conference on Virtual System & Multimedia (VSMM), 2017, pp. 1–8. doi: 10.1109/VSMM.2017.8346270.
    24. S. Funke, T. Mendel, A. Miller, S. Storandt, and M. Wiebe, “Map Simplification with Topology Constraints: Exactly and in Practice,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), 2017, pp. 185–196. doi: 10.1137/1.9781611974768.15.
    25. S. Funke, N. Schnelle, and S. Storandt, “URAN: A Unified Data Structure for Rendering and Navigation,” in Web and Wireless Geographical Information Systems. W2GIS 2017. Lecture Notes in Computer Science, vol. 10181, D. Brosset, C. Claramunt, X. Li, and T. Wang, Eds. Springer International Publishing, 2017, pp. 66–82. doi: 10.1007/978-3-319-55998-8_5.
    26. U. Gadiraju et al., “Crowdsourcing Versus the Laboratory: Towards Human-centered Experiments Using the Crowd,” in Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions, LNCS vol. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 6–26. doi: 10.1007/978-3-319-66435-4_2.
    27. P. Gralka, C. Schulz, G. Reina, D. Weiskopf, and T. Ertl, “Visual Exploration of Memory Traces and Call Stacks,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2017, pp. 54–63. doi: 10.1109/VISSOFT.2017.15.
    28. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” 2017. doi: 10.2312/eurp.20171166.
    29. M. Herschel, R. Diestelkämper, and H. Ben Lahmar, “A Survey on Provenance – What for? What form? What from?,” The VLDB Journal, vol. 26, pp. 881–906, 2017, doi: 10.1007/s00778-017-0486-1.
    30. V. Hosu et al., “The Konstanz natural video database (KoNViD-1k),” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2017, pp. 1–6. doi: 10.1109/QoMEX.2017.7965673.
    31. J. Iseringhausen et al., “4D Imaging through Spray-on Optics,” ACM Transactions on Graphics, vol. 36, no. 4, Art. no. 4, 2017, doi: 10.1145/3072959.3073589.
    32. O. Johannsen et al., “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Workshops, 2017, pp. 1795–1812. doi: 10.1109/CVPRW.2017.226.
    33. D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2017, vol. 3, pp. 164–175. doi: 10.5220/0006265101640175.
    34. D. Jäckle, M. Hund, M. Behrisch, D. A. Keim, and T. Schreck, “Pattern Trails: Visual Analysis of Pattern Transitions in Subspaces,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2017, pp. 1–12. doi: 10.1109/VAST.2017.8585613.
    35. J. Karolus, P. W. Wozniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 2998–3010. doi: 10.1145/3025453.3025601.
    36. P. Knierim et al., “Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2017, pp. 433–436. doi: 10.1145/3027063.3050426.
    37. J. Kratt, F. Eisenkeil, M. Spicker, Y. Wang, D. Weiskopf, and O. Deussen, “Structure-aware Stylization of Mountainous Terrains,” in Vision, Modeling & Visualization, M. Hullin, R. Klein, T. Schultz, and A. Yao, Eds. The Eurographics Association, 2017. doi: 10.2312/vmv.20171255.
    38. M. Krone et al., “Molecular Surface Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598824.
    39. K. Kurzhals, M. Hlawatsch, C. Seeger, and D. Weiskopf, “Visual Analytics for Mobile Eye Tracking,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598695.
    40. K. Kurzhals, E. Çetinkaya, Y. Hu, W. Wang, and D. Weiskopf, “Close to the Action: Eye-tracking Evaluation of Speaker-following Subtitles,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 6559–6568. doi: 10.1145/3025453.3025772.
    41. K. Kurzhals, M. Stoll, A. Bruhn, and D. Weiskopf, “FlowBrush: Optical Flow Art,” in Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (EXPRESSIVE, co-located with SIGGRAPH), 2017, pp. 1:1-1:9. doi: 10.1145/3092912.3092914.
    42. H. V. Le, V. Schwind, P. Göttlich, and N. Henze, “PredicTouch: A System to Reduce Touchscreen Latency using Neural Networks and Inertial Measurement Units,” in Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS), 2017, pp. 230–239. doi: 10.1145/3132272.3134138.
    43. T. Machulla, L. Chuang, F. Kiss, M. O. Ernst, and A. Schmidt, “Sensory Amplification Through Crossmodal Stimulation,” 2017.
    44. N. Marniok, O. Johannsen, and B. Goldluecke, “An Efficient Octree Design for Local Variational Range Image Fusion,” in Pattern Recognition. GCPR 2017. Lecture Notes in Computer Science, vol. 10496, V. Roth and T. Vetter, Eds. Springer International Publishing, 2017, pp. 401–412. doi: 10.1007/978-3-319-66709-6_32.
    45. D. Maurer, M. Stoll, S. Volz, P. Gairing, and A. Bruhn, “A Comparison of Isotropic and Anisotropic Second Order Regularisers for Optical Flow,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol. 10302, F. Lauze, Y. Dong, and A. B. Dahl, Eds. Springer International Publishing, 2017, pp. 537–549. doi: 10.1007/978-3-319-58771-4_43.
    46. D. Maurer, A. Bruhn, and M. Stoll, “Order-adaptive and Illumination-aware Variational Optical Flow Refinement,” in Proceedings of the British Machine Vision Conference (BMVC), 2017, pp. 150:1-150:13. doi: 10.5244/C.31.150.
    47. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive Regularisation for Variational Optical Flow: Global, Local and in Between,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, 2017, vol. 10302, pp. 550–562. doi: 10.1007/978-3-319-58771-4_44.
    48. L. Merino et al., “On the Impact of the Medium in the Effectiveness of 3D Software Visualizations,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2017, pp. 11–21. doi: 10.1109/VISSOFT.2017.17.
    49. A. Nesti, K. de Winkel, and H. Bülthoff, “Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation,” PLoS ONE, vol. 12, no. 1, Art. no. 1, 2017, doi: 10.1371/journal.pone.0170497.
    50. R. Netzel, M. Hlawatsch, M. Burch, S. Balakrishnan, H. Schmauder, and D. Weiskopf, “An Evaluation of Visual Search Support in Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598898.
    51. R. Netzel, J. Vuong, U. Engelke, S. I. O’Donoghue, D. Weiskopf, and J. Heinrich, “Comparative Eye-tracking Evaluation of Scatterplots and Parallel Coordinates,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.11.001.
    52. H. T. Nim et al., “Design Considerations for Immersive Analytics of Bird Movements Obtained by Miniaturised GPS Sensors,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2017. doi: 10.2312/vcbm.20171234.
    53. N. Rodrigues et al., “Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), 2017, pp. 37–44. doi: 10.1145/3105971.3105982.
    54. N. Rodrigues, M. Burch, L. Di Silvestro, and D. Weiskopf, “A Visual Analytics Approach for Word Relevances in Multiple Texts,” in Proceedings of the International Conference on Information Visualisation (IV), 2017, pp. 1–7. doi: 10.1109/iV.2017.62.
    55. D. Sacha et al., “Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598495.
    56. D. Sacha et al., “What You See Is What You Can Change: Human-Centered Machine Learning by Interactive Visualization,” Neurocomputing, vol. 268, pp. 164–175, 2017, doi: 10.1016/j.neucom.2017.01.105.
    57. H. Sattar, A. Bulling, and M. Fritz, “Predicting the Category and Attributes of Visual Search Targets Using Deep Gaze Pooling,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), 2017, pp. 2740–2748. doi: 10.1109/ICCVW.2017.322.
    58. C. Schulz, A. Nocaj, J. Goertler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598919.
    59. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications. ETVIS 2015, M. Burch, L. Chuang, B. Fisher, A. Schmidt, and D. Weiskopf, Eds. Springer International Publishing, 2017, pp. 199–216. doi: 10.1007/978-3-319-47024-5_12.
    60. C. Schulz, N. Rodrigues, K. Damarla, A. Henicke, and D. Weiskopf, “Visual Exploration of Mainframe Workloads,” in Proceedings of the SIGGRAPH Asia Symposium on Visualization, 2017, pp. 4:1-4:7. doi: 10.1145/3139295.3139312.
    61. V. Schwind, K. Wolf, and N. Henze, “FaceMaker – A Procedural Face Generator to Foster Character Design Research,” in Game Dynamics: Best Practices in Procedural and Dynamic Game Content Generation, O. Korn and N. Lee, Eds. Springer International Publishing, 2017, pp. 95–113. doi: 10.1007/978-3-319-53088-8_6.
    62. V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze, “‘These are not my hands!’: Effect of Gender on the Perception of Avatar Hands in Virtual Reality,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 1577–1582. doi: 10.1145/3025453.3025602.
    63. V. Schwind, P. Knierim, L. L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY), 2017, pp. 507–515. doi: 10.1145/3116595.3116596.
    64. C. Schätzle, “Genitiv als Stilmittel in der Novelle,” Scalable Reading. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), vol. 47, pp. 125–140, 2017, doi: 10.1007/s41244-017-0043-9.
    65. C. Schätzle, M. Hund, F. L. Dennig, M. Butt, and D. A. Keim, “HistoBankVis: Detecting Language Change via Data Visualization,” in Proceedings of the NoDaLiDa 2017 Workshop Processing Historical Language, 2017, pp. 32–39. [Online]. Available: https://www.aclweb.org/anthology/W17-0507
    66. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of the Symposium on Non-Photorealistic Animation and Rendering (NPAR), 2017, pp. 8:1–8:10. doi: 10.1145/3092919.3092923.
    67. K. Srulijes et al., “Visualization of Eye-Head Coordination While Walking in Healthy Subjects and Patients with Neurodegenerative Diseases,” Poster (reviewed) presented at the Symposium of the International Society of Posture and Gait Research (ISPGR), 2017.
    68. M. Stein et al., “Bring it to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, pp. 13–22, 2017, doi: 10.1109/TVCG.2017.2745181.
    69. M. Stoll, D. Maurer, S. Volz, and A. Bruhn, “Illumination-aware Large Displacement Optical Flow,” in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, vol. 10746, M. Pelillo and E. R. Hancock, Eds. Springer International Publishing, 2017, pp. 139–154. doi: 10.1007/978-3-319-78199-0_10.
    70. M. Stoll, D. Maurer, and A. Bruhn, “Variational Large Displacement Optical Flow Without Feature Matches,” in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, 2017, vol. 10746, pp. 79–92. doi: 10.1007/978-3-319-78199-0_6.
    71. G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” in Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2017, pp. 11–20. doi: 10.2312/pgv.20171089.
    72. M. Tonsen, J. Steil, Y. Sugano, and A. Bulling, “InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1, no. 3, pp. 106:1–106:21, 2017, doi: 10.1145/3130971.
    73. P. Tutzauer and N. Haala, “Processing of Crawled Urban Imagery for Building Use Classification,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-1/W1, pp. 143–149, 2017, doi: 10.5194/isprs-archives-XLII-1-W1-143-2017.
    74. P. Tutzauer, S. Becker, and N. Haala, “Perceptual Rules for Building Enhancements in 3D Virtual Worlds,” i-com, vol. 16, no. 3, Art. no. 3, 2017, doi: 10.1515/icom-2017-0022.
    75. M. van Garderen, B. Pampel, A. Nocaj, and U. Brandes, “Minimum-Displacement Overlap Removal for Geo-referenced Data Visualization,” Computer Graphics Forum, vol. 36, no. 3, Art. no. 3, 2017, doi: 10.1111/cgf.13199.
    76. J. Zagermann, U. Pfeil, C. Acevedo, and H. Reiterer, “Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-device Workspace,” in Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM), 2017, pp. 249–259. doi: 10.1145/3152832.3152855.
    77. J. Zagermann, U. Pfeil, D. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-based Input Modalities on Spatial Memory,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 1899–1910. doi: 10.1145/3025453.3026001.
    78. X. Zhang, Y. Sugano, M. Fritz, and A. Bulling, “MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 1, Art. no. 1, 2017, doi: 10.1109/TPAMI.2017.2778103.
    79. X. Zhang, Y. Sugano, and A. Bulling, “Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery,” in Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), 2017, pp. 193–203. doi: 10.1145/3126594.3126614.
  6. 2016

    1. M. Aupetit and M. Sedlmair, “SepMe: 2002 New Visual Separation Measures,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2016, pp. 1–8. doi: 10.1109/PACIFICVIS.2016.7465244.
    2. A. Barth and A. Stein, “Approximation and simulation of infinite-dimensional Lévy processes,” Stochastics and Partial Differential Equations: Analysis and Computations, vol. 6, no. 2, Art. no. 2, 2016, doi: 10.1007/s40072-017-0109-2.
    3. A. Barth, R. Bürger, I. Kröker, and C. Rohde, “Computational Uncertainty Quantification for a Clarifier-thickener Model with Several Random Perturbations: A Hybrid Stochastic Galerkin Approach,” Computers & Chemical Engineering, vol. 89, pp. 11–26, 2016, doi: 10.1016/j.compchemeng.2016.02.016.
    4. A. Barth and F. G. Fuchs, “Uncertainty Quantification for Hyperbolic Conservation Laws with Flux Coefficients Given by Spatiotemporal Random Fields,” SIAM Journal on Scientific Computing, vol. 38, no. 4, Art. no. 4, 2016, doi: 10.1137/15M1027723.
    5. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual Analysis and Coding of Data-rich User Behavior,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2016, pp. 141–150. doi: 10.1109/VAST.2016.7883520.
    6. V. Bruder, S. Frey, and T. Ertl, “Real-Time Performance Prediction and Tuning for Interactive Volume Raycasting,” in Proceedings of the SIGGRAPH Asia Symposium on Visualization, 2016, pp. 1–8. doi: 10.1145/3002151.3002156.
    7. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2016, vol. 2: IVAPP. doi: 10.5220/0005679601950202.
    8. S. Butscher and H. Reiterer, “Applying Guidelines for the Design of Distortions on Focus+Context Interfaces,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), 2016, pp. 244–247. doi: 10.1145/2909132.2909284.
    9. S. Cheng and K. Mueller, “The Data Context Map: Fusing Data and Attributes into a Unified Display,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg22.html#ChengM16
    10. M. Correll and J. Heer, “Black Hat Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, [Online]. Available: http://idl.cs.washington.edu/files/2017-BlackHatVis-DECISIVe.pdf
    11. T. Dingler, R. Rzayev, V. Schwind, and N. Henze, “RSVP on the go - Implicit Reading Support on Smart Watches Through Eye Tracking,” in Proceedings of the ACM International Symposium on Wearable Computers (ISWC), 2016, pp. 116–119. doi: 10.1145/2971763.2971794.
    12. N. Flad, J. C. Ditz, A. Schmidt, H. H. Bülthoff, and L. L. Chuang, “Data-Driven Approaches to Unrestricted Gaze-Tracking Benefit from Saccade Filtering,” in Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS), 2016, pp. 1–5. doi: 10.1109/ETVIS.2016.7851156.