Here we provide a complete list of SFB-TRR 161 publications.

All Publications

  1. J. Bernard, M. Hutter, M. Zeppelzauer, M. Sedlmair, and T. Munzner, “SepEx: Visual Analysis of Class Separation Measures,” in Proceedings of the International Workshop on Visual Analytics (EuroVA), 2020, pp. 1–5, doi: 10.2312/eurova.20201079.
  2. F. Bishop, J. Zagermann, U. Pfeil, G. Sanderson, H. Reiterer, and U. Hinrichs, “Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2020, doi: 10.1109/TVCG.2019.2934804.
  3. M. Blumenschein, L. J. Debbeler, N. C. Lages, B. Renner, D. A. Keim, and M. El-Assady, “v-plots: Designing Hybrid Charts for the Comparative Analysis of Data Distributions,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14002.
  4. M. Blumenschein, X. Zhang, D. Pomerenke, D. A. Keim, and J. Fuchs, “Evaluating Reordering Strategies for Cluster Identification in Parallel Coordinates,” Computer Graphics Forum, vol. 39, no. 3, Art. no. 3, 2020, doi: 10.1111/cgf.14000.
  5. M. Borowski, J. Zagermann, C. N. Klokmose, H. Reiterer, and R. Rädle, “Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments,” in Proceedings of the ACM Technical Symposium on Computer Science Education (SIGCSE), 2020, pp. 468–474, doi: 10.1145/3328778.3366887.
  6. V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, pp. 2848–2862, 2020, doi: 10.1109/TVCG.2019.2898435.
  7. F. Draxler, A. Labrie, A. Schmidt, and L. L. Chuang, “Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 410:1–410:12, doi: 10.1145/3313831.3376537.
  8. F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2020, doi: 10.1109/TVCG.2020.3030445.
  9. F. Frieß, C. Müller, and T. Ertl, “Real-Time High-Resolution Visualisation,” in Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV), 2020, pp. 127–135, doi: 10.2312/vmv.20201195.
  10. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces, 2020, pp. 9:1–9:9, doi: 10.1145/3399715.3399814.
  11. T. Kosch, A. Schmidt, S. Thanheiser, and L. L. Chuang, “One Does Not Simply RSVP: Mental Workload to Select Speed Reading Parameters Using Electroencephalography,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 637:1–637:13, doi: 10.1145/3313831.3376766.
  12. M. Kraus et al., “Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 546:1–546:14, doi: 10.1145/3313831.3376675.
  13. K. Kurzhals, F. Göbel, K. Angerbauer, M. Sedlmair, and M. Raubal, “A View on the Viewer: Gaze-Adaptive Captions for Videos,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 139:1–139:12, doi: 10.1145/3313831.3376266.
  14. K. Kurzhals et al., “Visual Analytics and Annotation of Pervasive Eye Tracking Video,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 16:1–16:9, doi: 10.1145/3379155.3391326.
  15. H. Lin, M. Jenadeleh, G. Chen, U. Reips, R. Hamzaoui, and D. Saupe, “Subjective Assessment of Global Picture-Wise Just Noticeable Difference,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2020, pp. 1–6, doi: 10.1109/ICMEW46912.2020.9106058.
  16. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6, doi: 10.1109/QoMEX48832.2020.9123096.
  17. L. Merino et al., “Toward Agile Situated Visualization: An Exploratory User Study,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2020, pp. LBW087:1–LBW087:7, doi: 10.1145/3334480.3383017.
  18. D. Okanovic et al., “Can a Chatbot Support Software Engineers with Load Testing? Approach and Experiences,” in Proceedings of the ACM/SPEC International Conference on Performance Engineering (ICPE), 2020, pp. 120–129, doi: 10.1145/3358960.3375792.
  19. N. Pathmanathan et al., “Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), Stuttgart, Germany, 2020, pp. 50:1–50:5, doi: 10.1145/3379156.3391829.
  20. N. Rodrigues, C. Schulz, A. Lhuillier, and D. Weiskopf, “Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces,” in Proceedings of the Graphics Interface Conference (GI) (forthcoming), 2020, pp. 0:1–0:11, [Online]. Available: https://openreview.net/forum?id=oVHjlwLkl-.
  21. M. Sondag, W. Meulemans, C. Schulz, K. Verbeek, D. Weiskopf, and B. Speckmann, “Uncertainty Treemaps,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2020, pp. 111–120, doi: 10.1109/PacificVis48177.2020.7614.
  22. A. Streichert, K. Angerbauer, M. Schwarzl, and M. Sedlmair, “Comparing Input Modalities for Shape Drawing Tasks,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Papers (ETRA-SP), 2020, no. 51, pp. 1–5, doi: 10.1145/3379156.3391830.
  23. D. R. Wahl et al., “Why We Eat What We Eat: Assessing Dispositional and In-the-Moment Eating Motives by Using Ecological Momentary Assessment,” JMIR mHealth and uHealth, vol. 8, no. 1, Art. no. 1, 2020, doi: 10.2196/13191.
  24. J. Zagermann, U. Pfeil, P. von Bauer, D. Fink, and H. Reiterer, “‘It’s in my other hand!’: Studying the Interplay of Interaction Techniques and Multi-Tablet Activities,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 413:1–413:13, doi: 10.1145/3313831.3376540.
  25. L. Zhou, M. Rivinius, C. R. Johnson, and D. Weiskopf, “Photographic High-Dynamic-Range Scalar Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 6, Art. no. 6, 2020, doi: 10.1109/TVCG.2020.2970522.
  26. S. Öney et al., “Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality,” in Proceedings of the Symposium on Eye Tracking Research & Applications-Short Papers (ETRA-SP), 2020, pp. 49:1–49:5, doi: 10.1145/3379156.3391835.
  27. M. Aupetit, M. Sedlmair, M. M. Abbas, A. Baggag, and H. Bensmail, “Toward Perception-based Evaluation of Clustering Techniques for Visual Analytics,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 141–145, doi: 10.1109/VISUAL.2019.8933620.
  28. T. M. Benz, B. Riedl, and L. L. Chuang, “Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), 2019, pp. 379–387, doi: 10.1145/3342197.3344515.
  29. H. Booth and C. Schätzle, “The Syntactic Encoding of Information Structure in the History of Icelandic,” in Proceedings of the LFG’19 Conference, 2019, pp. 69–89, [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2019/lfg2019-booth-schaetzle.pdf.
  30. V. Bruder, C. Schulz, R. Bauer, S. Frey, D. Weiskopf, and T. Ertl, “Voronoi-Based Foveated Volume Rendering,” in Proceedings of the Eurographics Conference on Visualization - Short Papers (EuroVis), 2019, pp. 67–71, doi: 10.2312/evs.20191172.
  31. V. Bruder, K. Kurzhals, S. Frey, D. Weiskopf, and T. Ertl, “Space-Time Volume Visualization of Gaze and Stimulus,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2019, pp. 12:1–12:9, doi: 10.1145/3314111.3319812.
  32. V. Bruder et al., “Volume-Based Large Dynamic Graph Analysis Supported by Evolution Provenance,” Multimedia Tools and Applications, vol. 78, no. 23, Art. no. 23, 2019, doi: 10.1007/s11042-019-07878-6.
  33. T. Castermans, M. van Garderen, W. Meulemans, M. Nöllenburg, and X. Yuan, “Short Plane Supports for Spatial Hypergraphs,” in Graph Drawing and Network Visualization. GD 2018. Lecture Notes in Computer Science, vol. 11282, T. Biedl and A. Kerren, Eds. Springer International Publishing, 2019, pp. 53–66.
  34. F. L. Dennig, T. Polk, Z. Lin, T. Schreck, H. Pfister, and M. Behrisch, “FDive: Learning Relevance Models using Pattern-based Similarity Measures,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2019, doi: 10.1109/VAST47406.2019.8986940.
  35. C. Fan et al., “SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6, doi: 10.1109/QoMEX.2019.8743204.
  36. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D Scalar Fields,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, Art. no. 6, 2019, doi: 10.1109/TVCG.2019.2903945.
  37. V. Hosu, B. Goldlücke, and D. Saupe, “Effective Aesthetics Prediction with Multi-level Spatially Pooled Features,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9367–9375, doi: 10.1109/CVPR.2019.00960.
  38. K. Klein, M. Aichem, B. Sommer, S. Erk, Y. Zhang, and F. Schreiber, “TEAMwISE: Synchronised Immersive Environments for Exploration and Analysis of Movement Data,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), 2019, pp. 9:1–9:5, doi: 10.1145/3356422.3356450.
  39. H. Lin, V. Hosu, and D. Saupe, “KADID-10k: A Large-scale Artificially Distorted IQA Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–3, doi: 10.1109/QoMEX.2019.8743252.
  40. H. Men, H. Lin, V. Hosu, D. Maurer, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Motion Compensated Frame Interpolation,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6, doi: 10.1109/QoMEX.2019.8743221.
  41. M. Miller, X. Zhang, J. Fuchs, and M. Blumenschein, “Evaluating Ordering Strategies of Star Glyph Axes,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 91–95, doi: 10.1109/VISUAL.2019.8933656.
  42. J. Müller, J. Zagermann, J. Wieland, U. Pfeil, and H. Reiterer, “A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices,” in Mensch und Computer 2019 – Tagungsband (MuC), 2019, pp. 399–410, doi: 10.1145/3340764.3340773.
  43. R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of Simultaneous Orientation Contrast in Superimposed Textures,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2019, vol. 3: IVAPP, pp. 48–57, doi: 10.5220/0007356800480057.
  44. D. Pomerenke, F. L. Dennig, D. A. Keim, J. Fuchs, and M. Blumenschein, “Slope-Dependent Rendering of Parallel Coordinates to Reduce Density Distortion and Ghost Clusters,” in Proceedings of the IEEE Visualization Conference (VIS), 2019, pp. 86–90, doi: 10.1109/VISUAL.2019.8933706.
  45. K. Schatz et al., “Visual Analysis of Structure Formation in Cosmic Evolution,” in Proceedings of the IEEE Scientific Visualization Conference (SciVis), 2019, pp. 33–41, doi: 10.1109/scivis47405.2019.8968855.
  46. C. Schätzle and H. Booth, “DiaHClust: an Iterative Hierarchical Clustering Approach for Identifying Stages in Language Change,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, 2019, pp. 126–135, doi: 10.18653/v1/W19-4716.
  47. C. Schätzle, F. L. Dennig, M. Blumenschein, D. A. Keim, and M. Butt, “Visualizing Linguistic Change as Dimension Interactions,” in Proceedings of the International Workshop on Computational Approaches to Historical Language Change, 2019, pp. 272–278, doi: 10.18653/v1/W19-4734.
  48. N. Silva et al., “Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2019, pp. 11:1–11:9, doi: 10.1145/3314111.3319919.
  49. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-Based Aspect Ratio Selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2865266.
  50. Y. Wang et al., “Improving the Robustness of Scagnostics,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2019.2934796.
  51. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2864506.
  52. L. Zhou, R. Netzel, D. Weiskopf, and C. R. Johnson, “Spectral Visualization Sharpening,” in Proceedings of the ACM Symposium on Applied Perception (SAP), 2019, pp. 18:1–18:9, doi: 10.1145/3343036.3343133.
  53. H. Bast, P. Brosi, and S. Storandt, “Efficient Generation of Geographically Accurate Transit Maps,” in Proceedings of the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL), 2018, pp. 13–22, doi: 10.1145/3274895.3274955.
  54. M. Behrisch et al., “Quality Metrics for Information Visualization,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13446.
  55. H. Ben Lahmar, M. Herschel, M. Blumenschein, and D. A. Keim, “Provenance-based Visual Data Exploration with EVLIN,” in Proceedings of the Conference on Extending Database Technology (EDBT), 2018, pp. 686–689, doi: 10.5441/002/edbt.2018.85.
  56. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2018, pp. 36–47, doi: 10.1109/VAST.2018.8802486.
  57. S. S. Borojeni, S. C. J. Boll, W. Heuten, H. H. Bülthoff, and L. L. Chuang, “Feel the Movement: Real Motion Influences Responses to Take-Over Requests in Highly Automated Vehicles,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 246:1–246:13, doi: 10.1145/3173574.3173820.
  58. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-Based Large Dynamic Graph Analytics,” in Proceedings of the International Conference Information Visualisation (IV), 2018, pp. 210–219, doi: 10.1109/iV.2018.00045.
  59. L. L. Chuang and U. Pfeil, “Transparency and Openness Promotion Guidelines for HCI,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2018, pp. SIG04:1–SIG04:4, doi: 10.1145/3170427.3185377.
  60. M. de Ridder, K. Klein, and J. Kim, “A Review and Outlook on Visual Analytics for Uncertainties in Functional Magnetic Resonance Imaging,” Brain Informatics, vol. 5, no. 2, Art. no. 2, 2018, doi: 10.1186/s40708-018-0083-0.
  61. L. J. Debbeler, M. Gamp, M. Blumenschein, D. A. Keim, and B. Renner, “Polarized But Illusory Beliefs About Tap and Bottled Water: A Product- and Consumer-Oriented Survey and Blind Tasting Experiment,” Science of the Total Environment, vol. 643, pp. 1400–1410, 2018, doi: 10.1016/j.scitotenv.2018.06.190.
  62. T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 419:1–419:12, doi: 10.1145/3173574.3173993.
  63. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13438.
  64. F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), 2018, pp. 87–91, doi: 10.1109/LDAV.2018.8739215.
  65. C. Glatz and L. L. Chuang, “The Time Course of Auditory Looming Cues in Redirecting Visuo-Spatial Attention,” Scientific Reports, vol. 9, pp. 743:1–743:10, 2018, doi: 10.1038/s41598-018-36033-8.
  66. C. Glatz, S. S. Krupenia, H. H. Bülthoff, and L. L. Chuang, “Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 472:1–472:13, doi: 10.1145/3173574.3174046.
  67. J. Görtler, R. Kehlbeck, and O. Deussen, “A Visual Exploration of Gaussian Processes,” Distill, 2018, doi: 10.23915/distill.00017.
  68. J. Görtler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble Treemaps for Uncertainty Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2743959.
  69. A. Hautli-Janisz, C. Rohrdantz, C. Schätzle, A. Stoffel, M. Butt, and D. A. Keim, “Visual Analytics in Diachronic Linguistic Investigations,” Linguistic Visualizations, 2018.
  70. V. Hosu, H. Lin, and D. Saupe, “Expertise Screening in Crowdsourcing Image Quality,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 276–281, doi: 10.1109/QoMEX.2018.8463427.
  71. S. Hubenschmid, J. Zagermann, S. Butscher, and H. Reiterer, “Employing Tangible Visualisations in Augmented Reality with Mobile Devices,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), 2018, pp. 1–4, [Online]. Available: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1iooenfo4fofm8.
  72. K. Hänsel, R. Poguntke, H. Haddadi, A. Alomainy, and A. Schmidt, “What to Put on the User: Sensing Technologies for Studies and Physiology Aware Systems,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 145:1–145:14, doi: 10.1145/3173574.3173719.
  73. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime Quality Assessment of Iris Biometrics Under Visible Light,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018, pp. 443–452, doi: 10.1109/CVPRW.2018.00085.
  74. J. Karolus, H. Schuff, T. Kosch, P. W. Wozniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Proceedings of the Designing Interactive Systems Conference (DIS), 2018, pp. 651–655, doi: 10.1145/3196709.3196803.
  75. M. Klapperstueck et al., “Contextuwall: Multi-site Collaboration Using Display Walls,” Journal of Visual Languages & Computing, vol. 46, pp. 35–42, 2018, doi: 10.1016/j.jvlc.2017.10.002.
  76. P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2018, pp. 345:1–345:9, doi: 10.1145/3173574.3173919.
  77. T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly,” Proceedings of the ACM on Human-Computer Interaction (ACMHCI), vol. 2, pp. 11:1–11:20, 2018, doi: 10.1145/3229093.
  78. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural Networks for the Classification of Building Use from Street-view Imagery,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV–2, pp. 177–184, 2018, doi: 10.5194/isprs-annals-IV-2-177-2018.
  79. N. Marniok and B. Goldluecke, “Real-time Variational Range Image Fusion and Visualization for Large-scale Scenes using GPU Hash Tables,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2018, pp. 912–920, doi: 10.1109/WACV.2018.00105.
  80. K. Marriott et al., Immersive Analytics, Lecture Notes in Computer Science, vol. 11190. Springer International Publishing, 2018.
  81. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018, pp. 106:1–106:13, [Online]. Available: http://bmvc2018.org/contents/papers/0377.pdf.
  82. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018, pp. 86:1–86:13, arXiv: 1806.00800.
  83. D. Maurer, N. Marniok, B. Goldluecke, and A. Bruhn, “Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimation,” in Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11212, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Springer International Publishing, 2018, pp. 575–592.
  84. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision, vol. 126, no. 12, Art. no. 12, 2018, doi: 10.1007/s11263-018-1079-1.
  85. H. Men, H. Lin, and D. Saupe, “Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 1–3, doi: 10.1109/QoMEX.2018.8463426.
  86. A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of Vection Latencies in the Full-Body Illusion,” PLoS ONE, vol. 13, no. 12, Art. no. 12, 2018, doi: 10.1371/journal.pone.0209189.
  87. S. Oppold and M. Herschel, “Provenance for Entity Resolution,” in Provenance and Annotation of Data and Processes. IPAW 2018. Lecture Notes in Computer Science, vol. 11017, K. Belhajjame, A. Gehani, and P. Alper, Eds. Springer International Publishing, 2018, pp. 226–230.
  88. N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744018.
  89. N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale Scanpath Visualization and Filtering,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), 2018, pp. 2:1–2:5, doi: 10.1145/3205929.3205931.
  90. D. Sacha et al., “SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744805.
  91. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory Task Irrelevance: A Basis for Inattentional Deafness,” Human Factors, vol. 60, no. 3, Art. no. 3, 2018, doi: 10.1177/0018720818760919.
  92. C. Schulz, K. Schatz, M. Krone, M. Braun, T. Ertl, and D. Weiskopf, “Uncertainty Visualization for Secondary Structures of Proteins,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2018, pp. 96–105, doi: 10.1109/PacificVis.2018.00020.
  93. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2018, pp. 87–95, doi: 10.1109/VISSOFT.2018.00017.
  94. V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an Uncanny Valley of Virtual Animals? A Quantitative and Qualitative Investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018, doi: 10.1016/j.ijhcs.2017.11.003.
  95. C. Schätzle, “Dative Subjects: Historical Change Visualized,” PhD diss., Universität Konstanz, Konstanz, 2018.
  96. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an Interpretable Latent Space: An Intuitive Comparison of Autoencoders with Variational Autoencoders,” 2018, [Online]. Available: https://thilospinner.com/towards-an-interpretable-latent-space/.
  97. T. Torsney-Weir, S. Afroozeh, M. Sedlmair, and T. Möller, “Risk Fixers and Sweet Spotters: a Study of the Different Approaches to Using Visual Sensitivity Analysis in an Investment Scenario,” in Proceedings of the Eurographics Conference on Visualization (EuroVis), 2018, pp. 119–123, doi: 10.2312/eurovisshort.20181089.
  98. A. C. Valdez, M. Ziefle, and M. Sedlmair, “Priming and Anchoring Effects in Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, Art. no. 1, 2018, doi: 10.1109/TVCG.2017.2744138.
  99. D. Varga, D. Saupe, and T. Szirányi, “DeepRN: A Content Preserving Deep Architecture for Blind Image Quality Assessment,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2018, pp. 1–6, doi: 10.1109/ICME.2018.8486528.
  100. Y. Wang et al., “A Perception-driven Approach to Supervised Dimensionality Reduction for Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, Art. no. 5, 2018, doi: 10.1109/TVCG.2017.2701829.
  101. V. Yoghourdjian, T. Dwyer, K. Klein, K. Marriott, and M. Wybrow, “Graph Thumbnails: Identifying and Comparing Multiple Graphs at a Glance,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 12, Art. no. 12, 2018, doi: 10.1109/TVCG.2018.2790961.
  102. J. Zagermann, U. Pfeil, and H. Reiterer, “Studying Eye Movements as a Basis for Measuring Cognitive Load,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2018, pp. LBW095:1–LBW095:6, doi: 10.1145/3170427.3188628.
  103. Y. Zhu et al., “Genome-scale Metabolic Modeling of Responses to Polymyxins in Pseudomonas Aeruginosa,” GigaScience, vol. 7, no. 4, Art. no. 4, 2018, doi: 10.1093/gigascience/giy021.
  104. Y. Abdelrahman, P. Knierim, P. W. Wozniak, N. Henze, and A. Schmidt, “See Through the Fire: Evaluating the Augmentation of Visual Perception of Firefighters Using Depth and Thermal Cameras,” in Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and Symposium on Wearable Computers (UbiComp/ISWC), 2017, pp. 693–696, doi: 10.1145/3123024.3129269.
  105. J. Allsop, R. Gray, H. Bülthoff, and L. Chuang, “Eye Movement Planning on Single-Sensor-Single-Indicator Displays is Vulnerable to User Anxiety and Cognitive Load,” Journal of Eye Movement Research, vol. 10, no. 5, Art. no. 5, 2017, doi: 10.16910/jemr.10.5.8.
  106. M. A. Baazizi, H. Ben Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Proceedings of the Conference on Extending Database Technology (EDBT), 2017, pp. 222–233, doi: 10.5441/002/edbt.2017.21.
  107. D. Bahrdt et al., “Growing Balls in ℝ^d,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), 2017, pp. 247–258, doi: 10.1137/1.9781611974768.20.
  108. A. Barth, B. Harrach, N. Hyvönen, and L. Mustonen, “Detecting Stochastic Inclusions in Electrical Impedance Tomography,” Inverse Problems, vol. 33, no. 11, Art. no. 11, 2017, doi: 10.1088/1361-6420/aa8f5c.
  109. M. Behrisch et al., “Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598467.
  110. H. Ben Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), 2017, pp. 1–7.
  111. H. Booth, C. Schätzle, K. Börjars, and M. Butt, “Dative Subjects and the Rise of Positional Licensing in Icelandic,” in Proceedings of the LFG’17 Conference, 2017, pp. 104–124, [Online]. Available: http://web.stanford.edu/group/cslipublications/cslipublications/LFG/LFG-2017/lfg2017-bsbb.pdf.
  112. V. Bruder, S. Frey, and T. Ertl, “Prediction-Based Load Balancing and Resolution Tuning for Interactive Volume Raycasting,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.09.001.
  113. M. Burch, M. Hlawatsch, and D. Weiskopf, “Visualizing a Sequence of a Thousand Graphs (or Even More),” Computer Graphics Forum, vol. 36, no. 3, Art. no. 3, 2017, doi: 10.1111/cgf.13185.
  114. L. L. Chuang, C. Glatz, and S. S. Krupenia, “Using EEG to Understand why Behavior to Auditory In-vehicle Notifications Differs Across Test Environments,” in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), 2017, pp. 123–133, doi: 10.1145/3122986.3123017.
  115. M. Correll and J. Heer, “Surprise! Bayesian Weighting for De-Biasing Thematic Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg23.html#CorrellH17.
  116. M. de Ridder, K. Klein, and J. Kim, “Temporaltracks: Visual Analytics for Exploration of 4D fMRI Time-series Coactivation,” in Proceedings of the Computer Graphics International Conference (CGI), 2017, pp. 13:1–13:6, doi: 10.1145/3095140.3095153.
  117. K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural Correlates of Decision Making on Whole Body Yaw Rotation: an fNIRS Study,” Neuroscience Letters, vol. 654, pp. 56–62, 2017, doi: 10.1016/j.neulet.2017.04.053.
  118. O. Deussen, M. Spicker, and Q. Zheng, “Weighted Linde-Buzo-Gray Stippling,” ACM Transactions on Graphics, vol. 36, no. 6, Art. no. 6, 2017, doi: 10.1145/3130800.3130819.
  119. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in Proceedings of the USENIX Conference on Theory and Practice of Provenance (TAPP), 2017, pp. 1–13, [Online]. Available: https://dl.acm.org/doi/abs/10.5555/3183865.3183883.
  120. T. Dingler, A. Schmidt, and T. Machulla, “Building Cognition-Aware Systems: A Mobile Toolkit for Extracting Time-of-Day Fluctuations of Cognitive Performance,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1, no. 3, Art. no. 3, 2017, doi: 10.1145/3132025.
  121. S. Egger-Lampl et al., “Crowdsourcing Quality of Experience Experiments,” in Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions, Lecture Notes in Computer Science, vol. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 154–190.
  122. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” Informatics, vol. 4, no. 3, Art. no. 27, 2017, doi: 10.3390/informatics4030027.
  123. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2599042.
  124. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum, vol. 36, no. 8, Art. no. 8, 2017, doi: 10.1111/cgf.13070.
  125. D. Fritsch, “Photogrammetrische Auswertung digitaler Bilder – Neue Methoden der Kamerakalibration, dichten Bildzuordnung und Interpretation von Punktwolken,” in Photogrammetrie und Fernerkundung, C. Heipke, Ed. Springer Spektrum, 2017, pp. 157–196.
  126. D. Fritsch and M. Klein, “3D and 4D Modeling for AR and VR App Developments,” in Proceedings of the International Conference on Virtual System & Multimedia (VSMM), 2017, pp. 1–8, doi: 10.1109/VSMM.2017.8346270.
  127. S. Funke, T. Mendel, A. Miller, S. Storandt, and M. Wiebe, “Map Simplification with Topology Constraints: Exactly and in Practice,” in Proceedings of the Meeting on Algorithm Engineering and Experiments (ALENEX), 2017, pp. 185–196, doi: 10.1137/1.9781611974768.15.
  128. S. Funke, N. Schnelle, and S. Storandt, “URAN: A Unified Data Structure for Rendering and Navigation,” in Web and Wireless Geographical Information Systems. W2GIS 2017. Lecture Notes in Computer Science, vol. 10181, D. Brosset, C. Claramunt, X. Li, and T. Wang, Eds. 2017, pp. 66–82.
  129. U. Gadiraju et al., “Crowdsourcing Versus the Laboratory: Towards Human-centered Experiments Using the Crowd,” in Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions, Lecture Notes in Computer Science, vol. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 6–26.
  130. P. Gralka, C. Schulz, G. Reina, D. Weiskopf, and T. Ertl, “Visual Exploration of Memory Traces and Call Stacks,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2017, pp. 54–63, doi: 10.1109/VISSOFT.2017.15.
  131. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” 2017, doi: 10.2312/eurp.20171166.
  132. M. Herschel, R. Diestelkämper, and H. Ben Lahmar, “A Survey on Provenance - What for? What form? What from?,” The VLDB Journal, vol. 26, pp. 881–906, 2017, doi: 10.1007/s00778-017-0486-1.
  133. V. Hosu et al., “The Konstanz natural video database (KoNViD-1k),” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2017, pp. 1–6, doi: 10.1109/QoMEX.2017.7965673.
  134. J. Iseringhausen et al., “4D Imaging through Spray-on Optics,” ACM Transactions on Graphics, vol. 36, no. 4, Art. no. 4, 2017, doi: 10.1145/3072959.3073589.
  135. O. Johannsen et al., “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Workshops, 2017, pp. 1795–1812, doi: 10.1109/CVPRW.2017.226.
  136. D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2017, vol. 3, pp. 164–175, doi: 10.5220/0006265101640175.
  137. D. Jäckle, M. Hund, M. Behrisch, D. A. Keim, and T. Schreck, “Pattern Trails: Visual Analysis of Pattern Transitions in Subspaces,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2017, pp. 1–12, doi: 10.1109/VAST.2017.8585613.
  138. J. Karolus, P. W. Wozniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 2998–3010, doi: 10.1145/3025453.3025601.
  139. P. Knierim et al., “Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2017, pp. 433–436, doi: 10.1145/3027063.3050426.
  140. J. Kratt, F. Eisenkeil, M. Spicker, Y. Wang, D. Weiskopf, and O. Deussen, “Structure-aware Stylization of Mountainous Terrains,” in Vision, Modeling & Visualization, M. Hullin, R. Klein, T. Schultz, and A. Yao, Eds. The Eurographics Association, 2017.
  141. M. Krone et al., “Molecular Surface Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598824.
  142. K. Kurzhals, M. Hlawatsch, C. Seeger, and D. Weiskopf, “Visual Analytics for Mobile Eye Tracking,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598695.
  143. K. Kurzhals, E. Çetinkaya, Y. Hu, W. Wang, and D. Weiskopf, “Close to the Action: Eye-tracking Evaluation of Speaker-following Subtitles,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 6559–6568, doi: 10.1145/3025453.3025772.
  144. K. Kurzhals, M. Stoll, A. Bruhn, and D. Weiskopf, “FlowBrush: Optical Flow Art,” in Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (EXPRESSIVE, co-located with SIGGRAPH), 2017, pp. 1:1-1:9, doi: 10.1145/3092912.3092914.
  145. H. V. Le, V. Schwind, P. Göttlich, and N. Henze, “PredicTouch: A System to Reduce Touchscreen Latency using Neural Networks and Inertial Measurement Units,” in Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS), 2017, pp. 230–239, doi: 10.1145/3132272.3134138.
  146. T. Machulla, L. Chuang, F. Kiss, M. O. Ernst, and A. Schmidt, “Sensory Amplification Through Crossmodal Stimulation,” 2017.
  147. N. Marniok, O. Johannsen, and B. Goldluecke, “An Efficient Octree Design for Local Variational Range Image Fusion,” in Pattern Recognition. GCPR 2017. Lecture Notes in Computer Science, vol. 10496, V. Roth and T. Vetter, Eds. Springer International Publishing, 2017, pp. 401–412.
  148. D. Maurer, M. Stoll, S. Volz, P. Gairing, and A. Bruhn, “A Comparison of Isotropic and Anisotropic Second Order Regularisers for Optical Flow,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol. 10302, F. Lauze, Y. Dong, and A. B. Dahl, Eds. Springer International Publishing, 2017, pp. 537–549.
  149. D. Maurer, A. Bruhn, and M. Stoll, “Order-adaptive and Illumination-aware Variational Optical Flow Refinement,” in Proceedings of the British Machine Vision Conference (BMVC), 2017, pp. 150:1-150:13, doi: 10.5244/C.31.150.
  150. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive Regularisation for Variational Optical Flow: Global, Local and in Between,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, 2017, vol. 10302, pp. 550–562, doi: 10.1007/978-3-319-58771-4_44.
  151. L. Merino et al., “On the Impact of the Medium in the Effectiveness of 3D Software Visualizations,” in Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT), 2017, pp. 11–21, doi: 10.1109/VISSOFT.2017.17.
  152. A. Nesti, K. de Winkel, and H. Bülthoff, “Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation,” PloS ONE, vol. 12, no. 1, Art. no. 1, 2017, doi: 10.1371/journal.pone.0170497.
  153. R. Netzel, M. Hlawatsch, M. Burch, S. Balakrishnan, H. Schmauder, and D. Weiskopf, “An Evaluation of Visual Search Support in Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598898.
  154. R. Netzel, J. Vuong, U. Engelke, S. O’Donoghue, D. Weiskopf, and J. Heinrich, “Comparative Eye-tracking Evaluation of Scatterplots and Parallel Coordinates,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.11.001.
  155. H. T. Nim et al., “Design Considerations for Immersive Analytics of Bird Movements Obtained by Miniaturised GPS Sensors,” 2017, doi: 10.2312/vcbm.20171234.
  156. N. Rodrigues et al., “Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants,” in Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI), 2017, pp. 37–44, doi: 10.1145/3105971.3105982.
  157. N. Rodrigues, M. Burch, L. Di Silvestro, and D. Weiskopf, “A Visual Analytics Approach for Word Relevances in Multiple Texts,” in Proceedings of the International Conference on Information Visualisation (IV), 2017, pp. 1–7, doi: 10.1109/iV.2017.62.
  158. D. Sacha et al., “Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598495.
  159. D. Sacha et al., “What You See Is What You Can Change: Human-Centered Machine Learning by Interactive Visualization,” Neurocomputing, vol. 268, pp. 164–175, 2017, doi: 10.1016/j.neucom.2017.01.105.
  160. H. Sattar, A. Bulling, and M. Fritz, “Predicting the Category and Attributes of Visual Search Targets Using Deep Gaze Pooling,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), 2017, pp. 2740–2748, doi: 10.1109/ICCVW.2017.322.
  161. C. Schulz, A. Nocaj, J. Goertler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2598919.
  162. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications. ETVIS 2015, M. Burch, L. Chuang, B. Fisher, A. Schmidt, and D. Weiskopf, Eds. Springer International Publishing, 2017, pp. 199–216.
  163. C. Schulz, N. Rodrigues, K. Damarla, A. Henicke, and D. Weiskopf, “Visual Exploration of Mainframe Workloads,” in Proceedings of the SIGGRAPH Asia Symposium on Visualization, 2017, pp. 4:1-4:7, doi: 10.1145/3139295.3139312.
  164. V. Schwind, K. Wolf, and N. Henze, “FaceMaker - A Procedural Face Generator to Foster Character Design Research,” in Game Dynamics: Best Practices in Procedural and Dynamic Game Content Generation, O. Korn and N. Lee, Eds. Springer International Publishing, 2017, pp. 95–113.
  165. V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze, “‘These are not my hands!’: Effect of Gender on the Perception of Avatar Hands in Virtual Reality,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 1577–1582, doi: 10.1145/3025453.3025602.
  166. V. Schwind, P. Knierim, L. L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY), 2017, pp. 507–515, doi: 10.1145/3116595.3116596.
  167. C. Schätzle, “Genitiv als Stilmittel in der Novelle,” Scalable Reading. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), vol. 47, pp. 125–140, 2017, doi: 10.1007/s41244-017-0043-9.
  168. C. Schätzle, M. Hund, F. L. Dennig, M. Butt, and D. A. Keim, “HistoBankVis: Detecting Language Change via Data Visualization,” in Proceedings of the NoDaLiDa 2017 Workshop Processing Historical Language, 2017, pp. 32–39, [Online]. Available: https://www.aclweb.org/anthology/W17-0507.
  169. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of the Symposium on Non-Photorealistic Animation and Rendering (NPAR), 2017, pp. 8:1-8:10, doi: 10.1145/3092919.3092923.
  170. K. Srulijes et al., “Visualization of Eye-Head Coordination While Walking in Healthy Subjects and Patients with Neurodegenerative Diseases,” poster (reviewed) presented at the Symposium of the International Society of Posture and Gait Research (ISPGR), 2017.
  171. M. Stein et al., “Bring it to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, pp. 13–22, 2017, doi: 10.1109/TVCG.2017.2745181.
  172. M. Stoll, D. Maurer, S. Volz, and A. Bruhn, “Illumination-aware Large Displacement Optical Flow,” in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, vol. 10746, M. Pelillo and E. R. Hancock, Eds. Springer International Publishing, 2017, pp. 139–154.
  173. M. Stoll, D. Maurer, and A. Bruhn, “Variational Large Displacement Optical Flow Without Feature Matches,” in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, 2017, vol. 10746, pp. 79–92, doi: 10.1007/978-3-319-78199-0_6.
  174. G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” in Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2017, pp. 11–20, doi: 10.2312/pgv.20171089.
  175. M. Tonsen, J. Steil, Y. Sugano, and A. Bulling, “InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), vol. 1, no. 3, pp. 106:1-106:21, 2017, doi: 10.1145/3130971.
  176. P. Tutzauer and N. Haala, “Processing of Crawled Urban Imagery for Building Use Classification,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-1/W1, pp. 143–149, 2017, doi: 10.5194/isprs-archives-XLII-1-W1-143-2017.
  177. P. Tutzauer, S. Becker, and N. Haala, “Perceptual Rules for Building Enhancements in 3D Virtual Worlds,” i-com, vol. 16, no. 3, Art. no. 3, 2017, doi: 10.1515/icom-2017-0022.
  178. M. van Garderen, B. Pampel, A. Nocaj, and U. Brandes, “Minimum-Displacement Overlap Removal for Geo-referenced Data Visualization,” Computer Graphics Forum, vol. 36, no. 3, Art. no. 3, 2017, doi: 10.1111/cgf.13199.
  179. J. Zagermann, U. Pfeil, C. Acevedo, and H. Reiterer, “Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-device Workspace,” in Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM), 2017, pp. 249–259, doi: 10.1145/3152832.3152855.
  180. J. Zagermann, U. Pfeil, D. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-based Input Modalities on Spatial Memory,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 1899–1910, doi: 10.1145/3025453.3026001.
  181. X. Zhang, Y. Sugano, M. Fritz, and A. Bulling, “MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 1, Art. no. 1, 2017, doi: 10.1109/TPAMI.2017.2778103.
  182. X. Zhang, Y. Sugano, and A. Bulling, “Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery,” in Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), 2017, pp. 193–203, doi: 10.1145/3126594.3126614.
  183. M. Aupetit and M. Sedlmair, “SepMe: 2002 New Visual Separation Measures,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2016, pp. 1–8, doi: 10.1109/PACIFICVIS.2016.7465244.
  184. A. Barth and A. Stein, “Approximation and Simulation of Infinite-Dimensional Lévy Processes,” Stochastics and Partial Differential Equations: Analysis and Computations, vol. 6, no. 2, Art. no. 2, 2016, doi: 10.1007/s40072-017-0109-2.
  185. A. Barth, R. Bürger, I. Kröker, and C. Rohde, “Computational Uncertainty Quantification for a Clarifier-thickener Model with Several Random Perturbations: A Hybrid Stochastic Galerkin Approach,” Computers & Chemical Engineering, vol. 89, pp. 11–26, 2016, doi: 10.1016/j.compchemeng.2016.02.016.
  186. A. Barth and F. G. Fuchs, “Uncertainty Quantification for Hyperbolic Conservation Laws with Flux Coefficients Given by Spatiotemporal Random Fields,” SIAM Journal on Scientific Computing, vol. 38, no. 4, Art. no. 4, 2016, doi: 10.1137/15M1027723.
  187. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual Analysis and Coding of Data-rich User Behavior,” in Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST), 2016, pp. 141–150, doi: 10.1109/VAST.2016.7883520.
  188. V. Bruder, S. Frey, and T. Ertl, “Real-Time Performance Prediction and Tuning for Interactive Volume Raycasting,” in Proceedings of the SIGGRAPH Asia Symposium on Visualization, 2016, pp. 1–8, doi: 10.1145/3002151.3002156.
  189. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” in Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 2016, vol. 2: IVAPP, doi: 10.5220/0005679601950202.
  190. S. Butscher and H. Reiterer, “Applying Guidelines for the Design of Distortions on Focus+Context Interfaces,” in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), 2016, pp. 244–247, doi: 10.1145/2909132.2909284.
  191. S. Cheng and K. Mueller, “The Data Context Map: Fusing Data and Attributes into a Unified Display,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, [Online]. Available: http://dblp.uni-trier.de/db/journals/tvcg/tvcg22.html#ChengM16.
  192. M. Correll and J. Heer, “Black Hat Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, [Online]. Available: http://idl.cs.washington.edu/files/2017-BlackHatVis-DECISIVe.pdf.
  193. T. Dingler, R. Rzayev, V. Schwind, and N. Henze, “RSVP on the go - Implicit Reading Support on Smart Watches Through Eye Tracking,” in Proceedings of the ACM International Symposium on Wearable Computers (ISWC), 2016, pp. 116–119, doi: 10.1145/2971763.2971794.
  194. N. Flad, J. C. Ditz, A. Schmidt, H. H. Bülthoff, and L. L. Chuang, “Data-Driven Approaches to Unrestricted Gaze-Tracking Benefit from Saccade Filtering,” in Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS), 2016, pp. 1–5, doi: 10.1109/ETVIS.2016.7851156.
  195. S. Frey and T. Ertl, “Auto-Tuning Intermediate Representations for In Situ Visualization,” in Proceedings of the New York Scientific Data Summit (NYSDS), 2016, pp. 1–10, doi: 10.1109/NYSDS.2016.7747807.
  196. S. Funke, A. Nusser, and S. Storandt, “On k-Path Covers and their Applications,” VLDB Journal, vol. 25, no. 1, Art. no. 1, 2016, doi: 10.1007/s00778-015-0392-3.
  197. S. Funke, F. Krumpe, and S. Storandt, “Crushing Disks Efficiently,” in Combinatorial Algorithms. IWOCA 2016. Lecture Notes in Computer Science, vol. 9843, V. Mäkinen, S. J. Puglisi, and L. Salmela, Eds. Springer International Publishing, 2016, pp. 43–54.
  198. M. Greis, P. El Agroudy, H. Schuff, T. Machulla, and A. Schmidt, “Decision-Making under Uncertainty: How the Amount of Presented Uncertainty Influences User Behavior,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI), 2016, doi: 10.1145/2971485.2971535.
  199. M. Herschel and M. Hlawatsch, “Provenance: On and Behind the Screens,” in Proceedings of the ACM International Conference on the Management of Data (SIGMOD), 2016, pp. 2213–2217, doi: 10.1145/2882903.2912568.
  200. J. Hildenbrand, A. Nocaj, and U. Brandes, “Flexible Level-of-Detail Rendering for Large Graphs,” in Graph Drawing and Network Visualization. GD 2016. Lecture Notes in Computer Science, vol. 9801, Y. Hu and M. Nöllenburg, Eds. Springer International Publishing, 2016.
  201. V. Hosu, F. Hahn, O. Wiedemann, S.-H. Jung, and D. Saupe, “Saliency-driven Image Coding Improves Overall Perceived JPEG Quality,” in Proceedings of the Picture Coding Symposium (PCS), 2016, pp. 1–5, doi: 10.1109/PCS.2016.7906397.
  202. V. Hosu, F. Hahn, I. Zingman, and D. Saupe, “Reported Attention as a Promising Alternative to Gaze in IQA Tasks,” in Proceedings of the 5th ISCA/DEGA Workshop on Perceptual Quality of Systems (PQS 2016), 2016, pp. 117–121, doi: 10.21437/PQS.2016-25.
  203. M. Hund et al., “Visual Analytics for Concept Exploration in Subspaces of Patient Groups,” Brain Informatics, vol. 3, no. 4, Art. no. 4, 2016, doi: 10.1007/s40708-016-0043-5.
  204. M. Hund et al., “Visual Quality Assessment of Subspace Clusterings,” in Proceedings of the KDD Workshop on Interactive Data Exploration and Analytics (IDEA), 2016, pp. 53–62.
  205. O. Johannsen, A. Sulc, N. Marniok, and B. Goldluecke, “Layered Scene Reconstruction from Multiple Light Field Camera Views,” in Computer Vision – ACCV 2016. ACCV 2016. Lecture Notes in Computer Science, vol. 10113, S.-H. Lai, V. Lepetit, K. Nishino, and Y. Sato, Eds. Springer International Publishing, 2016, pp. 3–18.
  206. J. Karolus, P. W. Woźniak, and L. L. Chuang, “Towards Using Gaze Properties to Detect Language Proficiency,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI), New York, NY, USA, 2016, pp. 118:1-118:6, doi: 10.1145/2971485.2996753.
  207. A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller, “Multi-Similarity Matrices of Eye Movement Data,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), 2016, pp. 26–30, doi: 10.1109/ETVIS.2016.7851161.
  208. K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2016, pp. 11–18, doi: 10.1145/2857491.2857507.
  209. K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf, “Gaze Stripes: Image-based Visualization of Eye Tracking Data,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, Art. no. 1, 2016, doi: 10.1109/TVCG.2015.2468091.
  210. K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,” Information Visualization, vol. 15, no. 4, Art. no. 4, 2016, doi: 10.1177/1473871615609787.
  211. L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen Arrangements and Interaction Areas for Large Display Work Places,” in Proceedings of the ACM International Symposium on Pervasive Displays (PerDis), 2016, vol. 5, pp. 228–234, doi: 10.1145/2914920.2915027.
  212. L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze, “MAGIC-Pointing on Large High-Resolution Displays,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2016, pp. 1706–1712, doi: 10.1145/2851581.2892479.
  213. J. Müller, R. Rädle, and H. Reiterer, “Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2016, pp. 1245–1249, doi: 10.1145/2858036.2858043.
  214. R. Netzel and D. Weiskopf, “Hilbert Attention Maps for Visualizing Spatiotemporal Gaze Data,” in Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), 2016, pp. 21–25, doi: 10.1109/ETVIS.2016.7851160.
  215. R. Netzel, M. Burch, and D. Weiskopf, “Interactive Scanpath-oriented Annotation of Fixations,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2016, pp. 183–187, doi: 10.1145/2857491.2857498.
  216. R. Netzel, M. Burch, and D. Weiskopf, “User Performance and Reading Strategies for Metro Maps: An Eye Tracking Study,” Spatial Cognition and Computation, Special Issue: Eye Tracking for Spatial Research, 2016, doi: 10.1080/13875868.2016.1226839.
  217. A. Nocaj, M. Ortmann, and U. Brandes, “Adaptive Disentanglement Based on Local Clustering in Small-World Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 6, Art. no. 6, 2016, doi: 10.1109/TVCG.2016.2534559.
  218. B. Pfleging, D. K. Fekety, A. Schmidt, and A. L. Kun, “A Model Relating Pupil Diameter to Mental Workload and Lighting Conditions,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2016, pp. 5776–5788, doi: 10.1145/2858036.2858117.
  219. D. Sacha et al., “Human-Centered Machine Learning Through Interactive Visualization: Review and Open Challenges,” in Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2016, [Online]. Available: http://dblp.uni-trier.de/db/conf/esann/esann2016.html#SachaSZLWNK16.
  220. D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li, “Crowd Workers Proven Useful: A Comparative Study of Subjective Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2016, pp. 1–2, [Online]. Available: https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/SaHaHo16.pdf.
  221. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Steering Demands Diminish the Early-P3, Late-P3 and RON Components of the Event-Related Potential of Task-Irrelevant Environmental Sounds,” Frontiers in Human Neuroscience, vol. 10, pp. 73:1-73:15, 2016, doi: 10.3389/fnhum.2016.00073.
  222. C. Schulz et al., “Generative Data Models for Validation and Evaluation of Visualization Techniques,” in Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), 2016, pp. 112–124, doi: 10.1145/2993901.2993907.
  223. C. Schätzle and D. Sacha, “Visualizing Language Change: Dative Subjects in Icelandic,” in Proceedings of the LREC 2016 Workshop VisLRII: Visualization as Added Value in the Development, Use and Evaluation of Language Resources, 2016, pp. 8–15, [Online]. Available: http://www.lrec-conf.org/proceedings/lrec2016/workshops/LREC2016Workshop-VisLR%20II_Proceedings.pdf.
  224. P. Tutzauer, S. Becker, T. Niese, O. Deussen, and D. Fritsch, “Understanding Human Perception of Building Categories in Virtual 3D Cities - A User Study,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS), vol. XLI-B2, pp. 683–687, 2016, doi: 10.5194/isprs-archives-XLI-B2-683-2016.
  225. P. Tutzauer, S. Becker, D. Fritsch, T. Niese, and O. Deussen, “A Study of the Human Comprehension of Building Categories Based on Different 3D Building Representations,” Photogrammetrie - Fernerkundung - Geoinformation, vol. 2016, no. 5–6, Art. no. 5–6, 2016, doi: 10.1127/pfg/2016/0302.
  226. A. Voit, T. Machulla, D. Weber, V. Schwind, S. Schneegaß, and N. Henze, “Exploring Notifications in Smart Home Environments,” in Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI), 2016, pp. 942–947, doi: 10.1145/2957265.2962661.
  227. T. Waltemate et al., “The Impact of Latency on Perceptual Judgments and Motor Performance in Closed-loop Interaction in Virtual Reality,” in Proceedings of the ACM Conference on Virtual Reality Software and Technology (VRST), 2016, pp. 27–35, doi: 10.1145/2993369.2993381.
  228. D. Weiskopf, M. Burch, L. L. Chuang, B. Fisher, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016.
  229. E. Wood, T. Baltrusaitis, L.-P. Morency, P. Robinson, and A. Bulling, “Learning an Appearance-Based Gaze Estimator from One Million Synthesised Images,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2016, pp. 131–138, doi: 10.1145/2857491.2857492.
  230. E. Wood, T. Baltrusaitis, L.-P. Morency, P. Robinson, and A. Bulling, “A 3D Morphable Eye Region Model for Gaze Estimation,” in Proceedings of the European Conference on Computer Vision (ECCV), 2016, pp. 297–313, doi: 10.1007/978-3-319-46448-0_18.
  231. P. Xu, Y. Sugano, and A. Bulling, “Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2016, pp. 3299–3310, doi: 10.1145/2858036.2858479.
  232. J. Zagermann, U. Pfeil, R. Rädle, H.-C. Jetter, C. N. Klokmose, and H. Reiterer, “When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2016, pp. 5470–5481, doi: 10.1145/2858036.2858224.
  233. J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring Cognitive Load using Eye Tracking Technology in Visual Computing,” in Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), 2016, pp. 78–85, doi: 10.1145/2993901.2993908.
  234. X. Zhang, Y. Sugano, M. Fritz, and A. Bulling, “It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2299–2308, doi: 10.1109/CVPRW.2017.284.
  235. I. Zingman, D. Saupe, O. A. B. Penatti, and K. Lambers, “Detection of Fragmented Rectangular Enclosures in Very High Resolution Remote Sensing Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 8, Art. no. 8, 2016, doi: 10.1109/TGRS.2016.2545919.
  236. T. Chandler et al., “Immersive Analytics,” in Proceedings of the IEEE Symposium on Big Data Visual Analytics (BDVA), 2015, pp. 73–80, doi: 10.1109/BDVA.2015.7314296.
  237. L. L. Chuang and H. H. Bülthoff, “Towards a Better Understanding of Gaze Behavior in the Automobile,” presented at the AutomotiveUI’15, Nottingham, 2015, [Online]. Available: https://www.auto-ui.org/15/p/workshops/2/8_Towards%20a%20Better%20Understanding%20of%20Gaze%20Behavior%20in%20the%20Automobile_Chuang.pdf.
  238. L. L. Chuang, “Error Visualization and Information-Seeking Behavior for Air-Vehicle Control,” in Foundations of Augmented Cognition. AC 2015. Lecture Notes in Computer Science, vol. 9183, D. Schmorrow and C. M. Fidopiastis, Eds. Springer, 2015, pp. 3–11.
  239. N. Flad, T. Fomina, H. H. Bülthoff, and L. L. Chuang, “Unsupervised Clustering of EOG as a Viable Substitute for Optical Eye Tracking,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications, M. Burch, L. L. Chuang, B. D. Fisher, A. Schmidt, and D. Weiskopf, Eds. Springer International Publishing, 2015, pp. 151–167.
  240. S. Frey, F. Sadlo, and T. Ertl, “Balanced Sampling and Compression for Remote Visualization,” in Proceedings of the SIGGRAPH Asia Symposium on High Performance Computing, 2015, pp. 1–4, doi: 10.1145/2818517.2818529.
  241. M. Hund et al., “Subspace Nearest Neighbor Search - Problem Statement, Approaches, and Discussion,” in Similarity Search and Applications. International Conference on Similarity Search and Applications (SISAP). Lecture Notes in Computer Science, vol. 9371, G. Amato, R. Connor, F. Falchi, and C. Gennaro, Eds. Springer, Cham, 2015, pp. 307–313.
  242. K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-based Visualization,” Computing in Science & Engineering, vol. 17, no. 5, Art. no. 5, 2015, doi: 10.1109/MCSE.2015.93.
  243. L. Lischke, P. Knierim, and H. Klinke, “Mid-Air Gestures for Window Management on Large Displays,” in Mensch und Computer 2015 – Tagungsband (MuC), 2015, pp. 439–442, [Online]. Available: https://hdl.handle.net/20.500.12116/7858.
  244. L. Lischke et al., “Using Space: Effect of Display Size on Users’ Search Performance,” in Proceedings of the CHI Conference on Human Factors in Computing Systems-Extended Abstracts (CHI-EA), 2015, pp. 1845–1850, doi: 10.1145/2702613.2732845.
  245. L. Lischke, J. Grüninger, K. Klouche, A. Schmidt, P. Slusallek, and G. Jacucci, “Interaction Techniques for Wall-Sized Screens,” in Proceedings of the International Conference on Interactive Tabletops & Surfaces (ITS), 2015, pp. 501–504, doi: 10.1145/2817721.2835071.
  246. C. Schulz, M. Burch, and D. Weiskopf, “Visual Data Cleansing of Eye Tracking Data,” in Workshop on Eye Tracking and Visualization (ETVIS), 2015, [Online]. Available: http://etvis.visus.uni-stuttgart.de/etvis2015/papers/etvis15_schulz.pdf.
  247. V. Schwind and S. Jäger, “The Uncanny Valley and the Importance of Eye Contact,” in Mensch und Computer 2015 - Tagungsband, S. Diefenbach, N. Henze, and M. Pielot, Eds. De Gruyter Oldenbourg, 2015, pp. 153–162.
  248. M. Sedlmair and M. Aupetit, “Data-driven Evaluation of Visual Quality Measures,” Computer Graphics Forum, vol. 34, no. 3, Art. no. 3, 2015, doi: 10.1111/cgf.12632.
  249. M. Spicker, J. Kratt, D. Arellano, and O. Deussen, “Depth-aware Coherent Line Drawings,” in Proceedings of the SIGGRAPH Asia Symposium on Computer Graphics and Interactive Techniques, Technical Briefs, 2015, pp. 1:1-1:5, doi: 10.1145/2820903.2820909.