1. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying visual abstraction quality for computer-generated illustrations,” ACM Transactions on Applied Perception (TAP), to appear.
  2. B. V., M. C., F. S., and E. T., “On evaluating runtime performance of interactive visualizations,” IEEE Transactions on Visualization and Computer Graphics, to appear.
  3. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D scalar fields,” IEEE Transactions on Visualization and Computer Graphics (TVCG), to appear, 2019.
  4. A. Voit, S. Mayer, V. Schwind, and N. Henze, “Online, VR, AR, Lab, and In-Situ: Comparison of Research Methods to Evaluate Smart Artifacts,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, to appear.
  5. V. Schwind, N. Deierlein, R. Poguntke, and N. Henze, “Understanding the Social Acceptability of Mobile Devices using the Stereotype Content Model,” in Proceedings of the 2019 SIGCHI Conference on Human Factors in Computing Systems (CHI), 2019, to appear.
  6. S. Jaeger et al., “Challenges for Brain Data Analysis in VR Environments,” in Proceedings of IEEE PacificVis 2019, 2019, to appear.
  7. H. V. Le, S. Mayer, and N. Henze, “Investigating the Feasibility of Finger Identification on Capacitive Touchscreens using Deep Learning,” in Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19), 2019, to appear.
  8. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-based aspect ratio selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, 2019.
  9. R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of simultaneous orientation contrast in superimposed textures,” in Proc. 10th Intl. Conf. Information Vis. Theory Appl. (IVAPP), 2019, to appear.
  10. V. Schwind, P. Knierim, N. Haas, and N. Henze, “Using Presence Questionnaires in Virtual Reality,” in Proceedings of the 2019 SIGCHI Conference on Human Factors in Computing Systems (CHI), 2019, to appear.
  11. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, 2019.
  12. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow Estimation,” in Proceedings of the British Machine Vision Conference (BMVC), 2018.
  13. T. Castermans, M. van Garderen, W. Meulemans, M. Nöllenburg, and X. Yuan, “Short plane supports for spatial hypergraphs,” in Proceedings of the 26th International Symposium on Graph Drawing and Network Visualization (GD 2018), 2018, pp. 53–66.
  14. D. Varga, T. Szirányi, and D. Saupe, “DeepRN: a content preserving deep architecture for blind image quality assessment,” in IEEE Int. Conf. Multimedia and Expo (ICME), 2018.
  15. V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an uncanny valley of virtual animals? A quantitative and qualitative investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018.
  16. C. Glatz, S. Krupenia, H. Bülthoff, and L. Chuang, “Use the right sound for the right job: verbal commands and auditory icons for a task-management system favor different information processes in the brain,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 472:1–472:13.
  17. P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical keyboards in virtual reality: analysis of typing performance and effects of avatar hands,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 345:1–345:9.
  18. A. Hautli-Janisz, C. Rohrdantz, C. Schätzle, A. Stoffel, M. Butt, and D. A. Keim, “Visual analytics in diachronic linguistic investigations,” Linguistic Visualization, 2018.
  19. F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive encoder settings for interactive remote visualization on high-resolution displays,” in Proceedings of the Symposium on Large Data Analysis and Visualization, 2018.
  20. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, pp. 513–524, 2018.
  21. J. Zagermann, U. Pfeil, and H. Reiterer, “Studying eye movements as a basis for measuring,” in Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. LBW095:1–LBW095:6.
  22. V. Hosu, H. Lin, and D. Saupe, “Expertise screening in crowdsourcing image quality,” 2018.
  23. C. Müller et al., “Interactive molecular graphics for augmented reality using Hololens,” Journal of Integrative Bioinformatics, vol. 15, no. 2, 2018.
  24. J. Goertler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble treemaps for uncertainty visualization,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 24, no. 1, pp. 719–728, 2018.
  25. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018.
  26. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory task irrelevance: A basis for inattentional deafness,” Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 60, no. 3, pp. 428–440, 2018.
  27. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural networks for the classification of building use from street-view imagery,” ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV-2, pp. 177–184, 2018.
  28. N. Marniok and B. Goldluecke, “Real-time variational range image fusion and visualization for large-scale scenes using GPU hash tables,” in IEEE Winter Conf. on Applications of Computer Vision (WACV), 2018, pp. 912–920.
  29. J. Karolus, H. Schuff, T. Kosch, P. Wozniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Proceedings of the 2018 Designing Interactive Systems Conference, 2018, pp. 651–655.
  30. S. Oppold and M. Herschel, “Provenance for entity resolution,” in Proceedings of the International Provenance and Annotation Workshop, 2018, pp. 226–230.
  31. D. Maurer, N. Marniok, B. Goldlücke, and A. Bruhn, “Structure-from-Motion aware PatchMatch for Adaptive Optical Flow Estimation,” in Proceedings of the European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol. 11212, Springer, 2018.
  32. H. V. Le, T. Kosch, P. Bader, S. Mayer, and N. Henze, “PalmTouch: using the palm as an additional input modality on commodity smartphones,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 360:1–360:13.
  33. J. Görtler, R. Kehlbeck, and O. Deussen, “A visual exploration of Gaussian processes,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), 2018.
  34. L. J. Debbeler, M. Gamp, M. Blumenschein, D. A. Keim, and B. Renner, “Polarized but illusory beliefs about tap and bottled water: A product- and consumer-oriented survey and blind tasting experiment,” Science of the Total Environment, vol. 643, pp. 1400–1410, 2018.
  35. T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing consistent gestures across device types: eliciting rsvp controls for phone, watch, and glasses,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 419:1–419:12.
  36. H. V. Le, S. Mayer, P. Bader, and N. Henze, “Fingers’ range and comfortable area for one-handed smartphone interaction beyond the touchscreen,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 31:1–31:12.
  37. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an interpretable latent space,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), IEEE VIS, Berlin, 2018.
  38. M. Behrisch et al., “Quality metrics for information visualization,” Computer Graphics Forum (CGF), vol. 37, no. 3, pp. 625–662, 2018.
  39. S. Mayer, L. Lischke, A. Lanksweirt, H. V. Le, and N. Henze, “How to communicate new input techniques,” in Proceedings of the 10th Nordic Conference on Human-Computer Interaction, 2018.
  40. T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying cognitive assistance with mobile electroencephalography: a case study with in-situ projections for manual assembly,” Proceedings of the ACM on Human-Computer Interaction, vol. 2, no. EICS, Article 11, 2018.
  41. C. Schulz, K. Schatz, M. Krone, M. Braun, T. Ertl, and D. Weiskopf, “Uncertainty visualization for secondary structures of proteins,” in Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 2018, pp. 96–105.
  42. S. Borojeni, S. Boll, W. Heuten, H. Bülthoff, and L. Chuang, “Feel the movement: Real motion influences responses to take-over requests in highly automated vehicles,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, Paper 246.
  43. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” 2018.
  44. A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of vection latencies in the full-body illusion,” PLoS One, vol. 13, no. 12, 2018.
  45. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-based large dynamic graph analytics,” 2018.
  46. O. Wiedemann, V. Hosu, H. Lin, and D. Saupe, “Disregarding the big picture: towards local image quality assessment,” in Int. Conf. Quality of Multimedia Experience (QoMEX), 2018.
  47. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime quality assessment of iris biometrics under visible light,” IEEE Computer Society Workshop on Biometrics, pp. 556–565, 2018.
  48. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous visual analysis of multiple software hierarchies,” in 2018 IEEE Working Conference on Software Visualization (VISSOFT), 2018, pp. 87–95.
  49. N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale scanpath visualization and filtering,” in Proceedings of the 3rd Workshop on Eye Tracking and Visualization, 2018, Article 2.
  50. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision (IJCV), 2018.
  51. H. Men, H. Lin, and D. Saupe, “Spatiotemporal feature combination model for no-reference video quality assessment,” in Int. Conf. Quality of Multimedia Experience (QoMEX), 2018.
  52. H. Ben Lahmar, M. Herschel, M. Blumenschein, and D. A. Keim, “Provenance-based visual data exploration with EVLIN,” in Proceedings of the International Conference on Extending Database Technology, 2018, pp. 686–689.
  53. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” Informatics, vol. 4, no. 3, p. 27, 2017.
  54. V. Schwind, K. Wolf, and N. Henze, “FaceMaker – A Procedural Face Generator to Foster Character Design Research,” in Game Dynamics: Best Practices in Procedural and Dynamic Game Content Generation, O. Korn and N. Lee, Eds. Cham: Springer International Publishing, 2017, pp. 95–113.
  55. R. Netzel, M. Hlawatsch, M. Burch, S. Balakrishnan, H. Schmauder, and D. Weiskopf, “An evaluation of visual search support in maps,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 23, no. 1, pp. 421–430, 2017.
  56. H. Booth, C. Schätzle, K. Börjars, and M. Butt, “Dative subjects and the rise of positional licensing in Icelandic,” in Proceedings of the LFG17 Conference, 2017, pp. 104–124.
  57. P. Tutzauer and N. Haala, “Processing of crawled urban imagery for building use classification,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. XLII-1/W1, pp. 143–149, 2017.
  58. V. Hosu et al., “The Konstanz natural video database (KoNViD-1k),” in 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
  59. K. Kurzhals, M. Hlawatsch, C. Seeger, and D. Weiskopf, “Visual analytics for mobile eye tracking,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 23, no. 1, pp. 301–310, 2017.
  60. M. Krone et al., “Molecular Surface Maps,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 23, no. 1, 2017.
  61. R. Netzel, B. Ohlhausen, K. Kurzhals, R. Woods, M. Burch, and D. Weiskopf, “User performance and reading strategies for metro maps: An eye tracking study,” Spatial Cognition & Computation, vol. 17, no. 1–2, pp. 39–64, 2017.
  62. D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” in Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2017, no. 3, pp. 164–175 (Best Student Paper Award).
  63. V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze, “‘These Are Not My Hands!’: Effect of Gender on the Perception of Avatar Hands in Virtual Reality,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Denver, Colorado, USA, 2017, pp. 1577–1582.
  64. A. Nesti, K. de Winkel, and H. Bülthoff, “Accumulation of inertial sensory information in the perception of whole body yaw rotation,” PLoS One, 2017.
  65. D. Sacha et al., “SOMFlow: Guided exploratory cluster analysis with self-organizing maps and analytic provenance,” IEEE Conference on Visual Analytics Science and Technology, 2017.
  66. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 921–930, 2017.
  67. V. Bruder, S. Frey, and T. Ertl, “Prediction-based load balancing and resolution tuning for interactive volume raycasting,” Visual Informatics, vol. 1, no. 2, pp. 106–117, 2017.
  68. O. Deussen, M. Spicker, and Q. Zheng, “Weighted Linde-Buzo-Gray Stippling,” ACM Transactions on Graphics, vol. 36, no. 6, pp. 233:1–233:12, 2017.
  69. K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural correlates of decision making on whole body yaw rotation: an fNIRS study,” Neuroscience Letters, 2017.
  70. C. Schätzle, M. Hund, F. L. Dennig, M. Butt, and D. A. Keim, “HistoBankVis: Detecting Language Change via Data Visualization,” in Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language (NEALT Proceedings Series 32), 2017.
  71. D. Maurer, M. Stoll, S. Volz, P. Gairing, and A. Bruhn, “A comparison of isotropic and anisotropic second order regularisers for optical flow,” in Proceedings of International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), Berlin, 2017, vol. LNCS 10302, pp. 537–549.
  72. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive and illumination-aware variational optical flow refinement,” in British Machine Vision Conference (BMVC), 2017.
  73. M. Stoll, D. Maurer, S. Volz, and A. Bruhn, “Illumination-aware large displacement optical flow,” in Proceedings of the 11th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), 2017, pp. 139–154.
  74. K. Kurzhals, E. Cetinkaya, Y. Hu, W. Wang, and D. Weiskopf, “Close to the action: eyetracking evaluation of speaker-following subtitles,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 6559–6568.
  75. C. Schulz, A. Nocaj, J. Goertler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 23, no. 1, 2017.
  76. J. Allsop, R. Gray, H. Bülthoff, and L. Chuang, “Eye movement planning on single-sensor-single-indicator displays is vulnerable to user anxiety and cognitive load,” Journal of Eye Movement Research, vol. 10, no. 5:8, pp. 1–15, 2017.
  77. M. Behrisch et al., “Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 23, no. 1, pp. 31–40, 2017.
  78. V. Schwind, P. Knierim, L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’17), Amsterdam, Netherlands, 2017, p. 6.
  79. K. Srulijes et al., “Visualization of eye-head coordination while walking in healthy subjects and patients with neurodegenerative diseases,” 2017.
  80. O. Johannsen et al., “A taxonomy and evaluation of dense light field depth estimation algorithms,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1795–1812.
  81. H. Ben Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  82. M. van Garderen, B. Pampel, A. Nocaj, and U. Brandes, “Minimum-displacement overlap removal for geo-referenced data visualization,” Computer Graphics Forum (CGF), vol. 36, no. 3, pp. 423–433, 2017.
  83. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” in EuroVis 2017 - Posters, 2017.
  84. L. Merino et al., “On the Impact of the Medium in the Effectiveness of 3D Software Visualizations,” in VISSOFT’17: Proceedings of the 5th IEEE Working Conference on Software Visualization, 2017.
  85. C. Schätzle, “Genitiv als Stilmittel in der Novelle,” Scalable Reading. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), vol. 47, no. 1, pp. 8–15, 2017.
  86. M. Stoll, S. Volz, D. Maurer, and A. Bruhn, “A time-efficient optimisation framework for parameters of optical flow methods,” in Scandinavian Conference on Image Analysis (SCIA), Berlin, 2017, vol. LNCS 10269, pp. 41–53.
  87. K. Kurzhals, M. Stoll, A. Bruhn, and D. Weiskopf, “FlowBrush: Optical flow art,” in Proceedings of Computational Aesthetics 2017, 2017.
  88. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive regularisation for variational optical flow: global, local and in between,” in Proceedings of International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), Berlin, 2017, vol. LNCS 10302, pp. 550–562.
  89. J. Zagermann, U. Pfeil, C. Acevedo, and H. Reiterer, “Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-Device Workspace,” in Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia (MUM), 2017.
  90. J. Zagermann, U. Pfeil, D. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-Based Input Modalities on Spatial Memory,” 2017.
  91. N. Rodrigues et al., “Visualization of time series data with spatial context: Communicating the energy production of power plants,” in Proceedings of the 10th International Symposium on Visual Information Communication and Interaction (VINCI ’17), 2017, pp. 37–44.
  92. N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 24, no. 1, pp. 616–625, 2018.
  93. N. Rodrigues, M. Burch, L. Di Silvestro, and D. Weiskopf, “A Visual Analytics Approach for Word Relevances in Multiple Texts,” in Proceedings of the International Conference on Information Visualisation (IV), 2017.
  94. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of NPAR’17, 2017.
  95. M. Herschel, R. Diestelkämper, and H. Ben Lahmar, “A survey on provenance - What for? What form? What from?,” International Journal on Very Large Data Bases (VLDB Journal), vol. 26, no. 6, pp. 881–906, 2017.
  96. H. V. Le, V. Schwind, P. Göttlich, and N. Henze, “PredicTouch: A system to reduce touchscreen latency using neural networks and inertial measurement units,” in Proceedings of the 2017 International Conference on Interactive Surfaces and Spaces, 2017, pp. 230–239.
  97. N. Marniok, O. Johannsen, and B. Goldluecke, “An efficient octree design for local variational range image fusion,” in Proc. of the German Conference on Pattern Recognition (GCPR), 2017, pp. 401–412.
  98. D. Fritsch, “Photogrammetrische Auswertung digitaler Bilder – Neue Methoden der Kamerakalibration, dichten Bildzuordnung und Interpretation von Punktwolken,” Photogrammetrie und Fernerkundung, pp. 157–196, 2017.
  99. M. A. Baazizi, H. Ben Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Conference on Extending Database Technology (EDBT), 2017, pp. 222–233.
  100. P. Gralka, C. Schulz, G. Reina, D. Weiskopf, and T. Ertl, “Visual exploration of memory traces and call stacks,” in 2017 IEEE Working Conference on Software Visualization (VISSOFT), 2017, pp. 54–63.
  101. R. Netzel, J. Vuong, U. Engelke, S. O’Donoghue, D. Weiskopf, and J. Heinrich, “Comparative eye-tracking evaluation of scatterplots and parallel coordinates,” Visual Informatics, vol. 1, no. 2, pp. 118–131, 2017.
  102. S. Frey and T. Ertl, “Fast flow-based quantification and interpolation for high-resolution density distribution,” in Proceedings of EuroGraphics Short Papers, 2017.
  103. G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” Eurographics Symposium on Parallel Graphics and Visualization, 2017.
  104. D. Fritsch and M. Klein, “3D and 4D modeling for AR and VR app developments,” in 23rd International Conference on Virtual System & Multimedia (VSMM), 2017, pp. 1–8.
  105. J. Karolus, P. W. Woźniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), New York, NY, USA, 2017, pp. 2998–3010.
  106. L. Chuang, C. Glatz, and S. Krupenia, “Using EEG to understand why behavior to auditory in-vehicle notifications differs across test environments,” in 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’17), 2017, pp. 123–133.
  107. M. Stoll, D. Maurer, and A. Bruhn, “Variational large displacement optical flow without feature matches,” in Proceedings of the 11th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR). Lecture Notes in Computer Science, 2017, pp. 79–92.
  108. M. Stein et al., “Bring it to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis,” IEEE Transactions on Visualization and Computer Graphics (TVCG), 2017.
  109. M. Burch, M. Hlawatsch, and D. Weiskopf, “Visualizing a sequence of a thousand graphs (or even more),” Computer Graphics Forum (CGF), vol. 36, no. 3, 2017.
  110. J. Iseringhausen et al., “4D Imaging through Spray-On Optics,” ACM Transactions on Graphics (SIGGRAPH 2017), vol. 36, no. 4, pp. 35:1–35:11, 2017.
  111. J. Kratt, F. Eisenkeil, M. Spicker, Y. Wang, D. Weiskopf, and O. Deussen, “Structure-aware Stylization of Mountainous Terrains,” in Vision, Modeling & Visualization, 2017.
  112. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  113. U. Gadiraju et al., “Crowdsourcing versus the laboratory: Towards human-centered experiments using the crowd,” in Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, Lecture Notes in Computer Science, vol. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 7–30.
  114. D. Jäckle, M. Hund, M. Behrisch, D. A. Keim, and T. Schreck, “Pattern Trails: Visual Analysis of Pattern Transitions in Subspaces,” in IEEE Conference on Visual Analytics Science and Technology (VAST), 2017.
  115. P. Tutzauer, S. Becker, and N. Haala, “Perceptual rules for building enhancements in 3d virtual worlds,” i-com, vol. 16, no. 3, pp. 205–213, 2017.
  116. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications. ETVIS 2015, M. Burch, L. Chuang, B. Fisher, A. Schmidt, and D. Weiskopf, Eds. Springer International Publishing, 2017.
  117. C. Schulz, N. Rodrigues, K. Damarla, A. Henicke, and D. Weiskopf, “Visual exploration of mainframe workloads,” in Proceedings of the SIGGRAPH Asia 2017 Symposium on Visualization, 2017, Article 4.
  118. L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen arrangements and interaction areas for large display work places,” in Proceedings of the 5th ACM International Symposium on Pervasive Displays (PerDis ’16), 2016, pp. 228–234.
  119. M. Hund et al., “Visual analytics for concept exploration in subspaces of patient groups,” Brain Informatics, vol. 3, no. 4, pp. 233–247, 2016.
  120. R. Netzel and D. Weiskopf, “Hilbert attention maps for visualizing spatiotemporal gaze data,” in Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS), 2016, pp. 21–25.
  121. C. Schätzle and D. Sacha, “Visualizing Language Change: Dative Subjects in Icelandic,” in Proceedings of the LREC 2016 Workshop “VisLRII: Visualization as Added Value in the Development, Use and Evaluation of Language Resources,” 2016, pp. 8–15.
  122. S. Frey and T. Ertl, “Auto-tuning intermediate representations for in situ visualization,” in Scientific Data Summit (NYSDS), 2016, pp. 1–10.
  123. J. Karolus, P. W. Woźniak, and L. L. Chuang, “Towards Using Gaze Properties to Detect Language Proficiency,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI ’16), New York, NY, USA, 2016, no. 118, p. 6.
  124. I. Zingman, D. Saupe, O. Penatti, and K. Lambers, “Detection of Fragmented Rectangular Enclosures in Very High Resolution Remote Sensing Images,” IEEE Transactions on Geoscience and Remote Sensing, 2016.
  125. D. Maurer, Y.-C. Ju, M. Breuß, and A. Bruhn, “Combining shape from shading and stereo: a variational approach for the joint estimation of depth, illumination and albedo,” in Proceedings of the British Machine Vision Conference (BMVC), 2016, pp. 76:1–76:14.
  126. K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in Proceedings of the Symposium on Eye Tracking Research & Applications, 2016, vol. 1, pp. 11–18.
  127. D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li, “Crowd workers proven useful: A comparative study of subjective video quality assessment,” 8th International Conference on Quality of Multimedia Experience (QoMEX 2016), Lisbon, Portugal, 2016.
  128. C. Schulz et al., “Generative data models for validation and evaluation of visualization techniques,” in BELIV ’16: Beyond Time And Errors: Novel Evaluation Methods For Visualization, 2016, pp. 112–124.
  129. D. Weiskopf, M. Burch, L. L. Chuang, B. Fisher, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016.
  130. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual Analysis and Coding of Data-Rich User Behavior,” 2016.
  131. J. Zagermann, U. Pfeil, R. Rädle, H.-C. Jetter, C. Klokmose, and H. Reiterer, “When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets,” 2016.
  132. K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf, “Gaze Stripes: Image-Based Visualization of Eye Tracking Data,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 22, no. 1, pp. 1005–1014, 2016.
  133. J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring Cognitive Load using Eye Tracking Technology in Visual Computing,” in Proceedings of the Sixth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV 2016), 2016, pp. 78–85.
  134. R. Netzel, M. Burch, and D. Weiskopf, “Interactive Scanpath-Oriented Annotation of Fixations,” Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, 2016.
  135. L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze, “MAGIC-Pointing on Large High-Resolution Displays,” in Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16), 2016, pp. 1706–1712.
  136. J. Müller, R. Rädle, and H. Reiterer, “Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, ACM, 2016.
  137. A. Nocaj, M. Ortmann, and U. Brandes, “Adaptive Disentanglement based on Local Clustering in Small-World Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 6, pp. 1662–1671, 2016.
  138. T. Dingler, R. Rzayev, V. Schwind, and N. Henze, “RSVP on the go - Implicit Reading Support on Smart Watches Through Eye Tracking,” in Proceedings of the 2016 ACM International Symposium on Wearable Computers (ISWC ’16), New York, New York, USA, 2016, pp. 116–119.
  139. P. Tutzauer, S. Becker, T. Niese, O. Deussen, and D. Fritsch, “Understanding Human Perception of Building Categories in Virtual 3D Cities – A User Study,” ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B2, pp. 683–687, 2016.
  140. V. Hosu, F. Hahn, O. Wiedemann, S.-H. Jung, and D. Saupe, “Saliency-driven image coding improves overall perceived JPEG quality,” in Picture Coding Symposium (PCS), 2016.
  141. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Steering Demands Diminish the Early-P3, Late-P3 and RON Components of the Event-Related Potential of Task-Irrelevant Environmental Sounds,” Frontiers in Human Neuroscience, vol. 10, Article 73, 2016.
  142. M. Herschel and M. Hlawatsch, “Provenance: On and Behind the Screens,” in ACM International Conference on the Management of Data (SIGMOD), 2016, pp. 2213–2217.
  143. J. Hildenbrand, A. Nocaj, and U. Brandes, “Flexible level-of-detail rendering for large graphs,” in Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016), 2016, vol. LNCS 9801, pp. 625–627.
  144. P. Tutzauer, S. Becker, D. Fritsch, T. Niese, and O. Deussen, “A Study of the Human Comprehension of Building Categories Based on Different 3D Building Representations,” Photogrammetrie – Fernerkundung – Geoinformation, vol. 2016, no. 5–6, pp. 319–333, 2016.
  145. M. Hund et al., “Visual Quality Assessment of Subspace Clusterings,” in KDD 2016 Interactive Data Exploration and Analytics (IDEA), 2016.
  146. V. Hosu, F. Hahn, I. Zingman, and D. Saupe, “Reported Attention as a Promising Alternative to Gaze in IQA Tasks,” 5th International Workshop on Perceptual Quality of Systems 2016 (PQS 2016), Berlin, 2016.
  147. A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller, “Multi-similarity matrices of eye movement data,” in 2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS), 2016, pp. 26–30.
  148. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum (CGF), vol. 36, no. 8, pp. 153–165, 2016.
  149. V. Schwind and S. Jäger, “The Uncanny Valley and the Importance of Eye Contact,” i-com, vol. 15, no. 1, pp. 93–104, 2016.
  150. V. Bruder, S. Frey, and T. Ertl, “Real-Time Performance Prediction and Tuning for Interactive Volume Raycasting,” in SIGGRAPH Asia 2016 Symposium on Visualization, 2016, pp. 7:1–7:8.
  151. O. Johannsen, A. Sulc, N. Marniok, and B. Goldluecke, “Layered scene reconstruction from multiple light field camera views,” 2016.
  152. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2016.
  153. N. Flad, J. Ditz, H. H. Bülthoff, and L. L. Chuang, “Data-driven approaches to unrestricted gaze-tracking benefit from saccade filtering,” Second Workshop on Eye Tracking and Visualization, IEEE Visualization 2016, 2016.
  154. K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-Based Visualization,” Computing in Science & Engineering, vol. 17, no. 5, pp. 64–71, 2015.
  155. C. Schulz, M. Burch, and D. Weiskopf, “Visual Data Cleansing of Eye Tracking Data,” in Eye Tracking and Visualization (Proceedings of ETVIS 2015), 2015.
  156. L. L. Chuang and H. H. Bülthoff, “Towards a Better Understanding of Gaze Behavior in the Automobile,” in Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interactions, in conjunction with AutomotiveUI 2015, 2015.
  157. S. Frey, F. Sadlo, and T. Ertl, “Balanced sampling and compression for remote visualization,” in SIGGRAPH Asia 2015 Visualization in High Performance Computing, 2015, pp. 1:1–1:4.
  158. N. Flad, T. Fomina, H. H. Bülthoff, and L. L. Chuang, “Unsupervised clustering of EOG as a viable substitute for optical eye-tracking,” First Workshop on Eye Tracking and Visualization at IEEE Visualization, 2015.
  159. M. Hund et al., “Subspace Nearest Neighbor Search - Problem Statement, Approaches, and Discussion,” in Similarity Search and Applications, vol. 1, no. 9371, G. Amato, R. Connor, F. Falchi, and C. Gennaro, Eds. Springer International Publishing, 2015, pp. 307–313.
  160. M. Spicker, J. Kratt, D. Arellano, and O. Deussen, “Depth-Aware Coherent Line Drawings,” in SIGGRAPH Asia 2015 Technical Briefs, ACM, 2015.
  161. K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,” Information Visualization, vol. 15, no. 4, pp. 340–358, 2015.