1. J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D scalar fields,” IEEE Transactions on Visualization and Computer Graphics, 2019.
  2. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-based aspect ratio selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, 2019.
  3. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2019.
  4. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow Estimation,” in Proceedings of the British Machine Vision Conference (BMVC), 2018.
  5. M. Klapperstueck et al., “Contextuwall: multi-site collaboration using display walls,” Journal of Visual Languages & Computing, vol. 46, pp. 35–42, 2018.
  6. T. Castermans, M. van Garderen, W. Meulemans, M. Nöllenburg, and X. Yuan, “Short plane supports for spatial hypergraphs,” 2018.
  7. D. Varga, T. Szirányi, and D. Saupe, “DeepRN: a content preserving deep architecture for blind image quality assessment,” 2018.
  8. V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an Uncanny Valley of Virtual Animals? A Quantitative and Qualitative Investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018.
  9. C. Glatz, S. Krupenia, H. Bülthoff, and L. Chuang, “Use the right sound for the right job: verbal commands and auditory icons for a task-management system favor different information processes in the brain,” in CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–13.
  10. P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical keyboards in virtual reality: analysis of typing performance and effects of avatar hands,” in CHI Conference on Human Factors in Computing Systems, 2018, pp. 345:1–345:9.
  11. Y. Zhu et al., “Genome-scale metabolic modeling of responses to polymyxins in Pseudomonas aeruginosa,” GigaScience, vol. 7, no. 4, 2018.
  12. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, pp. 513–524, 2018.
  13. J. Zagermann, U. Pfeil, and H. Reiterer, “Studying Eye Movements As A Basis For Measuring,” in Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems (CHI ’18 Extended Abstracts), 2018.
  14. V. Hosu, H. Lin, and D. Saupe, “Expertise screening in crowdsourcing image quality,” 2018.
  15. J. Goertler, C. Schulz, O. Deussen, and D. Weiskopf, “Bubble Treemaps for Uncertainty Visualization,” IEEE Transactions on Visualization and Computer Graphics, 2018.
  16. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018.
  17. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Auditory task irrelevance: A basis for inattentional deafness,” Human Factors: The Journal of the Human Factors and Ergonomics Society, pp. 1–13, 2018.
  18. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural networks for the classification of building use from street-view imagery,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, pp. 177–184, 2018.
  19. N. Marniok and B. Goldluecke, “Real-time Variational Range Image Fusion and Visualization for Large-Scale Scenes using GPU Hash Tables,” in IEEE Winter Conf. on Applications of Computer Vision (WACV), 2018.
  20. J. Karolus, H. Schuff, T. Kosch, P. Wozniak, and A. Schmidt, “EMGuitar: Assisting Guitar Playing with Electromyography,” in Designing Interactive Systems Conference, 2018, pp. 651–655.
  21. S. Oppold and M. Herschel, “Provenance for entity resolution,” in Proceedings of the International Provenance and Annotation Workshop, 2018, pp. 226–230.
  22. D. Maurer, N. Marniok, B. Goldlücke, and A. Bruhn, “Structure-from-Motion aware PatchMatch for Adaptive Optical Flow Estimation,” in Proceedings of the European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, no. 11212. Springer, 2018.
  23. J. Görtler, R. Kehlbeck, and O. Deussen, “A visual exploration of Gaussian processes,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), 2018.
  24. M. de Ridder, K. Klein, and J. Kim, “A review and outlook on visual analytics for uncertainties in functional magnetic resonance imaging,” Brain Informatics, vol. 5, no. 2, p. 5, 2018.
  25. L. J. Debbeler, M. Gamp, M. Blumenschein, D. A. Keim, and B. Renner, “Polarized but illusory beliefs about tap and bottled water: A product- and consumer-oriented survey and blind tasting experiment,” 2018.
  26. T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing consistent gestures across device types: eliciting RSVP controls for phone, watch, and glasses,” in CHI Conference on Human Factors in Computing Systems, 2018, pp. 419:1–419:12.
  27. T. Spinner, J. Körner, J. Görtler, and O. Deussen, “Towards an interpretable latent space,” in Workshop Vis. for AI Explainability (VISxAI), IEEE VIS Berlin, 2018.
  28. M. Behrisch et al., “Quality Metrics for Information Visualization,” EuroVis STAR, 2018.
  29. T. Kosch, M. Funk, A. Schmidt, and L. L. Chuang, “Identifying cognitive assistance with mobile electroencephalography: a case study with in-situ projections for manual assembly,” Proceedings of the ACM on Human-Computer Interaction, vol. 2, no. EICS, p. 11, 2018.
  30. C. Schulz, K. Schatz, M. Krone, M. Braun, T. Ertl, and D. Weiskopf, “Uncertainty Visualization for Secondary Structures of Proteins,” in IEEE Pacific Visualization Symposium, 2018, pp. 96–105.
  31. S. Borojeni, S. Boll, W. Heuten, H. Bülthoff, and L. Chuang, “Feel the movement: real motion influences responses to take-over requests in highly automated vehicles,” 2018, pp. 1–13.
  32. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” 2018.
  33. K. Marriott et al., Immersive Analytics. Springer, 2018.
  34. A. Nesti, G. Rognini, B. Herbelin, H. H. Bülthoff, L. L. Chuang, and O. Blanke, “Modulation of vection latencies in the full-body illusion,” PLoS One, 2018.
  35. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-based large dynamic graph analytics,” 2018.
  36. O. Wiedemann, V. Hosu, H. Lin, and D. Saupe, “Disregarding the big picture: towards local image quality assessment,” 2018.
  37. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime quality assessment of iris biometrics under visible light,” IEEE Comp. Soc. Workshop on Biometrics, pp. 556–565, 2018.
  38. C. Glatz and L. L. Chuang, “The time course of auditory looming cues in redirecting visuo-spatial attention,” Scientific Reports, 2018.
  39. C. Schulz, A. Zeyfang, M. van Garderen, H. Ben Lahmar, M. Herschel, and D. Weiskopf, “Simultaneous Visual Analysis of Multiple Software Hierarchies,” in 2018 IEEE Working Conference on Software Visualization (VISSOFT), 2018, pp. 87–95.
  40. N. Rodrigues, R. Netzel, J. Spalink, and D. Weiskopf, “Multiscale scanpath visualization and filtering,” in Workshop on Eye Tracking and Visualization, 2018, no. 2.
  41. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision (IJCV), 2018.
  42. H. Men, H. Lin, and D. Saupe, “Spatiotemporal feature combination model for no-reference video quality assessment,” 2018.
  43. H. Ben Lahmar, M. Herschel, M. Blumenschein, and D. A. Keim, “Provenance-based visual data exploration with EVLIN,” in Proceedings of the International Conference on Extending Database Technology, 2018, pp. 686–689.
  44. M. de Ridder, K. Klein, and J. Kim, “TemporalTracks: visual analytics for exploration of 4D fMRI time-series coactivation,” in Proceedings of the Computer Graphics International Conference, 2017, pp. 13:1–13:6.
  45. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” in Informatics, 2017, vol. 4, no. 3, p. 27.
  46. V. Schwind, K. Wolf, and N. Henze, “FaceMaker - A Procedural Face Generator to Foster Character Design Research,” in Game Dynamics: Best Practices in Procedural and Dynamic Game Content Generation, O. Korn and N. Lee, Eds. Cham: Springer International Publishing, 2017, pp. 95–113.
  47. R. Netzel, M. Hlawatsch, M. Burch, S. Balakrishnan, H. Schmauder, and D. Weiskopf, “An Evaluation of Visual Search Support in Maps,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 421–430, 2017.
  48. H. Booth, C. Schätzle, K. Börjars, and M. Butt, “Dative subjects and the rise of positional licensing in Icelandic,” in LFG17 Conference, 2017, pp. 104–124.
  49. P. Tutzauer and N. Haala, “Processing of crawled urban imagery for building use classification,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci, XLII-1/W1, pp. 143–149, 2017.
  50. V. Hosu et al., “The Konstanz natural video database (KoNViD-1k),” in 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
  51. K. Kurzhals, M. Hlawatsch, C. Seeger, and D. Weiskopf, “Visual Analytics for Mobile Eye Tracking,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, 2017.
  52. M. Krone et al., “Molecular Surface Maps,” IEEE Transactions on Visualization and Computer Graphics (Proceedings of the Scientific Visualization 2016), vol. 23, no. 1, 2017.
  53. R. Netzel, B. Ohlhausen, K. Kurzhals, R. Woods, M. Burch, and D. Weiskopf, “User performance and reading strategies for metro maps: An eye tracking study,” Spatial Cognition and Computation, vol. 17, no. 1–2, pp. 39–64, 2017.
  54. D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” in Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Best Student Paper Award), 2017, no. 3, pp. 164–175.
  55. V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze, “‘These Are Not My Hands!’: Effect of Gender on the Perception of Avatar Hands in Virtual Reality,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), New York, NY, USA, 2017, pp. 1577–1582.
  56. A. Nesti, K. de Winkel, and H. Bülthoff, “Accumulation of inertial sensory information in the perception of whole body yaw rotation,” PLoS One, 2017.
  57. D. Sacha et al., “SOMFlow: Guided exploratory cluster analysis with self-organizing maps and analytic provenance,” IEEE Conference on Visual Analytics Science and Technology, 2017.
  58. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 921–930, 2017.
  59. H. T. Nim et al., “Design Considerations for Immersive Analytics of Bird Movements Obtained by Miniaturised GPS Sensors,” Eurographics Workshop on Visual Computing for Biology and Medicine, 2017.
  60. O. Deussen, M. Spicker, and Q. Zheng, “Weighted Linde-Buzo-Gray Stippling,” ACM Transactions on Graphics, vol. 36, no. 6, pp. 233:1–233:12, 2017.
  61. K. Kurzhals, E. Çetinkaya, Y. Hu, W. Wang, and D. Weiskopf, “Close to the Action: Eye-Tracking Evaluation of Speaker-Following Subtitles,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017.
  62. K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural correlates of decision making on whole body yaw rotation: an fNIRS study,” Neuroscience Letters, 2017.
  63. C. Schätzle, M. Hund, F. L. Dennig, M. Butt, and D. A. Keim, “HistoBankVis: Detecting Language Change via Data Visualization,” in Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language (NEALT Proceedings Series 32), 2017.
  64. D. Maurer, M. Stoll, S. Volz, P. Gairing, and A. Bruhn, “A comparison of isotropic and anisotropic second order regularisers for optical flow,” in International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), Lecture Notes in Computer Science, vol. 10302, Berlin, 2017, pp. 537–549.
  65. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive and illumination-aware variational optical flow refinement,” in British Machine Vision Conference (BMVC), 2017.
  66. P. Knierim et al., “Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality Through Quadcopters,” in Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17), 2017, pp. 433–436.
  67. M. Stoll, D. Maurer, S. Volz, and A. Bruhn, “Illumination-Aware Large Displacement Optical Flow,” in Proceedings of International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR). Lecture Notes in Computer Science, 2017.
  68. S. Egger-Lampl et al., “Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions,” in Evaluation in the Crowd: Crowdsourcing and Human-Centered Experiments, Information Systems and Applications, incl. Internet/Web, and HCI, vol. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 173–212.
  69. K. Kurzhals, E. Cetinkaya, Y. Hu, W. Wang, and D. Weiskopf, “Close to the action: eye-tracking evaluation of speaker-following subtitles,” 2017, pp. 6559–6568.
  70. C. Schulz, A. Nocaj, J. Goertler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, 2017.
  71. J. Allsop, R. Gray, H. Bülthoff, and L. Chuang, “Eye movement planning on single-sensor-single-indicator displays is vulnerable to user anxiety and cognitive load,” Journal of Eye Movement Research, vol. 10, no. 5:8, pp. 1–15, 2017.
  72. M. Behrisch et al., “Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 31–40, 2017.
  73. V. Schwind, P. Knierim, L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in Proceedings of the 2017 CHI Conference on Computer-Human Interaction in Play, New York, NY, USA, 2017, vol. CHI PLAY’17, p. 6.
  74. K. Srulijes et al., “Visualization of eye-head coordination while walking in healthy subjects and patients with neurodegenerative diseases,” 2017.
  75. O. Johannsen, “A taxonomy and evaluation of dense light field depth estimation algorithms,” in Workshop on Light Fields for Computer Vision, 2017.
  76. H. Ben Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  77. M. van Garderen, B. Pampel, A. Nocaj, and U. Brandes, “Minimum-Displacement Overlap Removal for Geo-referenced Data Visualization,” Computer Graphics Forum, vol. 36, no. 3, pp. 423–433, 2017.
  78. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” in EuroVis 2017 - Posters, 2017.
  79. L. Merino et al., “On the Impact of the Medium in the Effectiveness of 3D Software Visualizations,” in VISSOFT’17: Proceedings of the 5th IEEE Working Conference on Software Visualization, 2017.
  80. C. Schätzle, “Genitiv als Stilmittel in der Novelle,” Scalable Reading. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), vol. 47, no. 1, pp. 8–15, 2017.
  81. M. Stoll, S. Volz, D. Maurer, and A. Bruhn, “A time-efficient optimisation framework for parameters of optical flow methods,” in Scandinavian Conference on Image Analysis (SCIA), Lecture Notes in Computer Science, vol. 10269, Berlin, 2017, pp. 41–53.
  82. K. Kurzhals, M. Stoll, A. Bruhn, and D. Weiskopf, “FlowBrush: Optical Flow Art,” in Proceedings of Computational Aesthetics 2017, 2017.
  83. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive regularisation for variational optical flow: global, local and in between,” in International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), Lecture Notes in Computer Science, vol. 10302, Berlin, 2017, pp. 550–562.
  84. J. Zagermann, U. Pfeil, C. Acevedo, and H. Reiterer, “Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-Device Workspace,” in Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia (MUM), 2017.
  85. J. Zagermann, U. Pfeil, D. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-Based Input Modalities on Spatial Memory,” 2017.
  86. N. Rodrigues et al., “Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants,” in VINCI 2017, 2017.
  87. N. Rodrigues and D. Weiskopf, “Nonlinear Dot Plots,” IEEE Transactions on Visualization and Computer Graphics, 2017.
  88. N. Rodrigues, M. Burch, L. Di Silvestro, and D. Weiskopf, “A Visual Analytics Approach for Word Relevances in Multiple Texts,” in IV 2017, 2017.
  89. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of NPAR’17, 2017.
  90. M. Herschel, R. Diestelkämper, and H. Ben Lahmar, “A survey on provenance - What for? What form? What from?,” The VLDB Journal, 2017.
  91. H. V. Le, V. Schwind, P. Göttlich, and N. Henze, “PredicTouch: A System to Reduce Touchscreen Latency using Neural Networks and Inertial Measurement Units,” in Proceedings of the 2017 International Conference on Interactive Surfaces and Spaces, 2017, vol. 17, pp. 230–239.
  92. N. Marniok, O. Johannsen, and B. Goldluecke, “An Efficient Octree Design for Local Variational Range Image Fusion,” in German Conference on Pattern Recognition (Proc. GCPR), 2017.
  93. D. Fritsch, “Photogrammetrische Auswertung digitaler Bilder – Neue Methoden der Kamerakalibration, dichten Bildzuordnung und Interpretation von Punktwolken” [Photogrammetric analysis of digital images – new methods for camera calibration, dense image matching, and point cloud interpretation], Photogrammetrie und Fernerkundung, pp. 157–196, 2017.
  94. M. A. Baazizi, H. Ben Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Conference on Extending Database Technology (EDBT), 2017, pp. 222–233.
  95. P. Gralka, C. Schulz, G. Reina, D. Weiskopf, and T. Ertl, “Visual Exploration of Memory Traces and Call Stacks,” in 2017 IEEE Working Conference on Software Visualization (VISSOFT), 2017, pp. 54–63.
  96. R. Netzel, J. Vuong, U. Engelke, S. O’Donoghue, D. Weiskopf, and J. Heinrich, “Comparative eye-tracking evaluation of scatterplots and parallel coordinates,” Visual Informatics, vol. 1, no. 2, pp. 118–131, 2017.
  97. G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” Eurographics Symposium on Parallel Graphics and Visualization, 2017.
  98. D. Fritsch and M. Klein, “3D and 4D modeling for AR and VR app developments,” in 23rd International Conference on Virtual System & Multimedia, 2017, pp. 1–8.
  99. J. Karolus, P. W. Woźniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), New York, NY, USA, 2017, pp. 2998–3010.
  100. L. Chuang, C. Glatz, and S. Krupenia, “Using EEG to understand why behavior to auditory in-vehicle notifications differs across test environments,” in 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2017, pp. 123–133.
  101. M. Stoll, D. Maurer, and A. Bruhn, “Variational Large Displacement Optical Flow without Feature Matches,” in Proceedings of International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR). Lecture Notes in Computer Science, 2017.
  102. M. Stein et al., “Bring it to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis,” IEEE Transactions on Visualization and Computer Graphics (Proceedings of the Visual Analytics Science and Technology), 2017.
  103. M. Burch, M. Hlawatsch, and D. Weiskopf, “Visualizing a Sequence of a Thousand Graphs (or Even More),” Computer Graphics Forum, vol. 36, no. 3, 2017.
  104. J. Iseringhausen et al., “4D Imaging through Spray-On Optics,” ACM Transactions on Graphics (SIGGRAPH 2017), vol. 36, no. 4, pp. 35:1–35:11, 2017.
  105. J. Kratt, F. Eisenkeil, M. Spicker, Y. Wang, D. Weiskopf, and O. Deussen, “Structure-aware Stylization of Mountainous Terrains,” in Vision, Modeling & Visualization, 2017.
  106. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  107. U. Gadiraju et al., “Crowdsourcing versus the laboratory: Towards human-centered experiments using the crowd,” in Information Systems and Applications, incl. Internet/Web, and HCI, vol. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, no. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 7–30.
  108. D. Jäckle, M. Hund, M. Behrisch, D. A. Keim, and T. Schreck, “Pattern Trails: Visual Analysis of Pattern Transitions in Subspaces,” in IEEE Conference on Visual Analytics Science and Technology (VAST), 2017.
  109. P. Tutzauer, S. Becker, and N. Haala, “Perceptual rules for building enhancements in 3d virtual worlds,” i-com, vol. 16, no. 3, pp. 205–213, 2017.
  110. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications. ETVIS 2015, M. Burch, L. Chuang, B. Fisher, A. Schmidt, and D. Weiskopf, Eds. Springer International Publishing, 2017.
  111. C. Schulz, N. Rodrigues, K. Damarla, A. Henicke, and D. Weiskopf, “Visual Exploration of Mainframe Workloads,” in SA ’17 Symposium on Visualization, 2017.
  112. L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen arrangements and interaction areas for large display work places,” in PerDis ’16 Proceedings of the 5th ACM International Symposium on Pervasive Displays, 2016, vol. 5, pp. 228–234.
  113. M. Hund et al., “Visual analytics for concept exploration in subspaces of patient groups,” Brain Informatics, vol. 3, no. 4, pp. 233–247, 2016.
  114. R. Netzel and D. Weiskopf, “Hilbert Attention Maps for Visualizing Spatiotemporal Gaze Data,” 2016.
  115. C. Schätzle and D. Sacha, “Visualizing Language Change: Dative Subjects in Icelandic,” in Proceedings of the Language Resources and Evaluation Conference 2016 (Workshop “VisLR II: Visualization as Added Value in the Development, Use and Evaluation of Language Resources”), 2016, pp. 8–15.
  116. S. Frey and T. Ertl, “Auto-tuning intermediate representations for in situ visualization,” in 2016 New York Scientific Data Summit (NYSDS), 2016, pp. 1–10.
  117. J. Karolus, P. W. Woźniak, and L. L. Chuang, “Towards Using Gaze Properties to Detect Language Proficiency,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI ’16), New York, NY, USA, 2016, no. 118, p. 6.
  118. M. Behrisch et al., “Magnostics: Image-based Search of Interesting Matrix Views for Guided Network Exploration,” 2016, vol. 23, no. 1.
  119. I. Zingman, D. Saupe, O. Penatti, and K. Lambers, “Detection of Fragmented Rectangular Enclosures in Very High Resolution Remote Sensing Images,” 2016.
  120. D. Maurer, Y.-C. Ju, M. Breuß, and A. Bruhn, “Combining shape from shading and stereo: a variational approach for the joint estimation of depth, illumination and albedo,” in Proceedings of the British Machine Vision Conference (BMVC), 2016.
  121. K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in Proceedings of the Symposium on Eye Tracking Research & Applications, 2016, vol. 1.
  122. D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li, “Crowd workers proven useful: A comparative study of subjective video quality assessment,” 8th International Conference on Quality of Multimedia Experience (QoMEX 2016), Lisbon, Portugal, 2016.
  123. A. Voit, T. Machulla, D. Weber, V. Schwind, S. Schneegass, and N. Henze, in Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI ’16), 2016, pp. 942–947.
  124. C. Schulz et al., “Generative Data Models for Validation and Evaluation of Visualization Techniques,” in BELIV ’16: Beyond Time And Errors: Novel Evaluation Methods For Visualization, 2016.
  125. D. Weiskopf, M. Burch, L. L. Chuang, B. Fischer, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016.
  126. K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf, “Gaze Stripes: Image-Based Visualization of Eye Tracking Data,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, 2016.
  127. J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring Cognitive Load using Eye Tracking Technology in Visual Computing,” in Proceedings of the Sixth Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization (BELIV 2016), 2016, pp. 78–85.
  128. R. Netzel, M. Burch, and D. Weiskopf, “Interactive Scanpath-Oriented Annotation of Fixations,” Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, 2016.
  129. L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze, “MAGIC-Pointing on Large High-Resolution Displays,” in CHI EA ’16 Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, 2016, pp. 1706–1712.
  130. A. Hautli-Janisz and V. Lyding, “VisLR II: Visualization as Added Value in the Development, Use and Evaluation of Language Resources,” in Proceedings of the Language Resources and Evaluation Conference 2016, 2016.
  131. A. Nocaj, M. Ortmann, and U. Brandes, “Adaptive Disentanglement based on Local Clustering in Small-World Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 6, pp. 1662–1671, 2016.
  132. T. Dingler, R. Rzayev, V. Schwind, and N. Henze, “RSVP on the go - Implicit Reading Support on Smart Watches Through Eye Tracking,” in Proceedings of the 2016 ACM International Symposium on Wearable Computers - ISWC ’16, New York, New York, USA, 2016, pp. 116–119.
  133. P. Tutzauer, S. Becker, T. Niese, O. Deussen, and D. Fritsch, “Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B2, pp. 683–687, 2016.
  134. V. Hosu, F. Hahn, O. Wiedemann, S.-H. Jung, and D. Saupe, “Saliency-driven image coding improves overall perceived JPEG quality,” in Picture Coding Symposium (PCS), 2016.
  135. M. Herschel and M. Hlawatsch, “Provenance: On and Behind the Screens,” in ACM International Conference on the Management of Data (SIGMOD), 2016, pp. 2213–2217.
  136. J. Hildenbrand, A. Nocaj, and U. Brandes, “Flexible Level-of-Detail Rendering for Large Graphs,” in Graph Drawing and Network Visualization: 24th International Symposium, Lecture Notes in Computer Science, no. 9801, 2016.
  137. P. Tutzauer, S. Becker, D. Fritsch, T. Niese, and O. Deussen, “A Study of the Human Comprehension of Building Categories Based on Different 3D Building Representations,” Photogrammetrie - Fernerkundung - Geoinformation, vol. 2016, no. 5–6, pp. 319–333, 2016.
  138. V. Hosu, F. Hahn, I. Zingman, and D. Saupe, “Reported Attention as a Promising Alternative to Gaze in IQA Tasks,” 5th International Workshop on Perceptual Quality of Systems 2016 (PQS 2016), Berlin, 2016.
  139. A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller, “Multi-Similarity Matrices of Eye Movement Data,” 2016.
  140. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” in Computer Graphics Forum, 2016.
  141. V. Schwind and S. Jäger, “The Uncanny Valley and the Importance of Eye Contact,” i-com, vol. 15, no. 1, pp. 93–104, 2016.
  142. V. Bruder, S. Frey, and T. Ertl, “Real-Time Performance Prediction and Tuning for Interactive Volume Raycasting,” in SIGGRAPH Asia 2016 Symposium on Visualization, 2016, no. 7.
  143. O. Johannsen, A. Sulc, N. Marniok, and B. Goldluecke, “Layered scene reconstruction from multiple light field camera views,” 2016.
  144. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2016.
  145. N. Flad, J. Ditz, H. H. Bülthoff, and L. L. Chuang, “Data-driven approaches to unrestricted gaze-tracking benefit from saccade filtering,” Second Workshop on Eye Tracking and Visualization, IEEE Visualization 2016, 2016.
  146. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual Analysis and Coding of Data-Rich User Behavior,” 2016.
  147. J. Zagermann, U. Pfeil, R. Rädle, H.-C. Jetter, C. Klokmose, and H. Reiterer, “When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets,” 2016.
  148. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Extended Papers of ETVIS 2015, 2016.
  149. J. Müller, R. Rädle, and H. Reiterer, Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load. ACM, 2016.
  150. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Steering Demands Diminish the Early-P3, Late-P3 and RON Components of the Event-Related Potential of Task-Irrelevant Environmental Sounds,” Frontiers in Human Neuroscience, vol. 10, no. 73, 2016.
  151. M. Hund et al., “Visual Quality Assessment of Subspace Clusterings,” in KDD 2016 Interactive Data Exploration and Analytics (IDEA), 2016.
  152. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” 2016, pp. 195–202.
  153. K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-Based Visualization,” Computing in Science & Engineering, vol. 17, no. 5, 2015.
  154. C. Schulz, M. Burch, and D. Weiskopf, “Visual Data Cleansing of Eye Tracking Data,” in Eye Tracking and Visualization (Proceedings of ETVIS 2015), 2015.
  155. L. L. Chuang and H. H. Bülthoff, “Towards a Better Understanding of Gaze Behavior in the Automobile,” in Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interactions, in conjunction with AutomotiveUI 2015, 2015.
  156. S. Frey, F. Sadlo, and T. Ertl, “Balanced sampling and compression for remote visualization,” in SIGGRAPH Asia 2015 Visualization in High Performance Computing, 2015, pp. 1:1–1:4.
  157. L. Lischke, P. Knierim, and H. Klinke, “Mid-Air Gestures for Window Management on Large Displays,” in Mensch und Computer 2015 - Tagungsband, Berlin, München, Boston, 2015, pp. 439–442.
  158. M. Hund et al., “Subspace Nearest Neighbor Search - Problem Statement, Approaches, and Discussion - Position Paper,” in Similarity Search and Applications (SISAP), 2015, vol. 9371, pp. 307–313.
  159. L. L. Chuang, “Error visualization and information-seeking behavior for air-vehicle control,” Foundations of Augmented Cognition, Lecture Notes in Artificial Intelligence, vol. 9183, pp. 3–11, 2015.
  160. L. Lischke, J. Grüninger, K. Klouche, A. Schmidt, P. Slusallek, and G. Jacucci, “Interaction Techniques for Wall-Sized Screens,” in Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces - ITS ’15, 2015, pp. 501–504.
  161. T. Chandler et al., “Immersive Analytics,” in BDVA, 2015, pp. 73–80.
  162. N. Flad, T. Fomina, H. H. Bülthoff, and L. L. Chuang, “Unsupervised clustering of EOG as a viable substitute for optical eye-tracking,” First Workshop on Eye Tracking and Visualization at IEEE Visualization, 2015.
  163. K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,” Information Visualization, 2015.
  164. K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-Based Visualization,” Computing in Science and Engineering, vol. 17, no. 5, pp. 64–71, 2015.
  165. M. Hund et al., “Subspace Nearest Neighbor Search - Problem Statement, Approaches, and Discussion,” in Similarity Search and Applications, vol. 1, no. 9371, G. Amato, R. Connor, F. Falchi, and C. Gennaro, Eds. Springer International Publishing, 2015, pp. 307–313.
  166. M. Spicker, J. Kratt, D. Arellano, and O. Deussen, Depth-Aware Coherent Line Drawings. ACM, 2015.
  167. H. Rohn et al., “VANTED v2: a framework for systems biology applications,” BMC Systems Biology, vol. 6, no. 1, p. 139, 2012.