1. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying visual abstraction quality for computer-generated illustrations,” ACM Transactions on Applied Perception (TAP), to appear.
  2. V. Bruder, C. Müller, S. Frey, and T. Ertl, “On evaluating runtime performance of interactive visualizations,” IEEE Transactions on Visualization and Computer Graphics, to appear.
  3. A. Voit, S. Mayer, V. Schwind, and N. Henze, “Online, VR, AR, Lab, and In-Situ: Comparison of Research Methods to Evaluate Smart Artifacts,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 2019, to appear.
  4. V. Schwind, N. Deierlein, R. Poguntke, and N. Henze, “Understanding the Social Acceptability of Mobile Devices using the Stereotype Content Model,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 2019, to appear.
  5. S. Jaeger et al., “Challenges for Brain Data Analysis in VR Environments,” in Proceedings of IEEE PacificVis, 2019, to appear.
  6. H. V. Le, S. Mayer, and N. Henze, “Investigating the Feasibility of Finger Identification on Capacitive Touchscreens using Deep Learning,” in Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19), 2019, to appear.
  7. Y. Wang, Z. Wang, C.-W. Fu, H. Schmauder, O. Deussen, and D. Weiskopf, “Image-based aspect ratio selection,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, 2019.
  8. R. Netzel, N. Rodrigues, A. Haug, and D. Weiskopf, “Compensation of simultaneous orientation contrast in superimposed textures,” in Proc. 10th Intl. Conf. Information Vis. Theory Appl. (IVAPP), 2019, to appear.
  9. V. Schwind, P. Knierim, N. Haas, and N. Henze, “Using Presence Questionnaires in Virtual Reality,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 2019, to appear.
  10. H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, 2019.
  11. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow Estimation,” in Proceedings of the British Machine Vision Conference (BMVC), 2018.
  12. A. Hautli-Janisz, C. Rohrdantz, C. Schätzle, A. Stoffel, M. Butt, and D. A. Keim, “Visual analytics in diachronic linguistic investigations,” Linguistic Visualization, 2018.
  13. F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive encoder settings for interactive remote visualization on high-resolution displays,” in Proceedings of the Symposium on Large Data Analysis and Visualization, 2018.
  14. S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, pp. 513–524, 2018.
  15. V. Hosu, H. Lin, and D. Saupe, “Expertise screening in crowdsourcing image quality,” 2018.
  16. C. Müller et al., “Interactive molecular graphics for augmented reality using Hololens,” Journal of Integrative Bioinformatics, vol. 15, no. 2, 2018.
  17. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC), 2018.
  18. D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural networks for the classification of building use from street-view imagery,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, pp. 177–184, 2018.
  19. D. Maurer, N. Marniok, B. Goldlücke, and A. Bruhn, “Structure-from-Motion aware PatchMatch for Adaptive Optical Flow Estimation,” in Proceedings of the European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol. 11212. Springer, 2018.
  20. H. V. Le, T. Kosch, P. Bader, S. Mayer, and N. Henze, “PalmTouch: using the palm as an additional input modality on commodity smartphones,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 360:1–360:13.
  21. J. Görtler, R. Kehlbeck, and O. Deussen, “A visual exploration of Gaussian processes,” in Proceedings of the Workshop on Visualization for AI Explainability (VISxAI), 2018.
  22. H. V. Le, S. Mayer, P. Bader, and N. Henze, “Fingers’ range and comfortable area for one-handed smartphone interaction beyond the touchscreen,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 31:1–31:12.
  23. S. Mayer, L. Lischke, A. Lanksweirt, H. V. Le, and N. Henze, “How to communicate new input techniques,” in Proceedings of the 10th Nordic Conference on Human-Computer Interaction, 2018.
  24. M. Blumenschein et al., “SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach,” 2018.
  25. V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-based large dynamic graph analytics,” 2018.
  26. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime quality assessment of iris biometrics under visible light,” in IEEE Computer Society Workshop on Biometrics, 2018, pp. 556–565.
  27. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision (IJCV), 2018.
  28. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” Informatics, vol. 4, no. 3, p. 27, 2017.
  29. V. Schwind, K. Wolf, and N. Henze, “FaceMaker - A Procedural Face Generator to Foster Character Design Research,” in Game Dynamics: Best Practices in Procedural and Dynamic Game Content Generation, O. Korn and N. Lee, Eds. Cham: Springer International Publishing, 2017, pp. 95–113.
  30. P. Tutzauer and N. Haala, “Processing of crawled urban imagery for building use classification,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. XLII-1/W1, pp. 143–149, 2017.
  31. V. Hosu et al., “The Konstanz natural video database (KoNViD-1k),” in 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
  32. M. Krone et al., “Molecular Surface Maps,” IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 23, no. 1, 2017.
  33. D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” in Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Best Student Paper Award), 2017, no. 3, pp. 164–175.
  34. V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze, “‘These Are Not My Hands!’: Effect of Gender on the Perception of Avatar Hands in Virtual Reality,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Denver, Colorado, USA, 2017, pp. 1577–1582.
  35. A. Nesti, K. de Winkel, and H. Bülthoff, “Accumulation of inertial sensory information in the perception of whole body yaw rotation,” PLOS ONE, 2017.
  36. D. Sacha et al., “SOMFlow: Guided exploratory cluster analysis with self-organizing maps and analytic provenance,” IEEE Conference on Visual Analytics Science and Technology, 2017.
  37. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 921–930, 2017.
  38. V. Bruder, S. Frey, and T. Ertl, “Prediction-based load balancing and resolution tuning for interactive volume raycasting,” Visual Informatics, vol. 1, no. 2, pp. 106–117, 2017.
  39. K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural correlates of decision making on whole body yaw rotation: an fNIRS study,” Neuroscience Letters, 2017.
  40. C. Schätzle, M. Hund, F. L. Dennig, M. Butt, and D. A. Keim, “HistoBankVis: Detecting Language Change via Data Visualization,” in Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language (NEALT Proceedings Series 32), 2017.
  41. D. Maurer, M. Stoll, S. Volz, P. Gairing, and A. Bruhn, “A comparison of isotropic and anisotropic second order regularisers for optical flow,” in Proceedings of International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), Berlin, 2017, vol. LNCS 10302, pp. 537–549.
  42. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive and illumination-aware variational optical flow refinement,” in British Machine Vision Conference (BMVC), 2017.
  43. C. Schulz, A. Nocaj, J. Goertler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, 2017.
  44. M. Behrisch et al., “Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 31–40, 2017.
  45. V. Schwind, P. Knierim, L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’17), Amsterdam, Netherlands, 2017, p. 6.
  46. K. Srulijes et al., “Visualization of eye-head coordination while walking in healthy subjects and patients with neurodegenerative diseases,” 2017.
  47. H. Ben Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  48. M. van Garderen, B. Pampel, A. Nocaj, and U. Brandes, “Minimum-displacement overlap removal for geo-referenced data visualization,” Computer Graphics Forum (CGF), vol. 36, no. 3, pp. 423–433, 2017.
  49. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” in EuroVis 2017 - Posters, 2017.
  50. L. Merino et al., “On the Impact of the Medium in the Effectiveness of 3D Software Visualizations,” in VISSOFT’17: Proceedings of the 5th IEEE Working Conference on Software Visualization, 2017.
  51. M. Stoll, S. Volz, D. Maurer, and A. Bruhn, “A time-efficient optimisation framework for parameters of optical flow methods,” in Scandinavian Conference on Image Analysis (SCIA), Berlin, 2017, vol. LNCS 10269, pp. 41–53.
  52. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive regularisation for variational optical flow: global, local and in between,” in Proceedings of International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), Berlin, 2017, vol. LNCS 10302, pp. 550–562.
  53. J. Zagermann, U. Pfeil, C. Acevedo, and H. Reiterer, “Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-Device Workspace,” in Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia (MUM), 2017.
  54. J. Zagermann, U. Pfeil, D. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-Based Input Modalities on Spatial Memory,” 2017.
  55. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of NPAR’17, 2017.
  56. M. Herschel, R. Diestelkämper, and H. Ben Lahmar, “A survey on provenance - What for? What form? What from?,” International Journal on Very Large Data Bases (VLDB Journal), vol. 26, no. 6, pp. 881–906, 2017.
  57. M. A. Baazizi, H. Ben Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Conference on Extending Database Technology (EDBT), 2017, pp. 222–233.
  58. S. Frey and T. Ertl, “Fast flow-based quantification and interpolation for high-resolution density distribution,” in Proceedings of EuroGraphics Short Papers, 2017.
  59. G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” Eurographics Symposium on Parallel Graphics and Visualization, 2017.
  60. J. Karolus, P. W. Woźniak, L. L. Chuang, and A. Schmidt, “Robust Gaze Features for Enabling Language Proficiency Awareness,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), New York, NY, USA, 2017, pp. 2998–3010.
  61. M. Stein et al., “Bring it to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis,” IEEE Transactions on Visualization and Computer Graphics (TVCG), 2017.
  62. J. Iseringhausen et al., “4D Imaging through Spray-On Optics,” ACM Transactions on Graphics (SIGGRAPH 2017), vol. 36, no. 4, pp. 35:1–35:11, 2017.
  63. J. Kratt, F. Eisenkeil, M. Spicker, Y. Wang, D. Weiskopf, and O. Deussen, “Structure-aware Stylization of Mountainous Terrains,” in Vision, Modeling & Visualization, 2017.
  64. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  65. U. Gadiraju et al., “Crowdsourcing versus the laboratory: Towards human-centered experiments using the crowd,” in Evaluation in the Crowd: Crowdsourcing and Human-Centered Experiments, Lecture Notes in Computer Science, vol. 10264, D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 7–30.
  66. D. Jäckle, M. Hund, M. Behrisch, D. A. Keim, and T. Schreck, “Pattern Trails: Visual Analysis of Pattern Transitions in Subspaces,” in IEEE Conference on Visual Analytics Science and Technology (VAST), 2017.
  67. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “Visual Data Cleansing of Low-Level Eye Tracking Data,” in Eye Tracking and Visualization: Foundations, Techniques, and Applications. ETVIS 2015, M. Burch, L. Chuang, B. Fisher, A. Schmidt, and D. Weiskopf, Eds. Springer International Publishing, 2017.
  68. M. Hund et al., “Visual analytics for concept exploration in subspaces of patient groups,” Brain Informatics, vol. 3, no. 4, pp. 233–247, 2016.
  69. C. Schätzle and D. Sacha, “Visualizing Language Change: Dative Subjects in Icelandic,” in Proceedings of the Language Resources and Evaluation Conference 2016 (Workshop “VisLRII: Visualization as Added Value in the Development, Use and Evaluation of Language Resources”), 2016, pp. 8–15.
  70. S. Frey and T. Ertl, “Auto-tuning intermediate representations for in situ visualization,” in Scientific Data Summit (NYSDS), 2016, pp. 1–10.
  71. J. Karolus, P. W. Woźniak, and L. L. Chuang, “Towards Using Gaze Properties to Detect Language Proficiency,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI ’16), New York, NY, USA, 2016, no. 118, p. 6.
  72. I. Zingman, D. Saupe, O. Penatti, and K. Lambers, “Detection of Fragmented Rectangular Enclosures in Very High Resolution Remote Sensing Images,” 2016.
  73. K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in Proceedings of the Symposium on Eye Tracking Research & Applications, 2016, vol. 1, pp. 11–18.
  74. D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li, “Crowd workers proven useful: A comparative study of subjective video quality assessment,” in 8th International Conference on Quality of Multimedia Experience (QoMEX 2016), Lisbon, Portugal, 2016.
  75. D. Weiskopf, M. Burch, L. L. Chuang, B. Fisher, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016.
  76. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “Visual Analysis and Coding of Data-Rich User Behavior,” 2016.
  77. J. Zagermann, U. Pfeil, R. Rädle, H.-C. Jetter, C. Klokmose, and H. Reiterer, “When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets,” 2016.
  78. J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring Cognitive Load using Eye Tracking Technology in Visual Computing,” in Proceedings of the Sixth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV 2016), 2016, pp. 78–85.
  79. L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze, “MAGIC-Pointing on Large High-Resolution Displays,” in Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16), 2016, pp. 1706–1712.
  80. J. Müller, R. Rädle, and H. Reiterer, Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load. ACM, 2016.
  81. A. Nocaj, M. Ortmann, and U. Brandes, “Adaptive Disentanglement based on Local Clustering in Small-World Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 6, pp. 1662–1671, 2016.
  82. T. Dingler, R. Rzayev, V. Schwind, and N. Henze, “RSVP on the go - Implicit Reading Support on Smart Watches Through Eye Tracking,” in Proceedings of the 2016 ACM International Symposium on Wearable Computers (ISWC ’16), New York, New York, USA, 2016, pp. 116–119.
  83. P. Tutzauer, S. Becker, T. Niese, O. Deussen, and D. Fritsch, “Understanding Human Perception of Building Categories in Virtual 3D Cities - a User Study,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B2, pp. 683–687, 2016.
  84. V. Hosu, F. Hahn, O. Wiedemann, S.-H. Jung, and D. Saupe, “Saliency-driven image coding improves overall perceived JPEG quality,” in Picture Coding Symposium (PCS), 2016.
  85. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Steering Demands Diminish the Early-P3, Late-P3 and RON Components of the Event-Related Potential of Task-Irrelevant Environmental Sounds,” vol. 10, no. 73, 2016.
  86. M. Herschel and M. Hlawatsch, “Provenance: On and Behind the Screens,” in ACM International Conference on the Management of Data (SIGMOD), 2016, pp. 2213–2217.
  87. P. Tutzauer, S. Becker, D. Fritsch, T. Niese, and O. Deussen, “A Study of the Human Comprehension of Building Categories Based on Different 3D Building Representations,” Photogrammetrie - Fernerkundung - Geoinformation, vol. 2016, no. 5–6, pp. 319–333, 2016.
  88. M. Hund et al., “Visual Quality Assessment of Subspace Clusterings,” in KDD 2016 Interactive Data Exploration and Analytics (IDEA), 2016.
  89. V. Hosu, F. Hahn, I. Zingman, and D. Saupe, “Reported Attention as a Promising Alternative to Gaze in IQA Tasks,” 5th International Workshop on Perceptual Quality of Systems 2016 (PQS 2016), Berlin, 2016.
  90. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum (CGF), vol. 36, no. 8, pp. 153–165, 2016.
  91. V. Schwind and S. Jäger, “The Uncanny Valley and the Importance of Eye Contact,” i-com, vol. 15, no. 1, pp. 93–104, 2016.
  92. O. Johannsen, A. Sulc, N. Marniok, and B. Goldluecke, “Layered scene reconstruction from multiple light field camera views,” 2016.
  93. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2016.
  94. N. Flad, J. Ditz, H. H. Bülthoff, and L. L. Chuang, “Data-driven approaches to unrestricted gaze-tracking benefit from saccade filtering,” Second Workshop on Eye Tracking and Visualization, IEEE Visualization 2016, 2016.
  95. L. L. Chuang and H. H. Bülthoff, “Towards a Better Understanding of Gaze Behavior in the Automobile,” in Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interactions, in conjunction with AutomotiveUI 2015, 2015.
  96. S. Frey, F. Sadlo, and T. Ertl, “Balanced sampling and compression for remote visualization,” in SIGGRAPH Asia 2015 Visualization in High Performance Computing, 2015, pp. 1:1–1:4.
  97. N. Flad, T. Fomina, H. H. Bülthoff, and L. L. Chuang, “Unsupervised clustering of EOG as a viable substitute for optical eye-tracking,” in First Workshop on Eye Tracking and Visualization at IEEE Visualization, 2015.
  98. M. Hund et al., “Subspace Nearest Neighbor Search - Problem Statement, Approaches, and Discussion,” in Similarity Search and Applications, Lecture Notes in Computer Science, vol. 9371, G. Amato, R. Connor, F. Falchi, and C. Gennaro, Eds. Springer International Publishing, 2015, pp. 307–313.
  99. M. Spicker, J. Kratt, D. Arellano, and O. Deussen, Depth-Aware Coherent Line Drawings. ACM, 2015.
  100. K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,” Information Visualization, vol. 15, no. 4, pp. 340–358, 2015.