1. D. Sacha, M. Kraus, J. Bernard, M. Behrisch, T. Schreck, Y. Asano, and D. A. Keim, "SOMFlow: Guided exploratory cluster analysis with self-organizing maps and analytic provenance," 2017.
  2. K. de Winkel, A. Nesti, H. Ayaz, and H. Bülthoff, “Neural correlates of decision making on whole body yaw rotation: an fNIRS study,” Neuroscience Letters, 2017.
  3. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of NPAR’17, 2017.
  4. P. Tutzauer and N. Haala, “Processing of crawled urban imagery for building use classification,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci, XLII-1/W1, pp. 143–149, 2017.
  5. V. Hosu, F. Hahn, M. Jenadeleh, H. Lin, H. Men, T. Szirányi, S. Li, and D. Saupe, “The Konstanz natural video database (KoNViD-1k),” in 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
  6. M. A. Baazizi, H. B. Lahmar, D. Colazzo, G. Ghelli, and C. Sartiani, “Schema Inference for Massive JSON Datasets,” in Conference on Extending Database Technology (EDBT), 2017, pp. 222–233.
  7. H. B. Lahmar and M. Herschel, “Provenance-based Recommendations for Visual Data Exploration,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  8. R. Diestelkämper, M. Herschel, and P. Jadhav, “Provenance in DISC Systems: Reducing Space Overhead at Runtime,” in International Workshop on Theory and Practice of Provenance (TAPP), 2017.
  9. C. Schätzle, M. Hund, F. L. Dennig, M. Butt, and D. A. Keim, “HistoBankVis: Detecting Language Change via Data Visualization,” in Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language (NEALT Proceedings Series 32), 2017.
  10. J. Zagermann, U. Pfeil, D. Fink, P. von Bauer, and H. Reiterer, “Memory in Motion: The Influence of Gesture- and Touch-Based Input Modalities on Spatial Memory,” 2017.
  11. D. Jäckle, F. Stoffel, S. Mittelstädt, D. A. Keim, and H. Reiterer, “Interpretation of Dimensionally-Reduced Crime Data: A Study with Untrained Domain Experts,” 2017, pp. 164–175.
  12. A. Nesti, K. de Winkel, and H. Bülthoff, “Accumulation of inertial sensory information in the perception of whole body yaw rotation,” PLOS ONE, 2017.
  13. C. Schulz, A. Nocaj, J. Goertler, O. Deussen, U. Brandes, and D. Weiskopf, “Probabilistic Graph Layout for Uncertain Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, 2017.
  14. M. Krone, F. Frieß, K. Scharnowski, G. Reina, S. Fademrecht, T. Kulschewski, J. Pleiss, and T. Ertl, “Molecular Surface Maps,” IEEE Transactions on Visualization and Computer Graphics (Proceedings of Scientific Visualization 2016), vol. 23, no. 1, 2017.
  15. V. Hosu, F. Hahn, O. Wiedemann, S.-H. Jung, and D. Saupe, “Saliency-driven image coding improves overall perceived JPEG quality,” in Picture Coding Symposium (PCS), 2016.
  16. M. Herschel and M. Hlawatsch, “Provenance: On and Behind the Screens,” in ACM International Conference on the Management of Data (SIGMOD), 2016, pp. 2213–2217.
  17. P. Tutzauer, S. Becker, D. Fritsch, T. Niese, and O. Deussen, “A Study of the Human Comprehension of Building Categories Based on Different 3D Building Representations,” Photogrammetrie - Fernerkundung - Geoinformation, vol. 2016, no. 5–6, pp. 319–333, 2016.
  18. L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt, “Screen arrangements and interaction areas for large display work places,” in Proceedings of the 5th ACM International Symposium on Pervasive Displays (PerDis 2016), 2016, pp. 228–234.
  19. J. Zagermann, U. Pfeil, and H. Reiterer, “Measuring Cognitive Load using Eye Tracking Technology in Visual Computing,” in Proceedings of the Sixth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV 2016), 2016, pp. 78–85.
  20. J. Hildenbrand, A. Nocaj, and U. Brandes, “Flexible Level-of-Detail Rendering for Large Graphs,” in Graph Drawing and Network Visualization: 24th International Symposium (GD 2016), LNCS 9801, 2016.
  21. A. Hautli-Janisz and V. Lyding, Eds., VisLR II: Visualization as Added Value in the Development, Use and Evaluation of Language Resources, 2016.
  22. M. Hund, D. Böhm, W. Sturm, M. Sedlmair, T. Schreck, T. Ullrich, D. A. Keim, L. Majnaric, and A. Holzinger, “Visual analytics for concept exploration in subspaces of patient groups,” Brain Informatics, vol. 3, no. 4, pp. 233–247, 2016.
  23. M. Behrisch, B. Bach, M. Hund, L. von Rüden, M. Delz, J.-D. Fekete, and T. Schreck, “Magnostics: Image-based Search of Interesting Matrix Views for Guided Network Exploration,” 2016, vol. 23, no. 1, p. 99.
  24. O. Johannsen, A. Sulc, N. Marniok, and B. Goldluecke, “Layered scene reconstruction from multiple light field camera views,” 2016.
  25. I. Zingman, D. Saupe, O. Penatti, and K. Lambers, “Detection of Fragmented Rectangular Enclosures in Very High Resolution Remote Sensing Images,” 2016.
  26. D. Weiskopf, M. Burch, L. L. Chuang, B. Fischer, and A. Schmidt, Eye Tracking and Visualization: Foundations, Techniques, and Applications. Berlin, Heidelberg: Springer, 2016.
  27. N. Flad, J. Ditz, H. H. Bülthoff, and L. L. Chuang, “Data-driven approaches to unrestricted gaze-tracking benefit from saccade filtering,” Second Workshop on Eye Tracking and Visualization, IEEE Visualization 2016, 2016.
  28. A. Nocaj, M. Ortmann, and U. Brandes, “Adaptive Disentanglement based on Local Clustering in Small-World Network Visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 6, pp. 1662–1671, 2016.
  29. D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li, “Crowd workers proven useful: A comparative study of subjective video quality assessment,” 8th International Conference on Quality of Multimedia Experience (QoMEX 2016), Lisbon, Portugal, 2016.
  30. V. Hosu, F. Hahn, I. Zingman, and D. Saupe, “Reported Attention as a Promising Alternative to Gaze in IQA Tasks,” 5th International Workshop on Perceptual Quality of Systems 2016 (PQS 2016), Berlin, 2016.
  31. P. Tutzauer, S. Becker, T. Niese, O. Deussen, and D. Fritsch, “Understanding Human Perception of Building Categories in Virtual 3D Cities – A User Study,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI–B2, pp. 683–687, 2016.
  32. K. Kurzhals, M. Hlawatsch, M. Burch, and D. Weiskopf, “Fixation-Image Charts,” in ETRA, 2016, no. 1, pp. 11–18.
  33. C. Schulz, A. Nocaj, M. El-Assady, S. Frey, M. Hlawatsch, M. Hund, G. K. Karch, R. Netzel, C. Schätzle, M. Butt, D. Keim, T. Ertl, U. Brandes, and D. Weiskopf, “To appear: Generative Data Models for Validation and Evaluation of Visualization Techniques,” 2016.
  34. R. Netzel, M. Burch, and D. Weiskopf, “Interactive Scanpath-Oriented Annotation of Fixations,” 2016, pp. 183–187.
  35. M. Burch, R. Woods, R. Netzel, and D. Weiskopf, “The Challenges of Designing Metro Maps,” 2016, pp. 195–202.
  36. T. Blascheck, F. Beck, S. Baltes, T. Ertl, and D. Weiskopf, “To appear: Visual Analysis and Coding of Data-Rich User Behavior,” 2016.
  37. C. Schulz, M. Burch, F. Beck, and D. Weiskopf, “To appear: Visual Data Cleansing of Low-Level Eye Tracking Data,” in Extended Papers of ETVIS 2015, 2016.
  38. M. Hund, I. Färber, M. Behrisch, A. Tatu, T. Schreck, D. A. Keim, and T. Seidl, “Visual Quality Assessment of Subspace Clusterings,” 2016.
  39. J. Müller, R. Rädle, and H. Reiterer, “Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load,” ACM, 2016.
  40. J. Zagermann, U. Pfeil, R. Rädle, H.-C. Jetter, C. Klokmose, and H. Reiterer, “When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets,” 2016.
  41. M. Scheer, H. H. Bülthoff, and L. L. Chuang, “Steering Demands Diminish the Early-P3, Late-P3 and RON Components of the Event-Related Potential of Task-Irrelevant Environmental Sounds,” 2016, vol. 10, no. 73.
  42. L. L. Chuang and H. H. Bülthoff, “Towards a Better Understanding of Gaze Behavior in the Automobile,” in Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interactions, in conjunction with AutomotiveUI 2015, 2015.
  43. N. Flad, T. Fomina, H. H. Bülthoff, and L. L. Chuang, “In press: Unsupervised clustering of EOG as a viable substitute for optical eye-tracking,” First Workshop on Eye Tracking and Visualization at IEEE Visualization, 2015.
  44. L. L. Chuang, “Error visualization and information-seeking behavior for air-vehicle control,” Foundations of Augmented Cognition, Lecture Notes in Artificial Intelligence, vol. 9183, pp. 3–11, 2015.
  45. C. Schulz, M. Burch, and D. Weiskopf, “Visual Data Cleansing of Eye Tracking Data,” 2015.
  46. K. Kurzhals, B. Fisher, M. Burch, and D. Weiskopf, “Eye Tracking Evaluation of Visual Analytics,” 2015.
  47. M. Spicker, J. Kratt, D. Arellano, and O. Deussen, “Depth-Aware Coherent Line Drawings,” ACM, 2015.
  48. M. Hund, M. Behrisch, I. Färber, M. Sedlmair, T. Schreck, T. Seidl, and D. A. Keim, “Subspace Nearest Neighbor Search - Problem Statement, Approaches, and Discussion,” 2015.
  49. K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-Based Visualization,” Computing in Science and Engineering, vol. 17, no. 5, pp. 64–71, 2015.
  50. K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf, “Gaze Stripes: Image-Based Visualization of Eye Tracking Data,” IEEE Xplore Digital Library, 2015.