A02 | Quantifying Visual Computing Systems

Prof. Thomas Ertl, Universität Stuttgart

Prof. Melanie Herschel, Universität Stuttgart

Valentin Bruder, Universität Stuttgart

Interactive visual computing requires high frame rates and low latency, but these performance aspects are hard to predict for complex algorithms running on highly parallel, heterogeneous hardware. The long-term goal of this project is to develop a general methodology and a flexible framework for quantifying the application performance of visual computing systems for a given configuration of data set, visual computing technique, system architecture, and display condition. In the first funding period, we combine such performance models with measurement data to investigate quantifiable volume rendering and visual analytics applications under a wide variety of conditions.
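As a minimal illustration of this idea (all numbers, parameter names, and the cost model below are hypothetical, not project results), a performance model in its simplest form can be a regression fitted to timing measurements and then queried for configurations that were never measured:

```python
# Hypothetical sketch: predict frame time from a rendering parameter
# by fitting ordinary least squares to measured samples (pure Python).

def fit_linear(xs, ys):
    """Fit y = a*x + b by least squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Synthetic "measurements": frame time (ms) grows with the
# number of raycasting steps per ray.
steps = [64, 128, 256, 512, 1024]
frame_ms = [2.1, 3.9, 7.8, 15.5, 31.2]

a, b = fit_linear(steps, frame_ms)

# Query the model for an unmeasured configuration.
predicted = a * 768 + b
print(f"predicted frame time at 768 steps: {predicted:.1f} ms")
```

In practice, the project's models must account for many more parameters (data set, technique, hardware, display condition) and for nonlinear interactions between them, but the principle of calibrating a predictive model against measurement data is the same.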

Research Questions

How can we extend performance models from computer architecture to deal with heterogeneous visual computing architectures and interactive applications?

How can we find an adequate level of abstraction and identify the relevant parameters for making quantitative performance predictions?

Can our framework support an application in adapting to variable loads and conditions in real time?

Can we extend the model to the adaptive algorithms and perceptual metrics investigated in other research projects?

How can our models be used to give guarantees for minimal frame rates or maximal interaction latencies for specific data sets?

How can we deal with uncertainty in both the measurements and the predicted outcomes?
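The question of real-time adaptation can be made concrete with a small sketch (the controller, gain value, and toy cost model are assumptions for illustration only): a feedback loop adjusts a quality parameter so that the measured frame time tracks a given budget.

```python
# Hypothetical sketch: adapt the raycasting step size at run time
# so the frame time converges toward a target budget.

def adapt_step_size(step_size, frame_ms, budget_ms, gain=0.1,
                    lo=0.25, hi=4.0):
    """Scale the step size toward the frame-time budget.

    Larger steps mean fewer samples per ray and thus faster frames,
    so overshooting the budget increases the step size and vice versa.
    """
    error = (frame_ms - budget_ms) / budget_ms
    step_size *= 1.0 + gain * error
    return max(lo, min(hi, step_size))

# Simulated render loop: toy cost model where frame time is
# inversely proportional to the step size.
step, budget = 1.0, 16.7  # 16.7 ms budget ~ 60 fps
for _ in range(50):
    frame_ms = 25.0 / step
    step = adapt_step_size(step, frame_ms, budget)

print(f"converged step size: {step:.2f}, frame time: {25.0/step:.1f} ms")
```

A predictive performance model would replace the reactive feedback here: instead of correcting after a slow frame, the application could choose parameters before rendering so that the budget is met in the first place.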

Figure: Overview showing the inputs and results developed in this project.

Figure: Compression quality estimate yielding target transfer times.

Figure: Regression-based error estimation of undersampled volume blocks.

Publications

  1. S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” Informatics, vol. 4, no. 3, p. 27, 2017.
  2. S. Frey and T. Ertl, “Auto-tuning intermediate representations for in situ visualization,” in 2016 New York Scientific Data Summit (NYSDS), 2016, pp. 1–10.
  3. S. Frey, F. Sadlo, and T. Ertl, “Balanced sampling and compression for remote visualization,” in SIGGRAPH Asia 2015 Visualization in High Performance Computing, 2015, pp. 1:1–1:4.
  4. S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 921–930, 2017.
  5. C. Schulz, A. Nocaj, M. El-Assady, S. Frey, M. Hlawatsch, M. Hund, G. K. Karch, R. Netzel, C. Schätzle, M. Butt, D. A. Keim, T. Ertl, U. Brandes, and D. Weiskopf, “Generative Data Models for Validation and Evaluation of Visualization Techniques,” in BELIV Workshop 2016, 2016, pp. 112–124.
  6. M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” in EuroVis 2017 – Posters, 2017.
  7. G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” in Eurographics Symposium on Parallel Graphics and Visualization, 2017.
  8. S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum, 2016.
  9. V. Bruder, S. Frey, and T. Ertl, “Real-Time Performance Prediction and Tuning for Interactive Volume Raycasting,” in SIGGRAPH Asia 2016 Symposium on Visualization, New York, NY, USA, 2016, pp. 7:1–7:8.