V. Bruder, M. Larsen, T. Ertl, H. Childs, and S. Frey, “A Hybrid In Situ Approach for Cost Efficient Image Database Generation,” IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2022, doi: 10.1109/TVCG.2022.3169590.
Abstract
The visualization of results while the simulation is running is increasingly common in extreme scale computing environments. We present a novel approach for in situ generation of image databases to achieve cost savings on supercomputers. Our approach, a hybrid between traditional inline and in transit techniques, dynamically distributes visualization tasks between simulation nodes and visualization nodes, using probing as a basis to estimate rendering cost. Our hybrid design differs from previous works in that it creates opportunities to minimize idle time from four fundamental types of inefficiency: variability, limited scalability, overhead, and rightsizing. We demonstrate our results by comparing our method against both inline and in transit methods for a variety of configurations, including two simulation codes and a scaling study that goes above 19K cores. Our findings show that our approach is superior in many configurations. As in situ visualization becomes increasingly ubiquitous, we believe our technique could lead to significant amounts of reclaimed cycles on supercomputers.
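As a rough illustration of the probing idea described in the abstract, the following Python sketch estimates per-frame rendering cost from a cheap low-resolution probe render and decides between inline and in transit placement. It is a minimal sketch under assumed names (`render_fn`, `inline_budget`) and a simple pixel-count extrapolation, not the paper's implementation.

```python
# Hypothetical sketch: time a down-scaled "probe" render on a simulation node,
# extrapolate the full-frame cost, and pick where the visualization task runs.
import time

def probe_render_cost(render_fn, probe_scale=0.1):
    """render_fn(scale) is assumed to render the current view at the given
    resolution scale; cost is extrapolated assuming it grows with pixel count."""
    start = time.perf_counter()
    render_fn(probe_scale)
    probe_time = time.perf_counter() - start
    return probe_time / (probe_scale ** 2)

def assign_task(estimated_cost, inline_budget):
    """Render inline if the estimate fits the per-step budget on the simulation
    node, otherwise hand the data to dedicated visualization nodes."""
    return "inline" if estimated_cost <= inline_budget else "in_transit"
```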
H. Tarner, V. Bruder, T. Ertl, S. Frey, and F. Beck, “Visually Comparing Rendering Performance from Multiple Perspectives,” in Vision, Modeling, and Visualization, 2022. doi: 10.2312/vmv.20221211.
Abstract
Evaluation of rendering performance is crucial when selecting or developing algorithms, but challenging as performance can largely differ across a set of selected scenarios. Despite this, performance metrics are often reported and compared in a highly aggregated way. In this paper we suggest a more fine-grained approach for the evaluation of rendering performance, taking into account multiple perspectives on the scenario: camera position and orientation along different paths, rendering algorithms, image resolution, and hardware. The approach comprises a visual analysis system that shows and contrasts the data from these perspectives. The users can explore combinations of perspectives and gain insight into the performance characteristics of several rendering algorithms. A stylized representation of the camera path provides a base layout for arranging the multivariate performance data as radar charts, each comparing the same set of rendering algorithms while linking the performance data with the rendered images. To showcase our approach, we analyze two types of scientific visualization benchmarks.
C. Müller, M. Heinemann, D. Weiskopf, and T. Ertl, “Power Overwhelming: Quantifying the Energy Cost of Visualisation,” in Proceedings of the 2022 IEEE Workshop on Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), Oct. 2022, pp. 38–46. doi: 10.1109/BELIV57783.2022.00009.
Abstract
GPUs are the power-hungry tool of many visualisation researchers. However, their energy consumption has mostly been investigated outside the visualisation community, albeit our algorithms can generate more complex workloads than compute kernels. Additionally, a rising number of web-based visualisations potentially makes consumers other than the GPU more relevant. We present measurement setups for quantifying the energy cost of visualisation, ranging from software sensors over external power meters and microcontroller-based setups to using oscilloscopes. These setups cover the energy consumption of GPUs, CPUs and other components of a computing system. Using raycasting of spherical glyphs, volume rendering and D3 visualisations as examples, we show that there are viable options for evaluating most kinds of visualisations. We conclude by stating the challenges to a broader application of these techniques and by making recommendations on how to overcome them.
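To make the "software sensor" end of this measurement spectrum concrete, here is a minimal Python sketch that samples GPU board power through NVIDIA's NVML via the pynvml package. It only illustrates the software-sensor case; external power meters, microcontroller setups, and oscilloscopes are outside its scope, and sampling rate and device index are assumptions.

```python
# Sample GPU board power while a rendering workload runs elsewhere.
import time
import pynvml

def sample_gpu_power(duration_s=5.0, interval_s=0.05, device_index=0):
    """Return a list of (timestamp, watts) samples for one GPU."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        samples = []
        t_end = time.time() + duration_s
        while time.time() < t_end:
            milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)  # board power in mW
            samples.append((time.time(), milliwatts / 1000.0))
            time.sleep(interval_s)
        return samples
    finally:
        pynvml.nvmlShutdown()

# Energy in joules can then be approximated by integrating power over time:
# E ≈ sum(p * interval_s for _, p in samples)
```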
S. Frey et al., “Parameter Adaptation In Situ: Design Impacts and Trade-Offs,” in In Situ Visualization for Computational Science, Cham, 2022, pp. 159–182. doi: 10.1007/978-3-030-81627-8_8.
Abstract
This chapter presents a study of parameter adaptation in situ, exploring the resulting trade-offs in rendering quality and workload distribution. Four different use cases are analyzed with respect to configuration changes. First, the performance impact of load balancing and resource allocation variants on both simulation and visualization is investigated using the MegaMol framework. Its loose coupling scheme and architecture enable minimally invasive in situ operation without impacting the stability of the simulation with (potentially) experimental visualization code. Second, Volumetric Depth Images (VDIs) are considered: a compact, view-dependent intermediate representation that can efficiently be generated and used for post hoc exploration. A study of their inherent trade-offs regarding size, quality, and generation time provides the basis for parameter optimization. Third, streaming for remote visualization allows a user to monitor the progress of a simulation and to steer visualization parameters. Compression settings are adapted dynamically based on predictions via convolutional neural networks across different parts of images to achieve high frame rates for high-resolution displays like powerwalls. Fourth, different performance prediction models for volume rendering address offline scenarios (like hardware acquisition planning) as well as dynamic adaptation of parameters and load balancing. Finally, the chapter concludes by summarizing overarching approaches and challenges, discussing the potential role that adaptive approaches can play in increasing the efficiency of in situ visualization.
K. Schatz et al., “2019 IEEE Scientific Visualization Contest Winner: Visual Analysis of Structure Formation in Cosmic Evolution,” IEEE Computer Graphics and Applications, vol. 41, no. 6, Art. no. 6, 2021, doi: 10.1109/MCG.2020.3004613.
Abstract
Simulations of cosmic evolution are a means to explain the formation of the universe as we see it today. The resulting data of such simulations comprise numerous physical quantities, which turns their analysis into a complex task. Here, we analyze such high-dimensional and time-varying particle data using various visualization techniques from the fields of particle visualization, flow visualization, volume visualization, and information visualization. Our approach employs specialized filters to extract and highlight the development of so-called active galactic nuclei and filament structures formed by the particles. Additionally, we calculate X-ray emission of the evolving structures in a preprocessing step to complement visual analysis. Our approach is integrated into a single visual analytics framework to allow for analysis of star formation at interactive frame rates. Finally, we lay out the methodological aspects of our work that led to success at the 2019 IEEE SciVis Contest.
F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030445.
Abstract
Collaborative exploration of scientific data sets across large high-resolution displays requires both high visual detail as well as low-latency transfer of image data (oftentimes inducing the need to trade one for the other). In this work, we present a system that dynamically adapts the encoding quality in such systems in a way that reduces the required bandwidth without impacting the details perceived by one or more observers. Humans perceive sharp, colourful details in the small foveal region around the centre of the field of view, while information in the periphery is perceived blurred and colourless. We account for this by tracking the gaze of observers, and respectively adapting the quality parameter of each macroblock used by the H.264 encoder, considering the so-called visual acuity fall-off. This allows us to substantially reduce the required bandwidth with barely noticeable changes in visual quality, which is crucial for collaborative analysis across display walls at different locations. We demonstrate the reduced overall required bandwidth and the high quality inside the foveated regions using particle rendering and parallel coordinates.
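The per-macroblock quality adaptation can be pictured as a mapping from the distance between a macroblock and the tracked gaze point to an H.264 quantisation parameter (QP). The sketch below is illustrative only: the fall-off shape and all numeric constants are assumptions, not the calibration used in the paper.

```python
# Map each 16x16 macroblock's distance to the gaze point to a QP value:
# low QP (high quality) near the fovea, high QP (coarse quality) in the periphery.
import math

def qp_for_macroblock(mb_x, mb_y, gaze_px, mb_size=16,
                      qp_fovea=18, qp_periphery=40, falloff_px=600.0):
    """Return a QP value for the macroblock at grid position (mb_x, mb_y)."""
    cx = mb_x * mb_size + mb_size / 2
    cy = mb_y * mb_size + mb_size / 2
    dist = math.hypot(cx - gaze_px[0], cy - gaze_px[1])
    # Linear acuity fall-off that saturates at the peripheral quality level.
    t = min(dist / falloff_px, 1.0)
    return round(qp_fovea + t * (qp_periphery - qp_fovea))

# Example: QP map for a 1920x1080 frame with the gaze at the screen centre.
qp_map = [[qp_for_macroblock(x, y, gaze_px=(960, 540))
           for x in range(1920 // 16)] for y in range(1080 // 16)]
```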
V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, pp. 2848–2862, Sep. 2020, doi: 10.1109/TVCG.2019.2898435.
Abstract
As our field matures, evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters like different data sets, camera paths, viewport sizes, and GPUs is investigated, which makes comparison with other techniques or generalization to other parameter ranges at least questionable. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports on our insights from statistical analysis of this data, discussing independent and linear parameter behavior and non-obvious effects. We give recommendations for best practices when evaluating runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification.
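The kind of systematic sweep argued for here can be sketched as a small benchmark harness that iterates over the Cartesian product of the parameter dimensions and records repeated timings per configuration instead of a single aggregate. The harness below is a generic sketch; `render(cfg)` and the parameter values are placeholders, not the framework from the paper.

```python
# Sweep data set, viewport size, and camera position along a path, timing each
# configuration several times and keeping the median frame time.
import itertools, statistics, time

datasets = ["engine", "chameleon"]
viewports = [(1280, 720), (1920, 1080)]
camera_path = [i / 99 for i in range(100)]  # normalized positions along a path

def benchmark(render, repetitions=5):
    results = []
    for data, vp, cam in itertools.product(datasets, viewports, camera_path):
        times = []
        for _ in range(repetitions):
            t0 = time.perf_counter()
            render({"dataset": data, "viewport": vp, "camera_t": cam})
            times.append(time.perf_counter() - t0)
        results.append({"dataset": data, "viewport": vp, "camera_t": cam,
                        "median_s": statistics.median(times)})
    return results
```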
K. Schatz et al., “Visual Analysis of Structure Formation in Cosmic Evolution,” in Proceedings of the IEEE Scientific Visualization Conference (SciVis), 2019, pp. 33–41. doi: 10.1109/scivis47405.2019.8968855.
Abstract
The IEEE SciVis 2019 Contest targets the visual analysis of structure formation in the cosmic evolution of the universe from when the universe was five million years old up to now. In our submission, we analyze high-dimensional data to get an overview, then investigate the impact of Active Galactic Nuclei (AGNs) using various visualization techniques, for instance, an adapted filament filtering method for detailed analysis and particle flow in the vicinity of filaments. Based on feedback from domain scientists on these initial visualizations, we also analyzed X-ray emissions and star formation areas. The conversion of star-forming gas to stars and the resulting increasing molecular weight of the particles could be observed.
V. Bruder, C. Schulz, R. Bauer, S. Frey, D. Weiskopf, and T. Ertl, “Voronoi-Based Foveated Volume Rendering,” in Proceedings of the Eurographics Conference on Visualization - Short Papers (EuroVis), 2019, pp. 67–71. doi: 10.2312/evs.20191172.
Abstract
Foveal vision is located in the center of the field of view with a rich impression of detail and color, whereas peripheral vision occurs on the side with more fuzzy and colorless perception. This visual acuity fall-off can be used to achieve higher frame rates by adapting rendering quality to the human visual system. Volume raycasting has unique characteristics, preventing a direct transfer of many traditional foveated rendering techniques. We present an approach that utilizes the visual acuity fall-off to accelerate volume rendering based on Linde-Buzo-Gray sampling and natural neighbor interpolation. First, we measure gaze using a stationary 1200 Hz eye-tracking system. Then, we adapt our sampling and reconstruction strategy to that gaze. Finally, we apply a temporal smoothing filter to attenuate undersampling artifacts since peripheral vision is particularly sensitive to contrast changes and movement. Our approach substantially improves rendering performance with barely perceptible changes in visual quality. We demonstrate the usefulness of our approach through performance measurements on various data sets.
V. Bruder, K. Kurzhals, S. Frey, D. Weiskopf, and T. Ertl, “Space-Time Volume Visualization of Gaze and Stimulus,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), 2019, pp. 12:1–12:9. doi: 10.1145/3314111.3319812.
Abstract
We present a method for the spatio-temporal analysis of gaze data from multiple participants in the context of a video stimulus. For such data, an overview of the recorded patterns is important to identify common viewing behavior (such as attentional synchrony) and outliers. We adopt the approach of space-time cube visualization, which extends the spatial dimensions of the stimulus by time as the third dimension. Previous work mainly handled eye tracking data in the space-time cube as point cloud, providing no information about the stimulus context. This paper presents a novel visualization technique that combines gaze data, a dynamic stimulus, and optical flow with volume rendering to derive an overview of the data with contextual information. With specifically designed transfer functions, we emphasize different data aspects, making the visualization suitable for explorative analysis and for illustrative support of statistical findings alike.
C. Müller, M. Braun, and T. Ertl, “Optimised Molecular Graphics on the HoloLens,” in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019, pp. 97–102. doi: 10.1109/VR.2019.8798111.
Abstract
The advent of modern and affordable augmented reality headsets like Microsoft HoloLens has sparked new interest in using virtual and augmented reality technology in the analysis of molecular data. For all visualisation in immersive, mixed-reality scenarios, a sufficiently high rendering speed is an important factor, which raises the issue of limited processing power when fully untethered devices have to handle computationally expensive visualisations. Recent research shows that the space-filling model of even small data sets from the Protein Data Bank (PDB) cannot be rendered at desirable frame rates on the HoloLens. In this work, we report on how to improve the rendering speed of atom-based visualisation of proteins and how the rendering of more abstract representations of the molecules compares against it. We complement our findings with in-depth GPU and CPU performance numbers.
V. Bruder et al., “Volume-Based Large Dynamic Graph Analysis Supported by Evolution Provenance,” Multimedia Tools and Applications, vol. 78, no. 23, Art. no. 23, 2019, doi: 10.1007/s11042-019-07878-6.
Abstract
We present an approach for the visualization and interactive analysis of dynamic graphs that contain a large number of time steps. A specific focus is put on the support of analyzing temporal aspects in the data. Central to our approach is a static, volumetric representation of the dynamic graph based on the concept of space-time cubes that we create by stacking the adjacency matrices of all time steps. The use of GPU-accelerated volume rendering techniques allows us to render this representation interactively. We identified four classes of analytics methods as being important for the analysis of large and complex graph data, which we discuss in detail: data views, aggregation and filtering, comparison, and evolution provenance. Implementations of the respective methods are presented in an integrated application, enabling interactive exploration and analysis of large graphs. We demonstrate the applicability, usefulness, and scalability of our approach by presenting two examples for analyzing dynamic graphs. Furthermore, we let visualization experts evaluate our analytics approach.
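The core data structure, a space-time cube built by stacking one adjacency matrix per time step, is easy to sketch. The snippet below only shows this construction step with NumPy, as a minimal sketch; data loading and the GPU volume raycaster are omitted, and the normalisation is an assumption.

```python
# Stack adjacency matrices (one per time step) into a (t, n, n) volume that a
# volume renderer can treat like any other scalar field.
import numpy as np

def build_graph_volume(adjacency_matrices):
    """adjacency_matrices: iterable of (n x n) arrays, one per time step."""
    volume = np.stack([np.asarray(a, dtype=np.float32)
                       for a in adjacency_matrices], axis=0)
    # Normalise edge weights so a transfer function can map them to opacity.
    if volume.max() > 0:
        volume /= volume.max()
    return volume

# Example: three time steps of a random 100-node weighted graph.
rng = np.random.default_rng(0)
vol = build_graph_volume(rng.random((3, 100, 100)))
```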
H. Zhang, S. Frey, H. Steeb, D. Uribe, T. Ertl, and W. Wang, “Visualization of Bubble Formation in Porous Media,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, Art. no. 1, 2019, doi: 10.1109/TVCG.2018.2864506.
Abstract
We present a visualization approach for the analysis of CO2 bubble-induced attenuation in porous rock formations. As a basis for this, we introduce customized techniques to extract CO2 bubbles and their surrounding porous structure from X-ray computed tomography (XCT) measurements. To understand how the structure of porous media influences the occurrence and the shape of formed bubbles, we automatically classify and relate them in terms of morphology and geometric features, and further directly support searching for promising porous structures. To allow for the meaningful direct visual comparison of bubbles and their structures, we propose a customized registration technique considering the bubble shape as well as its points of contact with the porous media surface. With our quantitative extraction of geometric bubble features, we further support the analysis as well as the creation of a physical model. We demonstrate that our approach was successfully used to answer several research questions in the domain, and discuss its high practical relevance to identify critical seismic characteristics of fluid-saturated rock that govern its capability to store CO2.
S. Frey, “Spatio-Temporal Contours from Deep Volume Raycasting,” Computer Graphics Forum, vol. 37, no. 3, Art. no. 3, 2018, doi: 10.1111/cgf.13438.
Abstract
We visualize contours for spatio-temporal processes to indicate where and when non-continuous changes occur or spatial bounds are encountered. All time steps are comprised densely in one visualization, with contours allowing to efficiently analyze processes in the data even in case of spatial or temporal overlap. Contours are determined on the basis of deep raycasting that collects samples across time and depth along each ray. For each sample along a ray, its closest neighbors from adjacent rays are identified, considering time, depth, and value in the process. Large distances are represented as contours in image space, using color to indicate temporal occurrence. This contour representation can easily be combined with volume rendering-based techniques, providing both full spatial detail for individual time steps and an outline of the whole time series in one view. Our view-dependent technique supports efficient progressive computation, and requires no prior assumptions regarding the shape or nature of processes in the data. We discuss and demonstrate the performance and utility of our approach via a variety of data sets, comparison and combination with an alternative technique, and feedback by a domain scientist.
F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), 2018, pp. 87–91. doi: 10.1109/LDAV.2018.8739215.
Abstract
We present an approach that dynamically adapts encoder settings for image tiles to yield the best possible quality for a given bandwidth. This reduces the overall size of the image while preserving details. Our application determines the encoding settings in two steps. In the first step, we predict the quality and size of the tiles for different encoding settings using a convolutional neural network. In the second step, we assign the optimal encoder setting to each tile, so that the overall size of the image is lower than a predetermined threshold. Commonly, for tiles that contain complicated structures, a high quality setting is used in order to prevent major information loss, while quality settings are lowered for others to keep the size below the threshold. We demonstrate that we can reduce the overall size of the image while preserving the details in areas of interest using the example of both particle and volume visualisation applications.
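The second step, assigning per-tile encoder settings so the total size stays under a threshold, can be sketched as a greedy budget allocation: start every tile at its best setting and repeatedly downgrade the tile where quality is lost most cheaply per byte saved. This is a hypothetical sketch of that step only; the per-setting size and quality predictions (produced by the CNN in the paper) are taken as given input here.

```python
def assign_settings(predictions, size_budget):
    """predictions: {tile_id: [(setting, predicted_size, predicted_quality), ...]}
    with candidates sorted from highest to lowest quality.
    Returns {tile_id: chosen_setting}."""
    choice = {t: 0 for t in predictions}            # best setting everywhere
    total = sum(predictions[t][0][1] for t in predictions)
    while total > size_budget:
        best_tile, best_ratio = None, None
        for t, cands in predictions.items():
            i = choice[t]
            if i + 1 >= len(cands):
                continue                             # already at lowest quality
            d_size = cands[i][1] - cands[i + 1][1]   # bytes saved by downgrading
            d_qual = cands[i][2] - cands[i + 1][2]   # quality lost
            ratio = d_qual / max(d_size, 1e-9)
            if d_size > 0 and (best_ratio is None or ratio < best_ratio):
                best_tile, best_ratio = t, ratio
        if best_tile is None:
            break                                    # no further downgrades possible
        i = choice[best_tile]
        total -= predictions[best_tile][i][1] - predictions[best_tile][i + 1][1]
        choice[best_tile] += 1
    return {t: predictions[t][i][0] for t, i in choice.items()}
```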
V. Bruder, M. Hlawatsch, S. Frey, M. Burch, D. Weiskopf, and T. Ertl, “Volume-Based Large Dynamic Graph Analytics,” in Proceedings of the International Conference Information Visualisation (IV), 2018, pp. 210–219. doi: 10.1109/iV.2018.00045.
Abstract
We present an approach for interactively analyzing large dynamic graphs consisting of several thousand time steps with a particular focus on temporal aspects. We employ a static representation of the time-varying graph based on the concept of space-time cubes, i.e., we create a volumetric representation of the graph by stacking the adjacency matrices of each of its time steps. To achieve an efficient analysis of complex data, we discuss three classes of analytics methods of particular importance in this context: data views, aggregation and filtering, and comparison. For these classes, we present a GPU-based implementation of respective analysis methods that enable the interactive analysis of large graphs. We demonstrate the utility as well as the scalability of our approach by presenting application examples for analyzing different time-varying data sets.
S. Frey, “Sampling and Estimation of Pairwise Similarity in Spatio-Temporal Data Based on Neural Networks,” Informatics, vol. 4, no. 3, p. 27, 2017, doi: 10.3390/informatics4030027.
Abstract
Increasingly fast computing systems for simulations and high-accuracy measurement techniques drive the generation of time-dependent volumetric data sets with high resolution in both time and space. To gain insights from this spatio-temporal data, the computation and direct visualization of pairwise distances between time steps not only supports interactive user exploration, but also drives automatic analysis techniques like the generation of a meaningful static overview visualization, the identification of rare events, or the visual analysis of recurrent processes. However, the computation of pairwise differences between all time steps is prohibitively expensive for large-scale data, not only due to the significant cost of computing expressive distances between high-resolution spatial data, but in particular owing to the large number of distance computations (O(|T|²), with |T| being the number of time steps). Addressing this issue, we present and evaluate different strategies for the progressive computation of similarity information in a time series, as well as an approach for estimating distance information that has not been determined so far. In particular, we investigate and analyze the utility of using neural networks for estimating pairwise distances. On this basis, our approach automatically determines the sampling strategy yielding the best result in combination with trained networks for estimation. We evaluate our approach with a variety of time-dependent 2D and 3D data from simulations and measurements as well as artificially generated data, and compare it against an alternative technique. Finally, we discuss prospects and limitations, and discuss different directions for improvement in future work.
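The progressive idea can be sketched as follows: spend an exact-distance budget on a subset of the |T| × |T| pairs and fill the remaining matrix entries with a cheap estimate. The paper trains neural networks for the estimation step; in this simplified, assumption-laden sketch a nearest-sampled-neighbour fill stands in for them.

```python
# Progressive pairwise distance matrix: exact values for a sampled subset of
# pairs, estimated values (nearest known entry in the same row) for the rest.
import numpy as np

def progressive_distance_matrix(frames, distance, budget):
    """frames: list of arrays; distance: callable(a, b) -> float;
    budget: number of exact pair evaluations to spend (> 0)."""
    n = len(frames)
    D = np.full((n, n), np.nan)
    np.fill_diagonal(D, 0.0)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # Spread the exact evaluations evenly over all pairs.
    step = max(len(pairs) // budget, 1)
    for i, j in pairs[::step][:budget]:
        D[i, j] = D[j, i] = distance(frames[i], frames[j])
    # Estimate missing entries from the closest exactly computed column per row.
    for i in range(n):
        known = np.flatnonzero(~np.isnan(D[i]))
        for j in np.flatnonzero(np.isnan(D[i])):
            D[i, j] = D[i, known[np.abs(known - j).argmin()]]
    return D
```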
S. Frey and T. Ertl, “Progressive Direct Volume-to-Volume Transformation,” IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, Art. no. 1, 2017, doi: 10.1109/TVCG.2016.2599042.
Abstract
We present a novel technique to generate transformations between arbitrary volumes, providing both expressive distances and smooth interpolates. In contrast to conventional morphing or warping approaches, our technique requires no user guidance, intermediate representations (like extracted features), or blending, and imposes no restrictions regarding shape or structure. Our technique operates directly on the volumetric data representation, and while linear programming approaches could solve the underlying problem optimally, their polynomial complexity makes them infeasible for high-resolution volumes. We therefore propose a progressive refinement approach designed for parallel execution that is able to quickly deliver approximate results that are iteratively improved toward the optimum. On this basis, we further present a new approach for the streaming selection of time steps in temporal data that allows for the reconstruction of the full sequence with a user-specified error bound. We finally demonstrate the utility of our technique for different applications, compare our approach against alternatives, and evaluate its characteristics with a variety of different data sets.
V. Bruder, S. Frey, and T. Ertl, “Prediction-Based Load Balancing and Resolution Tuning for Interactive Volume Raycasting,” Visual Informatics, vol. 1, no. 2, Art. no. 2, 2017, doi: 10.1016/j.visinf.2017.09.001.
Abstract
We present an integrated approach for real-time performance prediction of volume raycasting that we employ for load balancing and sampling resolution tuning. In volume rendering, the usage of acceleration techniques such as empty space skipping and early ray termination, among others, can cause significant variations in rendering performance when users adjust the camera configuration or transfer function. These variations in rendering times may result in unpleasant effects such as jerky motions or abruptly reduced responsiveness during interactive exploration. To avoid those effects, we propose an integrated approach to adapt rendering parameters according to performance needs. We assess performance-relevant data on-the-fly, for which we propose a novel technique to estimate the impact of early ray termination. On the basis of this data, we introduce a hybrid model to achieve accurate predictions with minimal computational footprint. Our hybrid model incorporates aspects from analytical performance modeling and machine learning, with the goal to combine their respective strengths. We show the applicability of our prediction model for two different use cases: (1) to dynamically steer the sampling density in object and/or image space and (2) to dynamically distribute the workload among several different parallel computing devices. Our approach allows us to reliably meet performance requirements such as a user-defined frame rate, even in the case of sudden large changes to the transfer function or the camera orientation.
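The resolution-tuning use case boils down to a control loop: after every frame, adjust the ray sampling distance so that the frame time approaches a user-defined target. The sketch below is a deliberately simplified stand-in; it uses the measured frame time directly instead of the hybrid analytical/learned predictor described in the paper, and `render_frame` is an assumed callable.

```python
# Adjust the ray sampling distance to steer frame time toward a target.
def tune_sampling(render_frame, target_ms, step=1.0,
                  step_min=0.5, step_max=4.0, frames=100):
    """render_frame(step) renders one frame with the given sampling distance
    (in voxel units) and returns its frame time in milliseconds."""
    for _ in range(frames):
        frame_ms = render_frame(step)
        # A larger step means fewer samples per ray and thus a faster frame,
        # so scale the step proportionally to the timing error.
        step *= frame_ms / target_ms
        step = min(max(step, step_min), step_max)
    return step
```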
S. Frey and T. Ertl, “Flow-Based Temporal Selection for Interactive Volume Visualization,” Computer Graphics Forum, vol. 36, no. 8, Art. no. 8, 2017, doi: 10.1111/cgf.13070.
Abstract
We present an approach to adaptively select time steps from time-dependent volume data sets for an integrated and comprehensive visualization. This reduced set of time steps not only saves cost, but also allows us to show both the spatial structure and temporal development in one combined rendering. Our selection optimizes the coverage of the complete data on the basis of a minimum-cost flow-based technique to determine meaningful distances between time steps. As both optimal solutions of the involved transport and selection problem are prohibitively expensive, we present new approaches that are significantly faster with only minor deviations. We further propose an adaptive scheme for the progressive incorporation of new time steps. An interactive volume raycaster produces an integrated rendering of the selected time steps, and their computed differences are visualized in a dedicated chart to provide additional temporal similarity information. We illustrate and discuss the utility of our approach by means of different data sets from measurements and simulation.
M. Heinemann, V. Bruder, S. Frey, and T. Ertl, “Power Efficiency of Volume Raycasting on Mobile Devices,” 2017. doi: 10.2312/eurp.20171166.
Abstract
Power efficiency is one of the most important factors for the development of compute-intensive applications in the mobile domain. In this work, we evaluate and discuss the power consumption of a direct volume rendering app based on raycasting on a mobile system. For this, we investigate the influence of a broad set of algorithmic parameters, which are relevant for performance and rendering quality, on the energy usage of the system. Additionally, we compare an OpenCL implementation to a variant using OpenGL. By means of a variety of examples, we demonstrate that numerous factors can have a significant impact on power consumption. In particular, we also discuss the underlying reasons for the respective effects.
G. Tkachev, S. Frey, C. Müller, V. Bruder, and T. Ertl, “Prediction of Distributed Volume Visualization Performance to Support Render Hardware Acquisition,” in Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2017, pp. 11–20. doi: 10.2312/pgv.20171089.
Abstract
We present our data-driven, neural network-based approach to predicting the performance of a distributed GPU volume renderer for supporting cluster equipment acquisition. On the basis of timing measurements from a single cluster as well as from individual GPUs, we are able to predict the performance gain of upgrading an existing cluster with additional or faster GPUs, or even purchasing a new cluster with a comparable network configuration. To achieve this, we employ neural networks to capture complex performance characteristics. However, merely relying on them for the prediction would require the collection of training data on multiple clusters with different hardware, which is impractical in most cases. Therefore, we propose a two-level approach to prediction, distinguishing between node and cluster level. On the node level, we generate performance histograms on individual nodes to capture local rendering performance. These performance histograms are then used to emulate the performance of different rendering hardware for cluster-level measurement runs. Crucially, this variety allows the neural network to capture the compositing performance of a cluster separately from the rendering performance on individual nodes. Therefore, we just need a performance histogram of the GPU of interest to generate a prediction. We demonstrate the utility of our approach using different cluster configurations as well as a range of image and volume resolutions.
S. Frey and T. Ertl, “Auto-Tuning Intermediate Representations for In Situ Visualization,” in Proceedings of the New York Scientific Data Summit (NYSDS), 2016, pp. 1–10. doi: 10.1109/NYSDS.2016.7747807.
Abstract
Advances in high-accuracy measurement techniques and parallel computing systems for simulations lead to a widening gap between the rate at which data is generated and the rate at which it can be transferred and stored. In situ visualization directly tackles this issue by processing (and thereby reducing) data as soon as it is generated. This allows us to create, transmit, and store visualizations at a much higher resolution than would otherwise be possible with traditional approaches. So-called hybrid in situ visualization is a popular variant that transforms data into an intermediate visualization representation of reduced size. These intermediate representations condense the original data by applying visualization techniques, but in contrast to the traditional result of a rendered image, they still preserve some degrees of freedom for live and a posteriori exploration and analysis. However, the involved processing steps require careful configuration, weighing the achieved quality and preserved degrees of freedom against bandwidth and storage resources. To optimize the generation of intermediate representations for hybrid in situ visualization, we present our approach to (1) analyze and quantify the impact of input parameters, and (2) auto-tune them on this basis under the consideration of different constraints. We demonstrate its application and evaluate respective results at the example of Volumetric Depth Images (VDIs), a view-dependent representation for volumetric data. VDIs can quickly and flexibly be generated via a modified volume raycasting procedure that partitions and partially composites samples along view rays. In particular, we study the impact of the respective input parameters on this process w.r.t. the involved quality-space trade-off. We quantify rendering quality via image quality metrics and space requirements via the compressed size of the intermediate representation. On this basis, we then automatically determine the parameter settings that yield the best quality under different constraints. We demonstrate the utility of our approach by means of a variety of different data sets, and show that we optimize the achieved results without having to rely on tedious and time-consuming manual tweaking.
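At its simplest, the auto-tuning step can be read as a constrained search: evaluate a grid of candidate parameter settings, measure quality (e.g. an image metric against a reference) and the compressed size of the resulting representation, and keep the best-quality setting that satisfies the size constraint. The sketch below is generic; the parameter names are placeholders, not the actual VDI parameters studied in the paper.

```python
# Pick the highest-quality parameter setting whose compressed size fits a limit.
import itertools

def autotune(evaluate, size_limit,
             depth_bins=(8, 16, 32), merge_thresholds=(0.01, 0.05, 0.1)):
    """evaluate(params) -> (quality, compressed_size); higher quality is better."""
    best_params, best_quality = None, float("-inf")
    for bins, thr in itertools.product(depth_bins, merge_thresholds):
        params = {"depth_bins": bins, "merge_threshold": thr}
        quality, size = evaluate(params)
        if size <= size_limit and quality > best_quality:
            best_params, best_quality = params, quality
    return best_params, best_quality
```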
C. Schulz et al., “Generative Data Models for Validation and Evaluation of Visualization Techniques,” in Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), 2016, pp. 112–124. doi: 10.1145/2993901.2993907.
Abstract
We argue that there is a need for substantially more research on the use of generative data models in the validation and evaluation of visualization techniques. For example, user studies will require the display of representative and unconfounded visual stimuli, while algorithms will need functional coverage and assessable benchmarks. However, data is often collected in a semi-automatic fashion or entirely hand-picked, which obscures the view of generality, impairs availability, and potentially violates privacy. There are some sub-domains of visualization that use synthetic data in the sense of generative data models, whereas others work with real-world-based data sets and simulations. Depending on the visualization domain, many generative data models are "side projects" as part of an ad-hoc validation of a techniques paper and thus neither reusable nor general-purpose. We review existing work on popular data collections and generative data models in visualization to discuss the opportunities and consequences for technique validation, evaluation, and experiment design. We distill handling and future directions, and discuss how we can engineer generative data models and how visualization research could benefit from more and better use of generative data models.
V. Bruder, S. Frey, and T. Ertl, “Real-Time Performance Prediction and Tuning for Interactive Volume Raycasting,” in Proceedings of the SIGGRAPH Asia Symposium on Visualization, 2016, pp. 1–8. doi: 10.1145/3002151.3002156.
Abstract
We present an integrated approach for the real-time performance prediction and tuning of volume raycasting. The usage of empty space skipping and early ray termination, among others, can induce significant variations in performance when camera configuration and transfer functions are adjusted. For interactive exploration, this can result in various unpleasant effects like abruptly reduced responsiveness or jerky motions. To overcome those effects, we propose an integrated approach to accelerate the rendering and assess performance-relevant data on-the-fly, including a new technique to estimate the impact of early ray termination. On this basis, we introduce a hybrid model to achieve accurate predictions with only minimal computational footprint. Our hybrid model incorporates both aspects from analytical performance modeling and machine learning, with the goal to combine their respective strengths. Using our model, we dynamically steer the sampling density along rays with our automatic tuning technique. This approach allows us to reliably meet performance requirements like a fixed frame rate, even in the case of large sudden changes to the transfer function or the camera. We finally demonstrate the accuracy and utility of our approach by means of a variety of different volume data sets and interaction sequences.
S. Frey, F. Sadlo, and T. Ertl, “Balanced Sampling and Compression for Remote Visualization,” in Proceedings of the SIGGRAPH Asia Symposium on High Performance Computing, 2015, pp. 1–4. doi: 10.1145/2818517.2818529.
Abstract
We present a novel approach for handling sampling and compression in remote visualization in an integrative fashion. As adaptive sampling and compression share the same underlying concepts and criteria, the times spent for visualization and transfer can be balanced directly to optimize the image quality that can be achieved within a prescribed time window. Our dynamic adjustments regarding adaptive sampling, compression, and balancing, employ regression analysis-based error estimation which is carried out individually for each image block of a visualization frame. Our approach is tuned for high parallel efficiency in GPU-based remote visualization. We demonstrate its utility within a prototypical remote volume visualization pipeline by means of different datasets and configurations.