D. Bienroth
et al., “Spatially resolved transcriptomics in immersive environments,”
Visual Computing for Industry, Biomedicine, and Art, vol. 5, no. 1, 2022, doi:
10.1186/s42492-021-00098-6.
Abstract
Spatially resolved transcriptomics is an emerging class of high-throughput technologies that enable biologists to systematically investigate the expression of genes along with spatial information. Upon data acquisition, a major hurdle is the subsequent interpretation and visualization of the datasets. To address this challenge, VR-Cardiomics is presented, a novel data visualization system with interactive functionalities designed to help biologists interpret spatially resolved transcriptomic datasets. By implementing the system in two separate immersive environments, fish tank virtual reality (FTVR) and head-mounted display virtual reality (HMD-VR), biologists can interact with the data in novel ways not previously possible, such as visually exploring the gene expression patterns of an organ and comparing genes based on their 3D expression profiles. Further, a biologist-driven use case is presented in which immersive environments enable biologists to explore and compare the heart expression profiles of different genes.
F. Frieß, M. Becher, G. Reina, and T. Ertl, “Amortised Encoding for Large High-Resolution Displays,” in
2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV), 2021, pp. 53–62. doi:
10.1109/LDAV53230.2021.00013.
Abstract
Both visual detail and a low-latency transfer of image data are required for collaborative exploration of scientific data sets across large high-resolution displays. In this work, we present an approach that reduces the resolution before the encoding and uses temporal upscaling to reconstruct the full resolution image, reducing the overall latency and the required bandwidth without significantly impacting the details perceived by observers. Our approach takes advantage of the fact that humans do not perceive the full details of moving objects by providing a perfect reconstruction for static parts of the image, while non-static parts are reconstructed with a lower quality. This strategy enables a substantial reduction of the encoding latency and the required bandwidth with barely noticeable changes in visual quality, which is crucial for collaborative analysis across display walls at different locations. Additionally, our approach can be combined with other techniques aiming to reduce the required bandwidth while keeping the quality as high as possible, such as foveated encoding. We demonstrate the reduced overall latency and bandwidth requirements, as well as the high image quality, using different visualisations.
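The core idea can be sketched as a sender that downsamples each colour frame before encoding and a receiver that reconstructs full resolution temporally: static regions keep their previously reconstructed full-resolution pixels, while regions that changed fall back to the upscaled low-resolution data. In the minimal Python sketch below, the function names, the plain averaging and nearest-neighbour scaling, and the absolute-difference motion test are illustrative assumptions, not the paper's actual reconstruction scheme.

import numpy as np

def downscale(frame, factor=2):
    """Reduce resolution before encoding by averaging factor x factor blocks (frame: H x W x C)."""
    h, w = frame.shape[:2]
    frame = frame[:h - h % factor, :w - w % factor]
    return frame.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def temporal_reconstruct(low_res, prev_full, factor=2, threshold=8.0):
    """Rebuild full resolution: keep previous full-res pixels where the image
    is static, fall back to naive upscaling where content changed."""
    upscaled = low_res.repeat(factor, axis=0).repeat(factor, axis=1)
    diff = np.abs(upscaled.astype(np.float32) - prev_full.astype(np.float32))
    moved = diff.mean(axis=-1) > threshold      # crude per-pixel motion test
    result = prev_full.copy()
    result[moved] = upscaled[moved]             # moving parts: lower quality
    return result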
K. Klein, D. Garkov, S. Rütschlin, T. Böttcher, and F. Schreiber, “QSDB—a graphical Quorum Sensing Database,”
Database, vol. 2021, Art. no. baab058, Nov. 2021, doi:
10.1093/database/baab058.
Abstract
The human microbiome is largely shaped by the chemical interactions of its microbial members, which include cross-talk via shared signals or quenching of the signalling of other species. Quorum sensing is a process that allows microbes to coordinate their behaviour depending on their population density and to adjust gene expression accordingly. We present the Quorum Sensing Database (QSDB), a comprehensive database of all published sensing and quenching relations between organisms and signalling molecules of the human microbiome, as well as an interactive web interface that allows browsing the database, provides graphical depictions of sensing mechanisms as Systems Biology Graphical Notation diagrams, and links to other databases. Database URL: QSDB (Quorum Sensing DataBase) is freely available via an interactive web interface and as a downloadable CSV file at http://qsdb.org.
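As a hypothetical example of working with the downloadable export, the short Python sketch below filters the CSV for the sensing relations of one signalling molecule. The file name and the column names ("organism", "molecule", "relation") are assumptions for illustration only; the actual schema is defined by the export at http://qsdb.org.

import pandas as pd

# Load the CSV export downloaded from http://qsdb.org (hypothetical schema).
df = pd.read_csv("qsdb.csv")
# List all organisms reported to sense a given signalling molecule.
sensing = df[(df["relation"] == "sensing") & (df["molecule"] == "AI-2")]
print(sensing["organism"].drop_duplicates())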
V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,”
IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 9, pp. 2848–2862, Sep. 2020, doi:
10.1109/TVCG.2019.2898435.
Abstract
As our field matures, evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters, such as different data sets, camera paths, viewport sizes, and GPUs, is investigated, which makes comparison with other techniques or generalization to other parameter ranges at least questionable. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports on our insights from a statistical analysis of this data, discussing independent and linear parameter behavior as well as non-obvious effects. We give recommendations for best practices when evaluating the runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification.
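A minimal sketch of the kind of systematic sweep the paper advocates: measure repeated frame times over the full cross product of data sets, viewport sizes, and camera positions rather than a single fixed configuration. Here `render_frame` is a hypothetical stand-in for the technique under test, and the harness itself is an assumption, not the paper's framework.

import itertools
import statistics
import time

def sweep(render_frame, datasets, viewports, camera_path, repetitions=5):
    """Measure frame times over the full cross product of parameters."""
    results = []
    for dataset, viewport, camera in itertools.product(datasets, viewports, camera_path):
        times = []
        for _ in range(repetitions):   # repeat to separate noise from real effects
            start = time.perf_counter()
            render_frame(dataset, viewport, camera)
            times.append(time.perf_counter() - start)
        results.append({"dataset": dataset, "viewport": viewport, "camera": camera,
                        "median_ms": 1000 * statistics.median(times)})
    return results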
F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,”
IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, 2020, doi:
10.1109/TVCG.2020.3030445.
Abstract
Collaborative exploration of scientific data sets across large high-resolution displays requires both high visual detail as well as low-latency transfer of image data (oftentimes inducing the need to trade one for the other). In this work, we present a system that dynamically adapts the encoding quality in such systems in a way that reduces the required bandwidth without impacting the details perceived by one or more observers. Humans perceive sharp, colourful details in the small foveal region around the centre of the field of view, while information in the periphery is perceived blurred and colourless. We account for this by tracking the gaze of observers and adapting the quality parameter of each macroblock used by the H.264 encoder accordingly, considering the so-called visual acuity fall-off. This allows us to substantially reduce the required bandwidth with barely noticeable changes in visual quality, which is crucial for collaborative analysis across display walls at different locations. We demonstrate the reduced overall required bandwidth and the high quality inside the foveated regions using particle rendering and parallel coordinates.
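The sketch below illustrates the principle of gaze-dependent quality control: each 16x16 macroblock receives an H.264 quantisation parameter (QP) that grows with its angular distance from the tracked gaze point, mimicking the acuity fall-off. The linear fall-off model and all constants are illustrative assumptions rather than the parameters used in the paper.

import math

def qp_for_macroblock(mb_x, mb_y, gaze_px, px_per_degree=40.0,
                      qp_min=18, qp_max=42,
                      foveal_radius_deg=2.0, falloff_per_deg=2.5):
    """Map a macroblock's angular distance from the gaze point to an H.264 QP."""
    cx, cy = mb_x * 16 + 8, mb_y * 16 + 8          # macroblock centre in pixels
    dist_deg = math.hypot(cx - gaze_px[0], cy - gaze_px[1]) / px_per_degree
    if dist_deg <= foveal_radius_deg:
        return qp_min                               # full quality inside the fovea
    extra = (dist_deg - foveal_radius_deg) * falloff_per_deg
    return min(qp_max, round(qp_min + extra))       # lower quality (higher QP) outside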
F. Frieß, C. Müller, and T. Ertl, “Real-Time High-Resolution Visualisation,” in
Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV), 2020, pp. 127–135. doi:
10.2312/vmv.20201195.
Abstract
While visualisation often strives for abstraction, the interactive exploration of large scientific data sets like densely sampled 3D fields or massive particle data sets still benefits from rendering their graphical representation in large detail on high-resolution displays such as Powerwalls or tiled display walls driven by multiple GPUs or even GPU clusters. Such visualisation systems are typically rather unique in their setup of hardware and software, which makes transferring a visualisation application from one high-resolution system to another a complicated task. As more and more such visualisation systems get installed, collaboration becomes desirable in the sense of sharing a visualisation running on one site in real time with another high-resolution display at a remote site while at the same time communicating via video and audio. Since typical video conference solutions or web-based collaboration tools often cannot deal with resolutions exceeding 4K, with stereo displays, or with multi-GPU setups, we designed and implemented a new system based on state-of-the-art hardware and software technologies to transmit high-resolution visualisations including video and audio streams via the internet to remote large displays and back. Our system architecture is built on efficient capturing, encoding, and transmission of pixel streams and thus supports a multitude of configurations combining audio and video streams in a generic approach.
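Structurally, such a pipeline decouples capture, encoding, and transmission so the renderer is never blocked by the network; the Python sketch below shows this shape with a bounded queue and length-prefixed packets. `grab_framebuffer` and `h264_encode` are hypothetical stand-ins for the GPU capture and hardware encoder stages, not the actual system's API.

import queue
import socket
import struct
import threading

frames = queue.Queue(maxsize=4)  # bounded: prefer dropping frames over lagging

def capture_loop(grab_framebuffer):
    """Grab raw frames from the GPU; never block on a slow network."""
    while True:
        frame = grab_framebuffer()
        try:
            frames.put_nowait(frame)
        except queue.Full:
            pass  # drop the frame to keep latency low

def stream_loop(h264_encode, host, port):
    """Encode queued frames and send them as length-prefixed packets."""
    sock = socket.create_connection((host, port))
    while True:
        packet = h264_encode(frames.get())
        # Length prefix lets the receiver reframe the TCP byte stream.
        sock.sendall(struct.pack("!I", len(packet)) + packet)

def run(grab_framebuffer, h264_encode, host, port):
    threading.Thread(target=capture_loop, args=(grab_framebuffer,),
                     daemon=True).start()
    stream_loop(h264_encode, host, port)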
C. Müller, M. Braun, and T. Ertl, “Optimised Molecular Graphics on the HoloLens,” in
IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019, Osaka, Japan, March 23-27, 2019, pp. 97–102. doi:
10.1109/VR.2019.8798111.
Abstract
The advent of modern and affordable augmented reality headsets like the Microsoft HoloLens has sparked new interest in using virtual and augmented reality technology in the analysis of molecular data. For all visualisation in immersive, mixed-reality scenarios, a sufficiently high rendering speed is an important factor, yet fully untethered devices offer only limited processing power for handling computationally expensive visualisations. Recent research shows that the space-filling model of even small data sets from the Protein Data Bank (PDB) cannot be rendered at desirable frame rates on the HoloLens. In this work, we report on how to improve the rendering speed of atom-based visualisation of proteins and how the rendering of more abstract representations of the molecules compares against it. We complement our findings with in-depth GPU and CPU performance numbers.
F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in
Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), 2018, pp. 87–91. doi:
10.1109/LDAV.2018.8739215.
Abstract
We present an approach that dynamically adapts encoder settings for image tiles to yield the best possible quality for a given bandwidth. This reduces the overall size of the image while preserving details. Our application determines the encoding settings in two steps. In the first step, we predict the quality and size of the tiles for different encoding settings using a convolutional neural network. In the second step, we assign the optimal encoder setting to each tile, so that the overall size of the image is lower than a predetermined threshold. Commonly, for tiles that contain complicated structures, a high quality setting is used in order to prevent major information loss, while quality settings are lowered for others to keep the size below the threshold. We demonstrate that we can reduce the overall size of the image while preserving the details in areas of interest using the example of both particle and volume visualisation applications.
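The second step amounts to a budgeted assignment problem. A plausible greedy sketch, assuming `predictions[tile]` lists `(size, quality)` pairs from the first step's CNN ordered from highest to lowest quality: start every tile at the highest setting and repeatedly downgrade the tile that loses the least predicted quality per byte saved, until the total predicted size fits the threshold. The greedy rule is an illustrative assumption, not necessarily the paper's optimisation.

def assign_settings(predictions, max_total_size):
    """predictions[tile] = [(size, quality), ...], ordered best to worst quality."""
    chosen = {tile: 0 for tile in predictions}      # start at the highest quality
    total = sum(options[0][0] for options in predictions.values())
    while total > max_total_size:
        best_tile, best_ratio = None, float("inf")
        for tile, s in chosen.items():
            if s + 1 >= len(predictions[tile]):
                continue                            # already at the lowest setting
            size_now, qual_now = predictions[tile][s]
            size_next, qual_next = predictions[tile][s + 1]
            saved = size_now - size_next
            if saved <= 0:
                continue
            ratio = (qual_now - qual_next) / saved  # quality lost per byte saved
            if ratio < best_ratio:
                best_tile, best_ratio = tile, ratio
        if best_tile is None:
            break                                   # budget cannot be met
        s = chosen[best_tile]
        total -= predictions[best_tile][s][0] - predictions[best_tile][s + 1][0]
        chosen[best_tile] = s + 1
    return chosen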