Project Collaboration Infrastructure (INF) supports the research projects across the consortium, primarily in the areas of research data management (RDM) and data curation, shared resources, and virtual collaboration. RDM is essential for repeatable research, and even more so for state-of-the-art research projects and consortia, where infrastructure and coordination on a large scale are needed. Providing reliable storage and open access supports long-term outreach, open science, and trust in scientific results. Shared resources (e.g., hardware and software) allow research groups to work with a large variety of advanced devices often not affordable for a single group, while sharing information (e.g., high-quality streaming of events) supports virtual collaboration and doctoral training across sites.
Fig. 1: Talk at the Powerwall at the University of Konstanz
A. Niarakis et al., “Addressing barriers in comprehensiveness, accessibility, reusability, interoperability and reproducibility of computational models in systems biology,” Briefings in Bioinformatics, vol. 23, no. 4, Art. no. 4, 2022, doi: 10.1093/bib/bbac212.
Computational models are often employed in systems biology to study the dynamic behaviours of complex systems. With the rise in the number of computational models, finding ways to improve the reusability of these models and their ability to reproduce virtual experiments becomes critical. Correct and effective model annotation in community-supported and standardised formats is necessary for this improvement. Here, we present recent efforts toward a common framework for annotated, accessible, reproducible and interoperable computational models in biology, and discuss key challenges of the field.
D. Garkov, C. Müller, M. Braun, D. Weiskopf, and F. Schreiber, “Research Data Curation in Visualization: Position Paper,” in Proceedings of the Ninth Workshop on Evaluation and BEyond - methodoLogIcal approaches for Visualization (BELIV), 2022.
Research data curation is the act of carefully preparing research data and artifacts for sharing and long-term preservation. Research data management is centrally implemented and formally defined in a data management plan to enable data curation. In tandem, data curation and management facilitate research repeatability. In contrast to other research fields, data curation and management in visualization are not yet part of the researcher’s compendium. In this position paper, we discuss the unique challenges visualization faces and propose how data curation can be practically realized. We share eight lessons learned in managing data in two large research consortia, outline the larger curation workflow, and define the typical roles. We complement our lessons with minimum criteria for selecting a suitable data repository and five challenging scenarios that occur in practice. We conclude with a vision of how the visualization research community can pave the way for new curation standards.
C. Müller, M. Heinemann, D. Weiskopf, and T. Ertl, “Power Overwhelming: Quantifying the Energy Cost of Visualisation,” 2022.
Modern machines continuously log status reports over long periods of time, which are valuable data for optimizing working routines. Data visualization is a commonly used tool to gain insights into these data, mostly in retrospect, e.g., to determine causal dependencies between faults of different machines. We present an approach to bring such visual analyses to the shop floor to support reacting to faults in real time. This approach combines spatio-temporal analyses of time series using a handheld touch device with augmented reality for live monitoring. Important information augments machines directly in their real-world context, and detailed logs of current and historical events are displayed on the handheld device. In collaboration with an industry partner, we designed and tested our approach on a live production line to obtain feedback from operators. We compare our approach for monitoring and analysis with existing solutions that are currently deployed.
Spatially resolved transcriptomics is an emerging class of high-throughput technologies that enable biologists to systematically investigate the expression of genes along with spatial information. Upon data acquisition, one major hurdle is the subsequent interpretation and visualization of the datasets acquired. To address this challenge, VR-Cardiomics is presented, a novel data visualization system with interactive functionalities designed to help biologists interpret spatially resolved transcriptomic datasets. By implementing the system in two separate immersive environments, fish tank virtual reality (FTVR) and head-mounted display virtual reality (HMD-VR), biologists can interact with the data in novel ways not previously possible, such as visually exploring the gene expression patterns of an organ and comparing genes based on their 3D expression profiles. Further, a biologist-driven use case is presented in which immersive environments enable biologists to explore and compare the heart expression profiles of different genes.
Simulations of cosmic evolution are a means to explain the formation of the universe as we see it today. The resulting data of such simulations comprise numerous physical quantities, which turns their analysis into a complex task. Here, we analyze such high-dimensional and time-varying particle data using various visualization techniques from the fields of particle visualization, flow visualization, volume visualization, and information visualization. Our approach employs specialized filters to extract and highlight the development of so-called active galactic nuclei and filament structures formed by the particles. Additionally, we calculate X-ray emission of the evolving structures in a preprocessing step to complement visual analysis. Our approach is integrated into a single visual analytics framework to allow for analysis of star formation at interactive frame rates. Finally, we lay out the methodological aspects of our work that led to success at the 2019 IEEE SciVis Contest.
F. Frieß, M. Becher, G. Reina, and T. Ertl, “Amortised Encoding for Large High-Resolution Displays,” in 2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV), 2021, pp. 53–62. doi: 10.1109/LDAV53230.2021.00013.
Both visual detail and a low-latency transfer of image data are required for collaborative exploration of scientific data sets across large high-resolution displays. In this work, we present an approach that reduces the resolution before the encoding and uses temporal upscaling to reconstruct the full resolution image, reducing the overall latency and the required bandwidth without significantly impacting the details perceived by observers. Our approach takes advantage of the fact that humans do not perceive the full details of moving objects by providing a perfect reconstruction for static parts of the image, while non-static parts are reconstructed with a lower quality. This strategy enables a substantial reduction of the encoding latency and the required bandwidth with barely noticeable changes in visual quality, which is crucial for collaborative analysis across display walls at different locations. Additionally, our approach can be combined with other techniques aiming to reduce the required bandwidth while keeping the quality as high as possible, such as foveated encoding. We demonstrate the reduced overall latency, the required bandwidth, as well as the high image quality using different visualisations.
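The following minimal sketch illustrates one possible form of such amortised reconstruction; it is our own simplified assumption, not the implementation from the paper. Only every other pixel column is transmitted per frame at reduced resolution, static regions of the missing columns are completed from the previous full frame, and moving regions fall back to the lower-quality current samples.

```python
# Illustrative sketch (assumptions, not the paper's implementation) of
# temporal reconstruction from alternating half-resolution frames.
import numpy as np

def reconstruct(prev_full, current_half, phase, motion_mask):
    """prev_full: (H, W) previously reconstructed frame.
    current_half: (H, W//2) newly received pixel columns (even columns if
    phase == 0, odd columns if phase == 1).
    motion_mask: (H, W) boolean array, True where the content moved."""
    full = prev_full.copy()
    full[:, phase::2] = current_half          # insert the freshly received columns
    # Missing columns: static pixels keep the value from the previous frame,
    # moving pixels are filled from the neighbouring new column instead of
    # reusing stale data (lower quality, but no ghosting).
    missing = 1 - phase
    stale = motion_mask[:, missing::2]
    full[:, missing::2][stale] = current_half[stale]
    return full

# Example: alternate the transmitted columns between frames.
frame0 = np.zeros((1080, 1920), dtype=np.float32)
received = np.random.rand(1080, 960).astype(np.float32)
moved = np.zeros((1080, 1920), dtype=bool)
frame1 = reconstruct(frame0, received, phase=0, motion_mask=moved)
```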
F. Frieß, M. Braun, V. Bruder, S. Frey, G. Reina, and T. Ertl, “Foveated Encoding for Large High-Resolution Displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, Art. no. 2, 2021, doi: 10.1109/TVCG.2020.3030445.
Collaborative exploration of scientific data sets across large high-resolution displays requires both high visual detail as well as low-latency transfer of image data (oftentimes inducing the need to trade one for the other). In this work, we present a system that dynamically adapts the encoding quality in such systems in a way that reduces the required bandwidth without impacting the details perceived by one or more observers. Humans perceive sharp, colourful details in the small foveal region around the centre of the field of view, while information in the periphery is perceived blurred and colourless. We account for this by tracking the gaze of observers, and respectively adapting the quality parameter of each macroblock used by the H.264 encoder, considering the so-called visual acuity fall-off. This allows us to substantially reduce the required bandwidth with barely noticeable changes in visual quality, which is crucial for collaborative analysis across display walls at different locations. We demonstrate the reduced overall required bandwidth and the high quality inside the foveated regions using particle rendering and parallel coordinates.
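To make the idea concrete, the sketch below derives a per-macroblock quantisation parameter (QP) from the distance of each macroblock to the tracked gaze point. The linear fall-off, the radii, and the QP range are simplifying assumptions of ours, not the calibrated acuity model or encoder integration from the paper.

```python
# Illustrative sketch: assign an H.264 QP per 16x16 macroblock so that quality
# is highest near the gaze point and degrades towards the periphery.
import math

def foveated_qp_map(width, height, gaze_x, gaze_y,
                    qp_min=18, qp_max=40, fovea_radius=200.0, falloff=800.0):
    """Return a QP per macroblock: low QP (high quality) around the gaze point,
    increasing towards the periphery (all parameters are assumed values)."""
    mb_cols = (width + 15) // 16
    mb_rows = (height + 15) // 16
    qp_map = [[qp_max] * mb_cols for _ in range(mb_rows)]
    for row in range(mb_rows):
        for col in range(mb_cols):
            cx, cy = col * 16 + 8, row * 16 + 8          # macroblock centre
            dist = math.hypot(cx - gaze_x, cy - gaze_y)
            # Full quality inside the foveal region, linear fall-off outside.
            t = min(max((dist - fovea_radius) / falloff, 0.0), 1.0)
            qp_map[row][col] = round(qp_min + t * (qp_max - qp_min))
    return qp_map

# Example: one observer looking at the centre of a 4K tile.
qp = foveated_qp_map(3840, 2160, gaze_x=1920, gaze_y=1080)
```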
The human microbiome is largely shaped by the chemical interactions of its microbial members, which include cross-talk via shared signals or quenching of the signalling of other species. Quorum sensing is a process that allows microbes to coordinate their behaviour depending on their population density and to adjust gene expression accordingly. We present the Quorum Sensing Database (QSDB), a comprehensive database of all published sensing and quenching relations between organisms and signalling molecules of the human microbiome, as well as an interactive web interface that allows browsing the database, provides graphical depictions of sensing mechanisms as Systems Biology Graphical Notation diagrams, and links to other databases. Database URL: QSDB (Quorum Sensing DataBase) is freely available via an interactive web interface and as a downloadable CSV file at http://qsdb.org.
V. Bruder, C. Müller, S. Frey, and T. Ertl, “On Evaluating Runtime Performance of Interactive Visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, pp. 2848–2862, Sep. 2020, doi: 10.1109/TVCG.2019.2898435.
As our field matures, evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters like different data sets, camera paths, viewport sizes, and GPUs is investigated, which makes comparison with other techniques or generalization to other parameter ranges at least questionable. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports on our insights from statistical analysis of this data, discussing independent and linear parameter behavior and non-obvious effects. We give recommendations for best practices when evaluating runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification.
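As an illustration of the kind of systematic sweep such an evaluation requires (our own sketch, not the authors' framework), the following harness crosses data sets, camera paths, and viewport sizes and logs per-frame timings for later statistical analysis. The renderer is passed in as a callback, and all parameter values shown are placeholders.

```python
# Minimal parameter-sweep harness: cross all factors, time each frame, and
# write one row per measurement for later statistical analysis.
import csv
import itertools
import time

def run_sweep(render_frame, datasets, camera_paths, viewports,
              frames=100, out_path="measurements.csv"):
    """render_frame(dataset, camera_path, frame_index, viewport) is a
    user-supplied callback that renders a single frame."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["dataset", "camera_path", "viewport", "frame", "ms"])
        for ds, cam, vp in itertools.product(datasets, camera_paths, viewports):
            for frame in range(frames):
                start = time.perf_counter()
                render_frame(ds, cam, frame, vp)
                ms = (time.perf_counter() - start) * 1e3
                writer.writerow([ds, cam, f"{vp[0]}x{vp[1]}", frame, ms])

# Example call with a dummy renderer standing in for the real visualisation:
run_sweep(lambda ds, cam, frame, vp: time.sleep(0.001),
          datasets=["volume_a", "volume_b"],
          camera_paths=["orbit", "zoom"],
          viewports=[(1920, 1080), (3840, 2160)])
```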
F. Frieß, C. Müller, and T. Ertl, “Real-Time High-Resolution Visualisation,” in Proceedings of the Eurographics Symposium on Vision, Modeling, and Visualization (VMV), 2020, pp. 127–135. doi: 10.2312/vmv.20201195.
While visualisation often strives for abstraction, the interactive exploration of large scientific data sets like densely sampled 3D fields or massive particle data sets still benefits from rendering their graphical representation in large detail on high-resolution displays such as Powerwalls or tiled display walls driven by multiple GPUs or even GPU clusters. Such visualisation systems are typically rather unique in their setup of hardware and software, which makes transferring a visualisation application from one high-resolution system to another one a complicated task. As more and more such visualisation systems get installed, collaboration becomes desirable in the sense of sharing such a visualisation running on one site in real time with another high-resolution display on a remote site while at the same time communicating via video and audio. Since typical video conference solutions or web-based collaboration tools often cannot deal with resolutions exceeding 4K, with stereo displays or with multi-GPU setups, we designed and implemented a new system based on state-of-the-art hardware and software technologies to transmit high-resolution visualisations including video and audio streams via the internet to remote large displays and back. Our system architecture is built on efficient capturing, encoding and transmission of pixel streams and thus supports a multitude of configurations combining audio and video streams in a generic approach.
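A schematic sketch of the general capture, encode, and transmit pattern underlying such pixel-stream pipelines is given below. It is a simplified assumption of ours rather than the system's actual architecture; the grab_frame and encode callables stand in for GPU framebuffer capture and hardware video encoding.

```python
# Schematic capture -> encode -> transmit pipeline: each stage runs in its own
# thread and hands its result to the next stage through a bounded queue.
import queue
import socket
import struct
import threading

def stream(grab_frame, encode, host, port, stop_event):
    """grab_frame() returns raw pixels, encode(pixels) returns a compressed
    bitstream chunk (bytes); both are placeholders for real capture/encoding."""
    raw = queue.Queue(maxsize=4)
    packets = queue.Queue(maxsize=4)

    def capture():
        while not stop_event.is_set():
            raw.put(grab_frame())                  # raw pixels from the framebuffer

    def encoder():
        while not stop_event.is_set():
            packets.put(encode(raw.get()))         # compressed bitstream chunk

    def sender():
        with socket.create_connection((host, port)) as sock:
            while not stop_event.is_set():
                data = packets.get()
                sock.sendall(struct.pack("!I", len(data)) + data)   # length-prefixed

    threads = [threading.Thread(target=t, daemon=True)
               for t in (capture, encoder, sender)]
    for t in threads:
        t.start()
    return threads
```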
C. Müller, M. Braun, and T. Ertl, “Optimised Molecular Graphics on the HoloLens,” in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 97–102. doi: 10.1109/VR.2019.8798111.
The advent of modern and affordable augmented reality headsets like Microsoft HoloLens has sparked new interest in using virtual and augmented reality technology in the analysis of molecular data. As for all visualisation in immersive, mixed-reality scenarios, a sufficiently high rendering speed is an important factor, which raises the issue that fully untethered devices offer only limited processing power for handling computationally expensive visualisations. Recent research shows that the space-filling model of even small data sets from the Protein Data Bank (PDB) cannot be rendered at desirable frame rates on the HoloLens. In this work, we report on how to improve the rendering speed of atom-based visualisation of proteins and how the rendering of more abstract representations of the molecules compares against it. We complement our findings with in-depth GPU and CPU performance numbers.
F. Frieß, M. Landwehr, V. Bruder, S. Frey, and T. Ertl, “Adaptive Encoder Settings for Interactive Remote Visualisation on High-Resolution Displays,” in Proceedings of the IEEE Symposium on Large Data Analysis and Visualization - Short Papers (LDAV), 2018, pp. 87–91. doi: 10.1109/LDAV.2018.8739215.
We present an approach that dynamically adapts encoder settings for image tiles to yield the best possible quality for a given bandwidth. This reduces the overall size of the image while preserving details. Our application determines the encoding settings in two steps. In the first step, we predict the quality and size of the tiles for different encoding settings using a convolutional neural network. In the second step, we assign the optimal encoder setting to each tile, so that the overall size of the image is lower than a predetermined threshold. Commonly, for tiles that contain complicated structures, a high quality setting is used in order to prevent major information loss, while quality settings are lowered for others to keep the size below the threshold. We demonstrate that we can reduce the overall size of the image while preserving the details in areas of interest using the example of both particle and volume visualisation applications.
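One possible realisation of the second step (our assumption, not necessarily the paper's exact assignment algorithm) is a greedy scheme: start every tile at the highest-quality setting and repeatedly downgrade the tile whose downgrade costs the least quality per byte saved, until the predicted total size fits the bandwidth budget.

```python
# Illustrative greedy assignment of encoder settings per tile under a size budget.
def assign_settings(predictions, budget):
    """predictions[tile] = list of (size_bytes, quality) pairs, ordered from the
    highest-quality setting to the lowest. Returns the chosen setting index per tile."""
    choice = {tile: 0 for tile in predictions}               # start at best quality
    total = sum(options[0][0] for options in predictions.values())
    while total > budget:
        best_tile, best_cost = None, float("inf")
        for tile, idx in choice.items():
            if idx + 1 >= len(predictions[tile]):
                continue                                      # already at lowest setting
            size_now, q_now = predictions[tile][idx]
            size_next, q_next = predictions[tile][idx + 1]
            saved = size_now - size_next
            if saved <= 0:
                continue                                      # downgrade saves nothing
            cost = (q_now - q_next) / saved                   # quality lost per byte saved
            if cost < best_cost:
                best_tile, best_cost = tile, cost
        if best_tile is None:
            break                                             # budget cannot be met
        idx = choice[best_tile]
        total -= predictions[best_tile][idx][0] - predictions[best_tile][idx + 1][0]
        choice[best_tile] = idx + 1
    return choice

# Example with two tiles and three encoder settings each (sizes in bytes):
settings = {
    "tile_0": [(9000, 0.95), (6000, 0.90), (3000, 0.70)],    # complex structures
    "tile_1": [(4000, 0.95), (2500, 0.92), (1500, 0.85)],    # mostly background
}
print(assign_settings(settings, budget=10000))
```

In this sketch, tiles predicted to contain complicated structures tend to keep their high-quality settings because downgrading them is comparatively expensive in quality per byte, matching the behaviour described in the abstract.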