T. Ge
et al., “Optimally Ordered Orthogonal Neighbor Joining Trees for Hierarchical Cluster Analysis,”
IEEE Transactions on Visualization and Computer Graphics, pp. 1–13, 2023, doi:
10.1109/TVCG.2023.3284499.
Abstract
We propose to use optimally ordered orthogonal neighbor-joining (O³NJ) trees as a new way to visually explore cluster structures and outliers in multi-dimensional data. Neighbor-joining (NJ) trees are widely used in biology, and their visual representation is similar to that of dendrograms. The core difference from dendrograms, however, is that NJ trees correctly encode distances between data points, resulting in trees with varying edge lengths. We optimize NJ trees for their use in visual analysis in two ways. First, we propose a novel leaf sorting algorithm that helps users to better interpret adjacencies and proximities within such a tree. Second, we provide a new method to visually distill the cluster tree from an ordered NJ tree. Numerical evaluation and three case studies illustrate the benefits of this approach for exploring multi-dimensional data in areas such as biology or image analysis.
F. Petersen, B. Goldluecke, C. Borgelt, and O. Deussen, “GenDR: A Generalized Differentiable Renderer,” in
Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 3992–4001. doi:
10.1109/CVPR52688.2022.00397.
Abstract
In this work, we present and study a generalized family of differentiable renderers. We discuss from scratch which components are necessary for differentiable rendering and formalize the requirements for each component. We instantiate our general differentiable renderer, which generalizes existing differentiable renderers like SoftRas and DIB-R, with an array of different smoothing distributions to cover a large spectrum of reasonable settings. We evaluate an array of differentiable renderer instantiations on the popular ShapeNet 3D reconstruction benchmark and analyze the implications of our results. Surprisingly, the simple uniform distribution yields the best overall results when averaged over 13 classes; in general, however, the optimal choice of distribution heavily depends on the task.
F. Petersen, B. Goldluecke, O. Deussen, and H. Kuehne, “Style Agnostic 3D Reconstruction via Adversarial Style Transfer,” in
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), IEEE, Jan. 2022, pp. 2273–2282. doi:
10.1109/WACV51458.2022.00233.
Abstract
Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision. Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image. This can be scene information or constraints such as object silhouettes, uniform backgrounds, material, texture, and lighting. In this paper, we propose an approach that enables a differentiable rendering-based learning of 3D objects from images with backgrounds without the need for silhouette supervision. Instead of trying to render an image close to the input, we propose an adversarial style-transfer and domain adaptation pipeline that allows us to translate the input image domain to the rendered image domain. This allows us to directly compare a translated image with the differentiable rendering of a 3D object reconstruction in order to train the 3D object reconstruction network. We show that the approach learns 3D geometry from images with backgrounds and provides better performance than constrained methods for single-view 3D object reconstruction on this task.
K. Lu
et al., “Palettailor: Discriminable Colorization for Categorical Data,”
IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, 2021, doi:
10.1109/TVCG.2020.3030406.
Abstract
We present an integrated approach for creating and assigning color palettes to different visualizations such as multi-class scatterplots, line, and bar charts. While other methods separate the creation of colors from their assignment, our approach takes data characteristics into account to produce color palettes, which are then assigned in a way that fosters better visual discrimination of classes. To do so, we use a customized optimization based on simulated annealing to maximize the combination of three carefully designed color scoring functions: point distinctness, name difference, and color discrimination. We compare our approach to state-of-the-art palettes with a controlled user study for scatterplots and line charts; furthermore, we performed a case study. Our results show that Palettailor, as a fully-automated approach, generates color palettes with a higher discrimination quality than existing approaches. The efficiency of our optimization also allows us to incorporate user modifications into the color selection process.
C. Schulz
et al., “Multi-Class Inverted Stippling,”
ACM Transactions on Graphics, vol. 40, no. 6, Dec. 2021, doi:
10.1145/3478513.3480534.
Abstract
We introduce inverted stippling, a method to mimic an inversion technique used by artists when performing stippling. To this end, we extend Linde-Buzo-Gray (LBG) stippling to multi-class LBG (MLBG) stippling with multiple layers. MLBG stippling couples the layers stochastically to optimize for per-layer and overall blue-noise properties. We propose a stipple-based filling method to generate solid color backgrounds for inverting areas. Our experiments demonstrate the effectiveness of MLBG in terms of reducing overlapping and intensity accuracy. In addition, we showcase MLBG with color stippling and dynamic multi-class blue-noise sampling, which is possible due to its support for temporal coherence.
K. C. Kwan and H. Fu, “Automatic Image Checkpoint Selection for Guider-Follower Pedestrian Navigation,”
Computer Graphics Forum, vol. 40, no. 1, 2021, doi:
10.1111/cgf.14192.
Abstract
In recent years, guider-follower approaches have shown promise as a solution to the challenging problem of last-mile or indoor pedestrian navigation without micro-maps or indoor floor plans for path planning. However, the success of such guider-follower approaches is highly dependent on a set of manually and carefully chosen image or video checkpoints. This selection process is tedious and error-prone. To address this issue, we first conduct a pilot study to understand how users as guiders select critical checkpoints from a video recorded while walking along a route, leading to a set of criteria for automatic checkpoint selection. Using these criteria, including visibility, stairs, and clearness, we then implement this automation process. The key behind our technique is a lightweight, effective algorithm using left-hand-side and right-hand-side objects for path occlusion detection, which benefits both automatic checkpoint selection and occlusion-aware path annotation on selected image checkpoints. Our experimental results show that our automatic checkpoint selection method works well in different navigation scenarios. The quality of automatically selected checkpoints is comparable to that of manually selected ones and higher than that of checkpoints chosen by alternative automatic methods.
Y. Chen, K. C. Kwan, L.-Y. Wei, and H. Fu, “Autocomplete Repetitive Stroking with Image Guidance,” in
SIGGRAPH Asia 2021 Technical Communications, Tokyo, Japan: Association for Computing Machinery, 2021. doi:
10.1145/3478512.3488595.
Abstract
Image-guided drawing can compensate for the lack of skills but often requires a significant number of repetitive strokes to create textures. Existing automatic stroke synthesis methods are usually limited to predefined styles or require indirect manipulation that may break the spontaneous flow of drawing. We present a method to autocomplete repetitive short strokes during users’ normal drawing process. Users can draw over a reference image as usual. At the same time, our system silently analyzes the input strokes and the reference to infer strokes that follow users’ input style when certain repetition is detected. Our key idea is to jointly analyze image regions and operation history for detecting and predicting repetitions. The proposed system can reduce tedious repetitive inputs while being fully under user control.
C. Bu
et al., “SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects,”
IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, 2021, doi:
10.1109/TVCG.2020.3030404.
Abstract
In this paper, we propose SineStream, a new variant of streamgraphs that improves their readability by minimizing sine illusion effects. Such effects reflect the tendency of humans to take the orthogonal rather than the vertical distance between two curves as their distance. In SineStream, we connect the readability of streamgraphs with minimizing sine illusions and by doing so provide a perceptual foundation for their design. As the geometry of a streamgraph is controlled by its baseline (the bottom-most curve) and the ordering of the layers, we re-interpret baseline computation and layer ordering algorithms in terms of reducing sine illusion effects. For baseline computation, we improve previous methods by introducing a Gaussian weight to penalize layers with large thickness changes. For layer ordering, three design requirements are proposed and implemented through a hierarchical clustering algorithm. Quantitative experiments and user studies demonstrate that SineStream improves the readability and aesthetics of streamgraphs compared to state-of-the-art methods.
Y. Wang
et al., “Improving the Robustness of Scagnostics,”
IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, 2019, doi:
10.1109/TVCG.2019.2934796.
Abstract
In this paper, we examine the robustness of scagnostics through a series of theoretical and empirical studies. First, we investigate the sensitivity of scagnostics by employing perturbing operations on more than 60M synthetic and real-world scatterplots. We found that two scagnostic measures, Outlying and Clumpy, are overly sensitive to data binning. To understand how these measures align with human judgments of visual features, we conducted a study with 24 participants, which reveals that i) humans are not sensitive to small perturbations of the data that cause large changes in both measures, and ii) the perception of clumpiness heavily depends on per-cluster topologies and structures. Motivated by these results, we propose Robust Scagnostics (RScag) by combining adaptive binning with a hierarchy-based form of scagnostics. An analysis shows that RScag improves on the robustness of original scagnostics, aligns better with human judgments, and is as fast as the traditional scagnostic measures.
J. Görtler, M. Spicker, C. Schulz, D. Weiskopf, and O. Deussen, “Stippling of 2D Scalar Fields,”
IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, 2019, doi:
10.1109/TVCG.2019.2903945.
Abstract
We propose a technique to represent two-dimensional data using stipples. While stippling is often regarded as an illustrative method, we argue that it is worth investigating its suitability for the visualization domain. For this purpose, we generalize the Linde-Buzo-Gray stippling algorithm for information visualization purposes to encode continuous and discrete 2D data. Our proposed modifications provide more control over the resulting distribution of stipples for encoding additional information into the representation, such as contours. We show different approaches to depict contours in stipple drawings based on locally adjusting the stipple distribution. Combining stipple-based gradients and contours allows for simultaneous assessment of the overall structure of the data while preserving important local details. We discuss the applicability of our technique using datasets from different domains and conduct observation-validating studies to assess the perception of stippled representations.
D. Laupheimer, P. Tutzauer, N. Haala, and M. Spicker, “Neural Networks for the Classification of
Building Use from Street-view Imagery,”
ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV–2, pp. 177–184, 2018, doi:
10.5194/isprs-annals-IV-2-177-2018.
Abstract
Within this paper we propose an end-to-end approach for classifying terrestrial images of building facades into five different utility classes (commercial, hybrid, residential, specialUse, underConstruction) by using Convolutional Neural Networks (CNNs). For our examples we use images provided by Google Street View. These images are automatically linked to a coarse city model, including the outlines of the buildings as well as their respective use classes. By these means, an extensive dataset is available for training and evaluation of our Deep Learning pipeline. The paper describes the implemented end-to-end approach for classifying street-level images of building facades and discusses our experiments with various CNNs. In addition to the classification results, so-called Class Activation Maps (CAMs) are evaluated. These maps give further insights into decisive facade parts that are learned as features during the training process. Furthermore, they can be used for the generation of abstract presentations which facilitate the comprehension of semantic image content. The abstract representations are a result of the stippling method, an importance-based image rendering technique.
M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in
Proceedings of the Symposium on Non-Photorealistic Animation and Rendering (NPAR), Association for Computing Machinery, 2017, pp. 8:1–8:10. doi:
10.1145/3092919.3092923.
Abstract
We investigate how the perceived abstraction quality of stipple illustrations is related to the number of points used to create them. Since it is difficult to find objective functions that quantify the visual quality of such illustrations, we gather comparative data by a crowdsourcing user study and employ a paired comparison model to deduce absolute quality values. Based on this study we show that it is possible to predict the perceived quality of stippled representations based on the properties of an input image. Our results are related to the Weber–Fechner law from psychophysics and indicate a logarithmic relation between the number of points and perceived abstraction quality. We give guidance on the number of stipple points that is typically sufficient to represent an input image well.
O. Deussen, M. Spicker, and Q. Zheng, “Weighted Linde-Buzo-Gray Stippling,”
ACM Transactions on Graphics, vol. 36, no. 6, Nov. 2017, doi:
10.1145/3130800.3130819.
J. Kratt, F. Eisenkeil, M. Spicker, Y. Wang, D. Weiskopf, and O. Deussen, “Structure-aware Stylization of Mountainous Terrains,” in
Vision, Modeling & Visualization, M. Hullin, R. Klein, T. Schultz, and A. Yao, Eds., The Eurographics Association, 2017. doi:
10.2312/vmv.20171255.
P. Tutzauer, S. Becker, T. Niese, O. Deussen, and D. Fritsch, “Understanding Human Perception of Building Categories in Virtual 3D Cities – A User Study,”
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS), vol. XLI-B2, pp. 683–687, 2016, doi:
10.5194/isprs-archives-XLI-B2-683-2016.
Abstract
Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human’s cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.
M. Spicker, J. Kratt, D. Arellano, and O. Deussen, “Depth-aware Coherent Line Drawings,” in
Proceedings of the SIGGRAPH Asia Symposium on Computer Graphics and Interactive Techniques, Technical Briefs, ACM, 2015, pp. 1:1–1:5. doi:
10.1145/2820903.2820909.