This material is based upon work supported by the National Science Foundation: IIS-1617101
Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Uncertainties in observational data, compounded with errors introduced during modeling (e.g., truncation, quantization), are common sources of uncertainty that adversely affect the reliability of simulation results. Modern UQ techniques provide probability distributions that describe the variability of simulation results. Integrating this variability into data analysis and visualization algorithms allows for decision making in the presence of uncertainty. However, these algorithms themselves introduce non-linear transformations of the uncertainty in the data (e.g., error bars or distributions) that further complicate the quantification of uncertainty at the end of the visual analysis process. Moreover, common operations such as data filtering, contouring, and classification need to be redefined in a probabilistic setting to allow for the integration and propagation of data uncertainty. This project addresses fundamental challenges that arise in the analysis and quantification of uncertainty as data propagates through the various stages of the visualization pipeline.
We present a study of linear interpolation applied to uncertain data. Linear interpolation is a key step in isosurface extraction algorithms, and uncertainties in the data lead to non-linear variations in the geometry of the extracted isosurface. We present an approach for deriving the probability density function of a random variable that models the positional uncertainty in isosurface extraction. When the uncertainty is quantified by a uniform distribution, our approach provides a closed-form characterization of this random variable. This allows us to derive, in closed form, the expected value as well as the variance of the level-crossing position. The former quantity is used for constructing a stable isosurface for uncertain data, while the latter is used for visualizing the positional uncertainty of the expected isosurface level crossings on the underlying grid [18], [2].
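The effect described above can be illustrated numerically. The sketch below estimates, by Monte Carlo sampling, the mean and variance of the level-crossing position along a single grid edge whose endpoint values are independent uniform random variables. The distribution parameters, isovalue, and function name are illustrative assumptions; this is a numerical stand-in, not the paper's closed-form derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossing_stats(a0, b0, a1, b1, c, n=200_000):
    """Monte Carlo estimate of the mean and variance of the level-crossing
    position t = (c - v0) / (v1 - v0) along an edge whose endpoint values
    v0 ~ U(a0, b0) and v1 ~ U(a1, b1) are independent."""
    v0 = rng.uniform(a0, b0, n)
    v1 = rng.uniform(a1, b1, n)
    # keep only realizations where the isovalue is actually crossed
    crossed = (v0 - c) * (v1 - c) < 0
    t = (c - v0[crossed]) / (v1[crossed] - v0[crossed])
    return t.mean(), t.var()

# endpoint distributions straddling the isovalue c = 0.5 (assumed values)
mean_t, var_t = crossing_stats(0.0, 0.4, 0.6, 1.0, c=0.5)
```

Even in this symmetric setup, where the expected crossing sits at the edge midpoint, the variance is non-zero, which is precisely the positional uncertainty the paper visualizes on the grid.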
[Figure: classification of uncertain data based on the mean field, the vertex-based classification method, and the edge-based classification method.]
The research goal of the project “A Statistical Direct Volume Rendering Framework for Visualization of Uncertain Data” is to develop a statistical framework for the quantification of uncertainty and its propagation through the main stages of the visualization pipeline. This project is published in [4]. We introduce a probabilistic transfer function classification model that allows for incorporating probability density functions into the volume rendering integral. Our statistical framework accommodates distributions from various sources of uncertainty, which makes it suitable for a wide range of visualization applications. We demonstrate the effectiveness of our approach in the visualization of ensemble data, visualization of large datasets at reduced scale, isosurface extraction, and visualization of noisy data.
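As a rough illustration of folding per-sample value distributions into the volume rendering integral, the sketch below composites a single ray front-to-back, classifying Monte Carlo draws from each sample's distribution through a toy transfer function and averaging the resulting color and opacity before compositing. The Gaussian sample model, the transfer functions, and all parameters are hypothetical stand-ins, not the framework published in [4].

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D transfer function over values in [0, 1]: opacity ramps up
# past an assumed threshold of 0.5 (illustrative choice).
def tf_opacity(v):
    return np.clip((v - 0.5) * 4.0, 0.0, 1.0)

def tf_color(v):
    # toy RGB mapping; shape (n, 3)
    return np.stack([v, 1.0 - v, np.full_like(v, 0.3)], axis=-1)

def composite_ray(sample_means, sample_stds, n_draws=1000):
    """Front-to-back compositing where each ray sample carries a Gaussian
    value distribution; color/opacity are classified per draw and averaged
    (a Monte Carlo expectation) before the usual over operator."""
    color = np.zeros(3)
    alpha = 0.0
    for mu, sigma in zip(sample_means, sample_stds):
        draws = rng.normal(mu, sigma, n_draws)
        a = tf_opacity(draws).mean()  # E[opacity] under the sample's pdf
        c = (tf_color(draws) * tf_opacity(draws)[:, None]).mean(axis=0)
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
    return color, alpha

hi_color, hi_alpha = composite_ray([0.9, 0.9], [0.01, 0.01])
lo_color, lo_alpha = composite_ray([0.1, 0.1], [0.01, 0.01])
```

Replacing the per-draw averaging with a closed-form expectation over the sample's probability density is the step where a statistical classification model like the one above becomes part of the rendering integral itself.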
The research goal of our project “Volumetric Feature-Based Classification and Visibility Analysis for Transfer Function Design” is to ease the laborious transfer function (TF) design process in direct volume rendering. This project is published in [9]. The source code is publicly available on GitHub (https://github.com/scumabo/TransferFunctionDVR). The proposed research approach is composed of three parts:
1. Hierarchical cell-based feature similarity map: we propose an efficient feature similarity method for the analysis of isosurface (1D) and isovalue-gradient (2D) features present in volumetric datasets. We provide source code and shell scripts to reproduce all the experiments in the paper.
2. Feature classification: we provide an interactive interface to aid the identification of distinct volumetric structures in the data.
3. Feature visibility for TF design: we improve the conventional visibility measurement and propose feature visibility for TF specification. We have integrated our approach into the open-source visualization package Voreen [19] for automatic TF generation.
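The isovalue-gradient (2D) feature space mentioned above is commonly summarized as a joint histogram of scalar value and gradient magnitude, in which arches typically correspond to material boundaries. The sketch below builds such a histogram with NumPy for a toy volume; it illustrates this standard feature space for 2D TF design, not the paper's hierarchical cell-based similarity method.

```python
import numpy as np

def value_gradient_histogram(volume, bins=64):
    """Joint 2D histogram of (scalar value, gradient magnitude), the
    feature space commonly used for 2D transfer function design."""
    gz, gy, gx = np.gradient(volume.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, v_edges, g_edges = np.histogram2d(
        volume.ravel(), gmag.ravel(), bins=bins)
    return hist, v_edges, g_edges

# toy volume: a soft spherical blob on a 32^3 grid
z, y, x = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
vol = np.exp(-(x**2 + y**2 + z**2) * 4.0)
hist, v_edges, g_edges = value_gradient_histogram(vol)
```

Every voxel contributes exactly one count to the histogram, so peaks and arches in this 2D space directly reflect how often each value/gradient combination occurs in the volume.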
The research goal of our project “Quality Assessment of Volume Compression Approaches Using Isovalue Clustering” is to provide a new method for assessing the quality of compressed volumetric data. This project is published in [3]. The proposed approach uses representative isosurfaces as benchmark structures to evaluate the visual quality of compressed 3D scalar fields. We examine a number of widely used compression approaches, namely the discrete wavelet transform, the discrete cosine transform, and tensor approximation, to establish the utility of our volume quality assessment approach. The source code is available on GitHub (https://github.com/scumabo/VQA) and includes scripts for compressing volumetric data and evaluating the quality of volume compressions. The project also provides a means for selecting representative isovalues from a volumetric dataset; these representative isovalues can be used to design transfer functions (e.g., as in [9] above).
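To make the evaluation idea concrete, the sketch below "compresses" a toy volume by truncating its Fourier spectrum (a NumPy-only stand-in for the DCT/wavelet/tensor codecs the project actually examines) and then measures reconstruction error only on cells near a representative isovalue, in the spirit of using isosurfaces as benchmark structures. The cutoff frequency, isovalue, and tolerance are illustrative assumptions.

```python
import numpy as np

def fourier_compress(volume, keep=8):
    """Lossy low-pass 'compression': zero out all Fourier coefficients
    whose per-axis integer frequency magnitude is >= `keep`. A crude
    stand-in for DCT/wavelet coefficient truncation."""
    F = np.fft.fftn(volume)
    n = volume.shape[0]
    f = np.abs(np.fft.fftfreq(n) * n)  # integer frequency per axis
    mask = ((f[:, None, None] < keep)
            & (f[None, :, None] < keep)
            & (f[None, None, :] < keep))
    return np.fft.ifftn(F * mask).real

z, y, x = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
vol = np.exp(-(x**2 + y**2 + z**2) * 4.0)  # smooth toy scalar field
recon = fourier_compress(vol, keep=8)

# error restricted to cells near a representative isovalue (assumed 0.5),
# rather than averaged over the whole volume
iso = 0.5
near_iso = np.abs(vol - iso) < 0.05
iso_rmse = np.sqrt(np.mean((vol[near_iso] - recon[near_iso]) ** 2))
```

Evaluating error only where a benchmark isosurface lives, instead of with a volume-wide metric like PSNR, is what lets an isovalue-based assessment detect distortions that matter visually.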
Numerical Weather Prediction (NWP) ensembles are commonly used to assess the uncertainty and confidence in weather forecasts. Spaghetti plots are a conventional tool for meteorologists to directly examine the uncertainty exhibited by ensembles, in which isocontours of all ensemble members are visualized simultaneously. To avoid visual clutter in practice, one needs to select a small number of informative isovalues for visual analysis. Moreover, due to the complex topology and variation of ensemble isocontours, interpreting the spaghetti plot of even a single isovalue is often challenging for large ensembles. In this paper, we propose an interactive framework for uncertainty visualization of weather forecast ensembles that significantly improves and expands the utility of spaghetti plots in ensemble analysis. Complementary to state-of-the-art methods, our approach provides a complete framework for visual exploration of ensemble isocontours, including isovalue selection, interactive isocontour variability exploration, and interactive sub-region selection and re-analysis.
Our framework is built upon the high-density clustering paradigm, in which the mode structure of the density function is represented as a hierarchy of nested subsets of the data. We generalize high-density clustering to isocontours and propose a bandwidth selection method for estimating the density function of ensemble isocontours. We present novel visualizations based on the high-density clustering results, called the mode plot and the simplified spaghetti plot. The mode plot visually encodes the structure provided by the high-density clustering result and summarizes the distribution of ensemble isocontours. It also enables the selection of subsets of interesting isocontours, which are interactively highlighted in a linked spaghetti plot to provide spatial context. To provide an interpretable overview of the positional variability of isocontours, our system allows for the selection of informative isovalues from the simplified spaghetti plot. Because ensemble isocontours vary spatially, the system also supports interactive selection of, and focus on, sub-regions for local uncertainty and clustering re-analysis. We examine a number of ensemble datasets to establish the utility of our approach and discuss its advantages over state-of-the-art visual analysis tools for ensemble data.
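A minimal sketch of the high-density clustering step, under strong simplifying assumptions: each ensemble isocontour is reduced to a pairwise-distance matrix, the density at each member is estimated with a Gaussian kernel, and members whose density exceeds a level are linked into clusters when they lie within the bandwidth of one another. The kernel, the union-find linking rule, and all parameters are illustrative; the paper's mode hierarchy and bandwidth selection method are more involved.

```python
import numpy as np

def high_density_clusters(dist, bandwidth, level):
    """Toy high-density clustering on a pairwise-distance matrix:
    (1) Gaussian-kernel density estimate at each member,
    (2) keep members whose density exceeds `level`,
    (3) connect kept members within `bandwidth` of each other
        (union-find over the upper-level set)."""
    n = dist.shape[0]
    dens = np.exp(-(dist / bandwidth) ** 2).sum(axis=1) / n
    keep = np.where(dens > level)[0]
    parent = {i: i for i in keep}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in keep:
        for b in keep:
            if a < b and dist[a, b] < bandwidth:
                parent[find(b)] = find(a)
    groups = {}
    for i in keep:
        groups.setdefault(find(i), []).append(i)
    return list(groups.values()), dens

# toy "contour distances": two tight groups plus one outlier member
p = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 50.0])
dist = np.abs(p[:, None] - p[None, :])
clusters, dens = high_density_clusters(dist, bandwidth=1.0, level=0.2)
sizes = sorted(len(c) for c in clusters)
```

On this toy input the two tight groups emerge as separate modes while the isolated member falls below the density level, which is exactly the behavior the mode plot summarizes for real ensemble isocontours.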
Here is a brief overview of our system.
Last updated: June 27, 2019