Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in applications. It aims to determine how likely certain outcomes are when some aspects of the system are not exactly known. UQ is key to achieving validated predictive computations in a wide range of scientific and engineering applications. The field draws on a broad foundation of mathematics and statistics, with associated algorithmic and computational development.
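The core question above can be illustrated with the simplest forward-propagation approach, plain Monte Carlo sampling. The model function below is hypothetical and chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model mapping an uncertain input z to an output of interest.
def model(z):
    return np.exp(0.1 * z) + z ** 2

# If the input is only known to be standard normal, sample it and
# propagate the samples through the model to get the output distribution.
z = rng.standard_normal(100_000)
u = model(z)

# Summary statistics of the uncertain output.
mean, var = u.mean(), u.var()
```

The more sophisticated methods in the papers below (polynomial chaos, stochastic collocation, sparse grids) target these same quantities at far lower computational cost.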

muView: A Visual Analysis System for Exploring Uncertainty in Myocardial Ischemia Simulations. P. Rosen, B. Burton, K. Potter, C.R. Johnson. In VMSL Book, pp. (to appear). 2014.

Uncertainty Visualization in HARDI based on Ensembles of ODFs. F. Jiao, J.M. Phillips, Y. Gur, C.R. Johnson. In Proceedings of 2013 IEEE Pacific Visualization Symposium, pp. 193--200. 2013. PubMed ID: 24466504 In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes which is consistent across all noise levels, the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes.

Contour Boxplots: A Method for Characterizing Uncertainty in Feature Sets from Simulation Ensembles. R.T. Whitaker, M. Mirzargar, R.M. Kirby. In IEEE Transactions on Visualization and Computer Graphics, Vol. 19, No. 12, pp. 2713--2722. December, 2013. DOI: 10.1109/TVCG.2013.143 PubMed ID: 24051838 Ensembles of numerical simulations are used in a variety of applications, such as meteorology or computational solid mechanics, in order to quantify the uncertainty or possible error in a model or simulation. Deriving robust statistics and visualizing the variability of an ensemble is a challenging task and is usually accomplished through direct visualization of ensemble members or by providing aggregate representations such as an average or pointwise probabilities. In many cases, the interesting quantities in a simulation are not dense fields, but are sets of features that are often represented as thresholds on physical or derived quantities. In this paper, we introduce a generalization of boxplots, called contour boxplots, for visualization and exploration of ensembles of contours or level sets of functions. Conventional boxplots have been widely used as an exploratory or communicative tool for data analysis, and they typically show the median, mean, confidence intervals, and outliers of a population. The proposed contour boxplots are a generalization of functional boxplots, which build on the notion of data depth. Data depth approximates the extent to which a particular sample is centrally located within its density function. This produces a center-outward ordering that gives rise to the statistical quantities that are essential to boxplots. Here we present a generalization of functional data depth to contours and demonstrate methods for displaying the resulting boxplots for two-dimensional simulation data in weather forecasting and computational fluid dynamics.
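A minimal 1D sketch of the data-depth machinery behind such boxplots is modified band depth for an ensemble of sampled curves; the paper generalizes this idea from functions to contours:

```python
import numpy as np
from itertools import combinations

def modified_band_depth(curves):
    """Modified band depth (J = 2) for an ensemble of sampled curves.

    curves: (n, m) array, n ensemble members sampled at m domain points.
    Returns one depth value per member: the fraction of (pair, point)
    combinations for which the member lies inside the band spanned by
    the pair.  Higher depth means a more central member.
    """
    n, m = curves.shape
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for i in range(n):
        inside = 0.0
        for j, k in pairs:
            lo = np.minimum(curves[j], curves[k])
            hi = np.maximum(curves[j], curves[k])
            inside += np.mean((curves[i] >= lo) & (curves[i] <= hi))
        depth[i] = inside / len(pairs)
    return depth
```

Sorting members by depth gives the center-outward ordering mentioned in the abstract: the deepest member plays the role of the median, and the 50% deepest members define the box.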

DTI Quality Control Assessment via Error Estimation From Monte Carlo Simulations. M. Farzinfar, Y. Li, A.R. Verde, I. Oguz, G. Gerig, M.A. Styner. In Proceedings of SPIE 8669, Medical Imaging 2013: Image Processing, 86692C, pp. (to appear). 2013. DOI: 10.1117/12.2006925 Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after the exclusion of DWIs.

Generalised Polynomial Chaos for a Class of Linear Conservation Laws. R. Pulch, D. Xiu. In Journal of Scientific Computing, Vol. 51, No. 2, pp. 293--312. 2012. DOI: 10.1007/s10915-011-9511-5 Mathematical modelling of dynamical systems often yields partial differential equations (PDEs) in time and space, which represent a conservation law possibly including a source term. Uncertainties in physical parameters can be described by random variables. To resolve the stochastic model, the Galerkin technique of the generalised polynomial chaos results in a larger coupled system of PDEs. We consider a certain class of linear systems of conservation laws, which exhibit a hyperbolic structure. Accordingly, we analyse the hyperbolicity of the corresponding coupled system of linear conservation laws from the polynomial chaos. Numerical results of two illustrative examples are presented.
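For a single Gaussian random input, the generalized polynomial chaos machinery reduces to an expansion in probabilists' Hermite polynomials. A sketch of computing the expansion coefficients by quadrature (the function passed in is a stand-in, not from the paper):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(f, order):
    """Coefficients of f(Z), Z ~ N(0,1), in probabilists' Hermite
    polynomials He_k: c_k = E[f(Z) He_k(Z)] / k!, computed with
    Gauss-Hermite quadrature."""
    nodes, weights = hermegauss(order + 10)
    weights = weights / np.sqrt(2.0 * np.pi)  # weights for the N(0,1) expectation
    fv = f(nodes)
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                        # select He_k
        hek = hermeval(nodes, basis)
        coeffs.append(np.sum(weights * fv * hek) / math.factorial(k))
    return np.array(coeffs)
```

The payoff is that moments come directly from the coefficients: the mean of f(Z) is c_0 and its variance is the sum of c_k^2 k! over k >= 1.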

Stochastic Collocation Methods on Unstructured Grids in High Dimensions via Interpolation. A. Narayan, D. Xiu. In SIAM Journal on Scientific Computing, Vol. 34, No. 3, pp. A1729--A1752. 2012. DOI: 10.1137/110854059 In this paper we propose a method for conducting stochastic collocation on arbitrary sets of nodes. To accomplish this, we present the framework of least orthogonal interpolation, which allows one to construct interpolation polynomials based on arbitrarily located grids in arbitrary dimensions. These interpolation polynomials are constructed as a subspace of the family of orthogonal polynomials corresponding to the probability distribution function on stochastic space. This feature enables one to conduct stochastic collocation simulations in practical problems where one cannot adopt some popular node selections such as sparse grids or cubature nodes. We present in detail both the mathematical properties of the least orthogonal interpolation and its practical implementation algorithm. Numerical benchmark problems are also presented to demonstrate the efficacy of the method.
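A 1D toy version of the idea: interpolation on arbitrarily placed nodes in an orthogonal (here Legendre) polynomial basis. The paper's least orthogonal interpolation generalizes this to arbitrary dimensions, where the choice of polynomial subspace is no longer automatic; nothing below is the paper's actual algorithm.

```python
import numpy as np
from numpy.polynomial import legendre

def interpolate_arbitrary_nodes(nodes, values):
    """Interpolate data at arbitrary distinct 1D nodes in a Legendre basis.

    Solves V c = values, where V[i, k] = P_k(nodes[i]), returning the
    Legendre coefficients c of the unique interpolating polynomial.
    """
    n = len(nodes)
    V = legendre.legvander(nodes, n - 1)  # (n, n) Vandermonde-like matrix
    return np.linalg.solve(V, values)

nodes = np.array([-0.9, -0.3, 0.2, 0.7])  # arbitrarily placed nodes
values = nodes ** 3                        # sample a test function
coeffs = interpolate_arbitrary_nodes(nodes, values)
```

With four nodes, the cubic test function is reproduced exactly; `legendre.legval` evaluates the interpolant anywhere in the domain.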

Stochastic Collocation for Optimal Control Problems with Stochastic PDE Constraints. H. Tiesler, R.M. Kirby, D. Xiu, T. Preusser. In SIAM Journal on Control and Optimization, Vol. 50, No. 5, pp. 2659--2682. 2012. DOI: 10.1137/110835438 We discuss the use of stochastic collocation for the solution of optimal control problems which are constrained by stochastic partial differential equations (SPDE). Thereby the constraining SPDE depends on data which are not deterministic but random. Assuming a deterministic control, randomness within the states of the input data will propagate to the states of the system. For the solution of SPDEs there has recently been an increasing effort in the development of efficient numerical schemes based upon the mathematical concept of generalized polynomial chaos. Modal-based stochastic Galerkin and nodal-based stochastic collocation versions of this methodology exist, both of which rely on a certain level of smoothness of the solution in the random space to yield accelerated convergence rates. In this paper we apply the stochastic collocation method to develop a gradient descent as well as a sequential quadratic program (SQP) for the minimization of objective functions constrained by an SPDE. The stochastic function involves several higher-order moments of the random states of the system as well as classical regularization of the control. In particular we discuss several objective functions of tracking type. Numerical examples are presented to demonstrate the performance of our new stochastic collocation minimization approach.

Sequential Data Assimilation with Multiple Models. A. Narayan, Y. Marzouk, D. Xiu. In Journal of Computational Physics, Vol. 231, No. 19, pp. 6401--6418. 2012. DOI: 10.1016/j.jcp.2012.06.002 Data assimilation is an essential tool for predicting the behavior of real physical systems given approximate simulation models and limited observations. For many complex systems, there may exist several models, each with different properties and predictive capabilities. It is desirable to incorporate multiple models into the assimilation procedure in order to obtain a more accurate prediction of the physics than any model alone can provide. In this paper, we propose a framework for conducting sequential data assimilation with multiple models and sources of data. The assimilated solution is a linear combination of all model predictions and data. One notable feature is that the combination takes the most general form with matrix weights. By doing so the method can readily utilize different weights in different sections of the solution state vectors, allow the models and data to have different dimensions, and deal with the case of a singular state covariance. We prove that the proposed assimilation method, termed direct assimilation, minimizes a variational functional, a generalized version of the one used in the classical Kalman filter. We also propose an efficient iterative assimilation method that assimilates two models at a time until all models and data are assimilated. The mathematical equivalence of the iterative method and the direct method is established. Numerical examples are presented to demonstrate the effectiveness of the new method.
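The matrix-weight combination at the heart of such a framework can be sketched for two independent, unbiased predictions. The weight formulas below are the standard minimum-variance choice, a simplification for illustration rather than the paper's general scheme:

```python
import numpy as np

def assimilate(x1, P1, x2, P2):
    """Combine two independent, unbiased estimates with matrix weights.

    Uses the minimum-variance weights W1 = P2 (P1 + P2)^-1 and
    W2 = P1 (P1 + P2)^-1, which satisfy W1 + W2 = I: the less certain
    a model is (larger covariance), the less weight it receives.
    """
    S = np.linalg.inv(P1 + P2)
    W1 = P2 @ S
    W2 = P1 @ S
    x = W1 @ x1 + W2 @ x2
    P = W1 @ P1 @ W1.T + W2 @ P2 @ W2.T  # covariance of the combination
    return x, P
```

Because the weights are matrices rather than scalars, different components of the state can lean on different models, which is the feature the abstract highlights.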

Computation of Failure Probability Subject to Epistemic Uncertainty. J. Li, D. Xiu. In SIAM Journal on Scientific Computing, Vol. 34, No. 6, pp. A2946--A2964. 2012. DOI: 10.1137/120864155 Computing failure probability is a fundamental task in many important practical problems. The computation, its numerical challenges aside, naturally requires knowledge of the probability distribution of the underlying random inputs. On the other hand, for many complex systems it is often not possible to have complete information about the probability distributions. In such cases the uncertainty is often referred to as epistemic uncertainty, and straightforward computation of the failure probability is not available. In this paper we develop a method to estimate both the upper bound and the lower bound of the failure probability subject to epistemic uncertainty. The bounds are rigorously derived using the variational formulas for relative entropy. We examine in detail the properties of the bounds and present numerical algorithms to efficiently compute them.
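When the input distribution is fully known, the failure probability itself is a plain Monte Carlo estimate; the paper's contribution is bounding this quantity when the distribution is not fully known. A sketch of the baseline computation with a hypothetical limit-state function:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical limit-state function: the system fails when g(z) < 0.
def g(z):
    return 3.0 - z  # capacity minus demand, for illustration only

z = rng.standard_normal(1_000_000)  # nominal (fully known) input distribution
p_fail = np.mean(g(z) < 0.0)        # estimate of P[g(Z) < 0] = P[Z > 3]
```

Under epistemic uncertainty the true input law is only known to lie near this nominal one, and the paper's relative-entropy bounds bracket how far `p_fail` can move as the law varies.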

Uncertainty in the Development and Use of Equation of State Models. V.G. Weirs, N. Fabian, K. Potter, L. McNamara, T. Otahal. In International Journal for Uncertainty Quantification, pp. 255--270. 2012. DOI: 10.1615/Int.J.UncertaintyQuantification.2012003960 In this paper we present the results from a series of focus groups on the visualization of uncertainty in Equation-Of-State (EOS) models. The initial goal was to identify the most effective ways to present EOS uncertainty to analysts, code developers, and material modelers. Four prototype visualizations were developed to present EOS surfaces in a three-dimensional, thermodynamic space. Focus group participants, primarily from Sandia National Laboratories, evaluated particular features of the various techniques for different use cases and discussed their individual workflow processes, experiences with other visualization tools, and the impact of uncertainty on their work. Related to our prototypes, we found the 3D presentations to be helpful for seeing a large amount of information at once and for a big-picture view; however, participants also desired relatively simple, two-dimensional graphics for better quantitative understanding, and because these plots are part of the existing visual language for material models. In addition to feedback on the prototypes, several themes and issues emerged that are as compelling as the original goal and will eventually serve as a starting point for further development of visualization and analysis tools. In particular, a distributed workflow centered around material models was identified. Material model stakeholders contribute and extract information at different points in this workflow depending on their role, but encounter various institutional and technical barriers which restrict the flow of information. An effective software tool for this community must be cognizant of this workflow and alleviate the bottlenecks and barriers within it. Uncertainty in EOS models is defined and interpreted differently at the various stages of the workflow. In this context, uncertainty propagation is difficult to reduce to the mathematical problem of estimating the uncertainty of an output from uncertain inputs.

Interactive visualization of probability and cumulative density functions. K. Potter, R.M. Kirby, D. Xiu, C.R. Johnson. In International Journal of Uncertainty Quantification, Vol. 2, No. 4, pp. 397--412. 2012. DOI: 10.1615/Int.J.UncertaintyQuantification.2012004074 PubMed ID: 23543120 The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as histograms, these functions let one infer the probability of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user, and furthermore allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues, along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.
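A sketch of the underlying data model and the contoured quantity: per-position histogram PDFs over a 1D slice of the domain and their L1 ("normed") difference from a user-chosen ansatz PDF. All numbers here are synthetic, not from the paper:

```python
import numpy as np

# At each of m positions we have n realizations of the uncertain field.
rng = np.random.default_rng(2)
m, n = 50, 2000
field = rng.normal(loc=np.linspace(0.0, 1.0, m), scale=0.5, size=(n, m))

bins = np.linspace(-2.0, 3.0, 41)
centers = 0.5 * (bins[:-1] + bins[1:])

# Ansatz PDF chosen by the user -- here a zero-mean normal guess,
# stored as probability mass per bin.
ansatz = np.exp(-0.5 * (centers / 0.5) ** 2)
ansatz /= ansatz.sum()

# Per-position histogram PDFs and their L1 distance to the ansatz.
diff = np.empty(m)
for j in range(m):
    counts, _ = np.histogram(field[:, j], bins=bins)
    diff[j] = np.abs(counts / n - ansatz).sum()
```

Plotting `diff` over the domain is the 1D analogue of the paper's contour display: positions whose local PDF matches the ansatz show small values, positions that deviate stand out.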

From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches. K. Potter, P. Rosen, C.R. Johnson. In Uncertainty Quantification in Scientific Computing, IFIP Advances in Information and Communication Technology Series, Vol. 377, Edited by Andrew Dienstfrey and Ronald Boisvert, Springer, pp. 226--249. 2012. DOI: 10.1007/978-3-642-32677-6_15 Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of domains. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community.

Variance-based Global Sensitivity Analysis via Sparse-grid Interpolation and Cubature. G.T. Buzzard, D. Xiu. In Communications in Computational Physics, Vol. 9, No. 3, pp. 542--567. 2011. DOI: 10.4208/cicp.230909.160310s The stochastic collocation method using sparse grids has become a popular choice for performing stochastic computations in high dimensional (random) parameter space. In addition to providing highly accurate stochastic solutions, the sparse grid collocation results naturally contain sensitivity information with respect to the input random parameters. In this paper, we use the sparse grid interpolation and cubature methods of Smolyak together with combinatorial analysis to give a computationally efficient method for computing the global sensitivity values of Sobol'. This method allows for approximation of all main effect and total effect values from evaluation of f on a single set of sparse grids. We discuss convergence of this method, apply it to several test cases and compare to existing methods. As a result which may be of independent interest, we recover an explicit formula for evaluating a Lagrange basis interpolating polynomial associated with the Chebyshev extrema. This allows one to manipulate the sparse grid collocation results in a highly efficient manner.
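For reference, the Sobol' main-effect indices themselves can be estimated with the classical sampling estimator below; this is the brute-force alternative to the paper's single-sparse-grid computation, assuming uniform inputs on [0, 1]^d:

```python
import numpy as np

def sobol_main_effects(f, d, n=100_000, seed=3):
    """Saltelli-style Monte Carlo estimate of Sobol' main-effect
    (first-order) sensitivity indices for f on [0, 1]^d.

    Requires n * (d + 2) evaluations of f, which is exactly the cost
    that sparse-grid approaches aim to avoid.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # replace column i of A with B's column i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S
```

For the additive test function f(x) = x1 + 2 x2 the exact indices are 0.2 and 0.8, which the estimator recovers to sampling accuracy.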

Distributional Sensitivity for Uncertainty Quantification. A. Narayan, D. Xiu. In Communications in Computational Physics, Vol. 10, No. 1, pp. 140--160. 2011. DOI: 10.4208/cicp.160210.300710a In this work we consider a general notion of distributional sensitivity, which measures the variation in solutions of a given physical/mathematical system with respect to the variation of probability distribution of the inputs. This is distinctively different from the classical sensitivity analysis, which studies the changes of solutions with respect to the values of the inputs. The general idea is measurement of sensitivity of outputs with respect to probability distributions, which is a well-studied concept in related disciplines. We adapt these ideas to present a quantitative framework in the context of uncertainty quantification for measuring such a kind of sensitivity and a set of efficient algorithms to approximate the distributional sensitivity numerically. A remarkable feature of the algorithms is that they do not incur additional computational effort in addition to a one-time stochastic solver. Therefore, an accurate stochastic computation with respect to a prior input distribution is needed only once, and the ensuing distributional sensitivity computation for different input distributions is a post-processing step. We prove that an accurate numerical model leads to accurate calculations of this sensitivity, which applies not just to slowly-converging Monte-Carlo estimates, but also to exponentially convergent spectral approximations. We provide computational examples to demonstrate the ease of applicability and verify the convergence claims.

Characterization of Discontinuities in High-dimensional Stochastic Problems on Adaptive Sparse Grids. J. Jakeman, R. Archibald, D. Xiu. In Journal of Computational Physics, Vol. 230, No. 10, pp. 3977--3997. 2011. DOI: 10.1016/j.jcp.2011.02.022 In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high dimensional space. The method is based on extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method offers significant improvements for high dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes "optimal", in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented, and various numerical examples are used to demonstrate the efficacy of the method.
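The low-dimensional idea being extended can be sketched in 1D: a difference stencil that annihilates low-degree polynomials responds weakly where the function is smooth and strongly across a jump. This is a toy illustration, not the paper's adaptive sparse-grid algorithm:

```python
import numpy as np

def annihilation_indicator(u):
    """1D toy version of polynomial annihilation on a uniform grid.

    The stencil u[i-1] - 2 u[i] + u[i+1] annihilates all linear
    polynomials, so its magnitude is O(h^2) in smooth regions but
    O(jump size) across a discontinuity.
    """
    return np.abs(u[:-2] - 2.0 * u[1:-1] + u[2:])

x = np.linspace(0.0, 1.0, 101)
u = np.where(x < 0.5, x, x + 1.0)  # piecewise linear, unit jump at x = 0.5
indicator = annihilation_indicator(u)
```

In higher dimensions the expensive part is deciding where to evaluate the function at all, which is what the adaptive sparse-grid refinement in the paper addresses.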

A Fast Method for High-frequency Acoustic Scattering from Random Scatterers. P. Tsuji, D. Xiu, L. Ying. In International Journal for Uncertainty Quantification, Vol. 1, No. 2, pp. 99--117. 2011. DOI: 10.1615/IntJUncertaintyQuantification.v1.i2.10 This paper is concerned with the uncertainty quantification of high-frequency acoustic scattering from objects with random shape in two-dimensional space. Several new methods are introduced to efficiently estimate the mean and variance of the random radar cross section in all directions. In the physical domain, the scattering problem is solved using the boundary integral formulation and Nystrom discretization; recently developed fast algorithms are adapted to accelerate the computation of the integral operator and the evaluation of the radar cross section. In the random domain, it is discovered that due to the highly oscillatory nature of the solution, the stochastic collocation method based on sparse grids does not perform well. For this particular problem, satisfactory results are obtained by using quasi-Monte Carlo methods. Numerical results are given for several test cases to illustrate the properties of the proposed approach.