# Schedule

**Important Dates**

- Papers Due: September 2, 2011
- Notification of Papers: September 16, 2011
- Final Papers Due: September 30, 2011
- Revised Paper for IJUQ Due: December 31, 2011

### Monday, October 24

8:00 - 8:30 | Breakfast / poster setup

8:30 - 10:00 | Session 1 (Chair: Robert S. Laramee) | Working with Uncertainty

8:30 - 9:00 | Dongbin Xiu | Uncertainty Analysis for Complex Systems: Algorithms and Challenges (invited)
The field of uncertainty quantification has received an increasing amount of attention recently. Extensive research efforts have been devoted to it and many novel numerical techniques have been developed. These techniques aim to conduct stochastic simulations for large-scale complex systems. Most notably, the algorithms based on the idea of generalized polynomial chaos (gPC), particularly the non-intrusive stochastic collocation methods, have found their use in many practical simulations. These methods utilize rigorous mathematical theory and deliver superior performance in practice. However, challenges exist, especially for complex systems with multiple scales or multiple physics. In this talk we will review the features of these algorithms and their strengths and weaknesses. We will also review some of the most prominent challenges, such as the curse of dimensionality, and the solid, albeit limited, progress that has been made. [Presentation Slides]
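The non-intrusive collocation idea the abstract refers to can be illustrated with a minimal NumPy sketch (ours, not the speaker's code): evaluate the model at Gauss quadrature nodes, project onto probabilists' Hermite polynomials, and read the mean and variance off the gPC coefficients. All function names here are illustrative.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He


def gpc_moments(model, order=4, nquad=8):
    """Non-intrusive stochastic collocation for a scalar model u(Z), Z ~ N(0,1):
    project u onto probabilists' Hermite polynomials via Gauss quadrature and
    return the mean and variance of the gPC surrogate."""
    x, w = He.hermegauss(nquad)        # nodes/weights for the weight e^{-x^2/2}
    w = w / np.sqrt(2.0 * np.pi)       # renormalize to the N(0,1) density
    u = model(x)                       # one model evaluation per node
    c = [np.sum(w * u * He.hermeval(x, [0.0] * k + [1.0])) / factorial(k)
         for k in range(order + 1)]
    mean = c[0]
    var = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))
    return mean, var


# toy model u(Z) = Z^2 + Z: exact mean 1, exact variance 3
m, v = gpc_moments(lambda z: z ** 2 + z)
```

The key practical point is the one in the abstract: the model is treated as a black box, sampled only at the collocation nodes.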

9:00 - 9:30 | Dan Maljovec, Bei Wang, Ana Kupresanin, Gardar Johannesson, Valerio Pascucci and Peer-Timo Bremer | Adaptive Sampling with Topological Scores
Understanding and describing expensive black box functions such as physical simulations is a common problem in many application areas. One example is the recent interest in uncertainty quantification with the goal of discovering the relationship between a potentially large number of input parameters and the output of a simulation. Typically, the simulation of interest is expensive to evaluate and thus the sampling of the parameter space is necessarily small. As a result, choosing a good set of samples at which to evaluate is crucial to glean as much information as possible from the fewest samples. While space-filling sampling designs such as Latin Hypercubes provide a good initial cover of the entire domain, more detailed studies typically rely on adaptive sampling. The core of most adaptive sampling methods is the scoring function which, given an initial set of training points, ranks a large set of candidate points to determine the most valuable one for evaluation. Traditional scoring functions are based on well-known geometric metrics such as distance in function space. Instead, we propose topology based techniques that aim to recover the global structure of the simulation response. In particular, we propose three new topological scoring functions and demonstrate that, especially for complicated functions and higher dimensions, these outperform traditional techniques. [Presentation Slides]
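The generic adaptive-sampling loop the abstract builds on can be sketched as follows, using the traditional distance-based score it contrasts against (the topological scores are the paper's contribution and are not reproduced here; all names are illustrative):

```python
import numpy as np


def distance_score(candidates, training):
    """Traditional geometric score: each candidate's distance to its nearest
    training point, favoring unexplored regions of the parameter space."""
    d = np.linalg.norm(candidates[:, None, :] - training[None, :, :], axis=2)
    return d.min(axis=1)


def adaptive_sample(score_fn, training, candidates, rounds):
    """Generic adaptive-sampling loop: repeatedly rank all candidate points
    with the scoring function and promote the top-ranked one to the training
    set (in practice, this is where the expensive simulation is evaluated)."""
    x = training.copy()
    for _ in range(rounds):
        best = candidates[np.argmax(score_fn(candidates, x))]
        x = np.vstack([x, best])
    return x


training = np.array([[0.0, 0.0]])
candidates = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
picked = adaptive_sample(distance_score, training, candidates, rounds=1)
# the farthest corner from the lone training point is selected first
```

Swapping `score_fn` is the only change needed to move from geometric to topological scoring, which is the comparison the talk makes.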

9:30 - 10:00 | Aaron Knoll, Kah Chun Lau, Bin Liu, Aslihan Sumer, Maria K.Y. Chan, Lei Cheng, Julius Jellinek, Jeffrey Greeley, Larry Curtiss, Mark Hereld, and Michael Papka | Uncertainty Classification of Molecular Interfaces
Molecular surfaces at atomic and subatomic scales are inherently ill-defined. In many computational chemistry problems, interfaces are better represented as volumetric regions than as discrete surfaces. The geometry of this interface is largely defined by electron density and electrostatic potential fields. While experimental measurements such as chemical bond and Van der Waals radii do not directly specify the interface, they are physically relevant in modeling molecular structure. Rather than use these radial values to directly determine surface geometry, we use them to define an uncertainty interval in an electron density distribution, which then guides classification of volume data. This results in a strategy for representing, analyzing and rendering molecular structures and interfaces. [Presentation Slides]
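The interval-guided classification idea reduces, in its simplest form, to a three-way labeling of density samples. This is a minimal sketch under our own assumptions (a scalar density grid and two illustrative thresholds; names and labels are ours, not the authors'):

```python
import numpy as np


def classify_volume(density, rho_lo, rho_hi):
    """Classify electron-density samples against an uncertainty interval
    [rho_lo, rho_hi]: 0 = exterior, 1 = uncertain interface region,
    2 = interior. The interval stands in for the range of densities
    consistent with the experimentally derived radii."""
    return np.where(density >= rho_hi, 2,
                    np.where(density > rho_lo, 1, 0))


labels = classify_volume(np.array([0.0, 0.5, 1.0]), rho_lo=0.2, rho_hi=0.8)
# -> labels [0, 1, 2]: exterior, interface, interior
```

Rendering the middle class as a volumetric region, rather than extracting a single isosurface, is the shift in representation the abstract describes.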

10:00 - 10:30 | Coffee break / posters

10:30 - 12:00 | Session 2 (Chair: Kristin Potter) | Uncertainty Quantification and Propagation

10:30 - 11:00 | Pierre Lermusiaux | Non-Gaussian Data Assimilation with Stochastic PDEs: Visualizing Probability Densities of Ocean Fields
Uncertainty prediction and data assimilation schemes for ocean and fluid flows are derived and illustrated within the context of Dynamically Orthogonal (DO) field equations and their adaptive error subspaces. These stochastic partial differential equations (SPDEs) provide prior probabilities for a semiparametric data assimilation framework using Gaussian mixture models, the Expectation-Maximization algorithm and the Bayesian Information Criterion. Bayes' Law is then efficiently carried out analytically within the evolving stochastic subspace. The use of this non-Gaussian data assimilation scheme is illustrated for adaptive sampling, i.e. for predicting the optimal sampling plans. Examples are provided using time-dependent ocean and fluid flows. The visualization of uncertainties in ocean fields is critical for many scientific and societal applications. Now that the above theory and schemes are available for uncertainty predictions, non-Gaussian data assimilation and adaptive sampling, new opportunities and challenges abound in visualization. Some of these new fundamental challenges will be discussed and preliminary visualization results presented. Co-authors from our MSEAS group at MIT: Thomas Sondergaard, Matt Ueckermann, Tapovan Lolla and Themis Sapsis. [Presentation Slides]

11:00 - 11:30 | Sidharth Thakur, Laura Tateosian, Helena Mitasova, and Eric Hardin | Summary Visualizations for Coastal Spatial-Temporal Dynamics
Digital scans of dynamic terrains such as coastal regions are now being gathered at high resolution and span an unprecedented number of years. Although standard tools based on Geographic Information Systems (GIS) are indispensable for analyzing geospatial data, they have limited support for displaying time-dependent changes in data and information such as statistical distributions and uncertainty in data. The ability to display error and uncertainty can provide important clues to domain scientists about the geomorphology of at-risk regions, which can be vital for understanding natural processes and for averting or minimizing damage to life and property. We report our preliminary investigations to visualize the dynamics of geomorphological features such as landslides and coastal dunes. Our approach is to visualize summary statistics of important data attributes and risk or vulnerability indices as functions of both spatial and temporal dimensions in our data. We combine standard techniques such as surface-mapping and imagery with summary visualizations using both standard and novel visual representations. This approach enables us to not only retain important geographical context in our visualizations, but also helps to reduce clutter due to direct plotting of statistical data in displays of geospatial information. Finally, we address some issues pertaining to visualization of summary statistics for geographical regions at varying scales. [Presentation Slides]

11:30 - 12:00 | Kai Pöthkow, Christoph Petz, and Hans-Christian Hege | Approximate Level-Crossing Probabilities for Interactive Visualization of Uncertain Isocontours
All measurements and results of numerical simulations are uncertain to some degree. An important method for visualizing uncertain spatial data is extraction of uncertain counterparts to isolines and isosurfaces. We consider the case where the input data are modeled as discretized Gaussian fields with spatial correlations. For this situation we want to compute level-crossing probabilities associated to grid cells. To avoid the high computational cost of Monte Carlo integration and the direction-dependencies of raycasting methods, we formulate two approximate measures for these probabilities. They can be utilized during rendering by looking up univariate and bivariate distribution functions that have been computed in a pre-processing step. The maximum edge crossing probability considers pairwise correlations one at a time. The linked-pairs method considers joint and conditional probabilities between vertices along paths of a spanning tree over the n vertices of each cell. With each possible tree an n-dimensional approximate distribution is associated. Minimizing the Bhattacharyya distance to the original distribution guides the choice of the tree. We perform a quantitative and qualitative evaluation of the approximation errors on synthetic data and show the utility of the measures for results from climate simulations. [Presentation Slides]
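The Monte Carlo baseline the abstract seeks to avoid is easy to state for a single grid edge: draw the two endpoint values from a bivariate Gaussian and count how often they straddle the isovalue. A minimal sketch, with illustrative names of our own:

```python
import numpy as np


def edge_crossing_probability(mu, cov, theta, n=200_000, seed=0):
    """Monte Carlo estimate of the level-crossing probability for one grid
    edge: the two endpoint values are drawn from a bivariate Gaussian with
    mean mu and covariance cov, and the isocontour for isovalue theta
    crosses the edge whenever the endpoints lie on opposite sides of theta."""
    rng = np.random.default_rng(seed)
    y = rng.multivariate_normal(mu, cov, size=n)
    return float(np.mean((y[:, 0] - theta) * (y[:, 1] - theta) < 0.0))


# two uncorrelated standard normals straddle theta = 0 half the time
p = edge_crossing_probability(mu=[0.0, 0.0], cov=np.eye(2), theta=0.0)
```

The paper's approximate measures replace this per-edge sampling with precomputed univariate and bivariate distribution-function lookups, which is what makes interactive rendering feasible.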

12:00 - 1:30 | Lunch break / posters

1:30 - 3:30 | Session 3 (Chair: Hans-Christian Hege) | Uncertainty Visualization and Communication

1:30 - 2:00 | Paul Han | Visualizing Uncertainty in Health Care: Present Needs and Future Directions (invited)
Objectives:

- Identify key uncertainties in health care that need to be communicated to decision makers.
- Describe recent efforts to develop novel representations for visualizing uncertainty in clinical risk prediction.
- Outline potential directions for future uncertainty visualization efforts in health care.

[Presentation Slides]

2:00 - 2:30 | Carlos Correa and Peter Lindstrom | The Mutual Information Diagram for Uncertainty Visualization
We present a variant of the Taylor diagram, a type of 2D plot that succinctly shows the relationship between two or more distributions based on their variance and correlation. The Taylor diagram has been adopted by the climate and geophysics communities to produce insightful visualizations, e.g., for intercomparison studies. Our variant, which we call the Mutual Information Diagram, represents the relationship between distributions in terms of their entropy and mutual information, and naturally maps well-known statistical quantities to their information-theoretic counterparts. Our new diagram is able to describe non-linear relationships where linear correlation may fail; it allows for categorical and multi-variate data to be compared; and it incorporates the notion of uncertainty, key in the study of large ensembles of data. [Presentation Slides]
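The quantities a Mutual Information Diagram point is built from, H(X), H(Y), and I(X;Y), can be estimated from a joint histogram. A minimal sketch for two categorical sequences (estimator choice and names are ours; the diagram's geometry itself is not reproduced here):

```python
import numpy as np


def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def mi_coordinates(x, y):
    """Return (H(X), H(Y), I(X;Y)) estimated from the joint histogram of two
    categorical sequences; these are the information-theoretic counterparts
    of the variances and correlation used by the Taylor diagram."""
    kx, ky = len(np.unique(x)), len(np.unique(y))
    joint, _, _ = np.histogram2d(x, y, bins=(kx, ky))
    pxy = joint / joint.sum()
    hx, hy = entropy(pxy.sum(axis=1)), entropy(pxy.sum(axis=0))
    mi = hx + hy - entropy(pxy.ravel())
    return hx, hy, mi


x = np.array([0, 0, 1, 1])
hx, hy, mi = mi_coordinates(x, x)   # identical sequences: I(X;X) = H(X)
```

Because mutual information captures any statistical dependence, not just linear association, these coordinates support the non-linear and categorical comparisons the abstract highlights.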

2:30 - 3:00 | Keqin Wu and Song Zhang | A Topology Based Visualization for Exploring Data with Uncertainty
Uncertainty is a common and crucial issue in scientific data. The exploration and analysis of large 2D and 3D data with uncertainty information demands an effective visualization that combines interaction capability with relevant context about where to look. The contour tree (CT) has been exploited as an efficient data structure to guide exploratory visualization. This paper proposes a highly interactive visualization tool for exploring data with intuitive and quantitative uncertainty representation. First, we introduce a balanced planar hierarchical CT layout with few self-intersections. Then, the new CT display is integrated with tree-view interaction, allowing users to navigate quickly between levels of detail for large data. Further, attaching uncertainty information to a planar CT layout is key to avoiding the visual clutter and occlusion that arise when viewing uncertainty in volume data or complicated 2D data. For the first time, the uncertainty information is fully explored at three levels: data-level uncertainty, which represents the uncertain numerical value of the data; contour-level uncertainty, which quantifies the uncertain shape of contours of the data; and topology-level uncertainty, which reveals the underlying uncertainty of the data pattern. This information provides new insight into how uncertainty coexists with and affects the underlying data. The current results show that the new visualization facilitates a quick and accurate selection of prominent contours with high uncertainty. [Presentation Slides]

3:00 - 3:30 | Marcel Hlawatsch, Filip Sadlo, and Daniel Weiskopf | Predictability-Based Adaptive Mouse Interaction for Visual Flow Exploration
Flow fields are often investigated by adopting a Lagrangian view, for example, by particle tracing of integral curves such as streamlines and path lines or by computing delocalized quantities. For visual exploration, mouse interaction is predominantly used to define starting points for Lagrangian methods. This paper focuses on the uncertainty of mouse input and its impact on the visualization process. In typical cases, the interaction is achieved by mouse motion, exhibiting uncertainty in the range of a screen pixel. From the perspective of dynamical systems, an integral curve represents an initial value problem, the uncertainty a perturbation of its initial condition, and the uncertainty of the visualization procedure a predictability problem. Predictability analysis is concerned with the growth of perturbations under the action of flow. In our case, it is not unusual that the perturbations grow from single pixels to substantial deviations. We therefore present an interaction scheme based on the largest finite-time Lyapunov exponent field, providing accurate, smooth, and easy-to-use flow exploration. This scheme employs data-driven adaptation of mouse speed and direction as well as optional augmentation by an adaptive zoom lens with consistent magnification. We compare our approach to non-adaptive mouse interaction and demonstrate it for several example datasets. [Presentation Slides] [Video clip]
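The finite-time Lyapunov exponent field that drives the adaptation can be computed from a sampled flow map via the Cauchy-Green tensor. A minimal 2D sketch under our own assumptions (uniform grid, precomputed flow map; names are illustrative, not the authors' code):

```python
import numpy as np


def ftle_field(phi, h, T):
    """Largest finite-time Lyapunov exponent from a sampled 2D flow map.
    phi has shape (ny, nx, 2): end positions of particles seeded on a grid
    with spacing h and advected for time T. Finite differences give the
    flow-map gradient F; the FTLE is log(sqrt(lambda_max(F^T F))) / |T|."""
    dphi_dy, dphi_dx = np.gradient(phi, h, axis=(0, 1))
    F = np.stack([dphi_dx, dphi_dy], axis=-1)   # F[..., k, j] = d phi_k / d x_j
    C = np.einsum('...ki,...kj->...ij', F, F)   # right Cauchy-Green tensor
    lam_max = np.linalg.eigvalsh(C)[..., -1]    # largest stretching eigenvalue
    return np.log(np.sqrt(lam_max)) / abs(T)


# sanity check on a linear saddle flow: phi = (e^{aT} x, e^{-aT} y) has FTLE = a
a, T = 0.5, 2.0
xs = np.linspace(0.0, 1.0, 5)
X, Y = np.meshgrid(xs, xs)
phi = np.stack([np.exp(a * T) * X, np.exp(-a * T) * Y], axis=-1)
sigma = ftle_field(phi, h=xs[1] - xs[0], T=T)
```

High-FTLE regions are exactly where pixel-scale mouse perturbations blow up into large curve deviations, which is why the interaction scheme slows the cursor there.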

3:30 - 4:00 | Coffee break / posters

4:00 - 5:30 | Session 4 (Chair: Alex Pang) | Representation, Quantification, Propagation, Visualization and Communication of Uncertainty

Dongbin Xiu, Pierre Lermusiaux, Paul Han, and Chris Johnson | Panel