
Image Analysis

SCI's imaging work addresses fundamental questions in 2D and 3D image processing, including filtering, segmentation, surface reconstruction, and shape analysis. In low-level image processing, this effort has produced new nonparametric methods for modeling image statistics, which have resulted in better algorithms for denoising and reconstruction. Work with particle systems has led to new methods for visualizing and analyzing 3D surfaces. Our work in image processing also includes applications of advanced computing to 3D images, which has resulted in new parallel algorithms and real-time implementations on graphics processing units (GPUs). Application areas include medical image analysis, biological image processing, defense, environmental monitoring, and oil and gas exploration.


Faculty:

Ross Whitaker: Segmentation
Sarang Joshi: Shape Statistics, Segmentation, Brain Atlasing
Tolga Tasdizen: Image Processing, Machine Learning
Chris Johnson: Diffusion Tensor Analysis
Shireen Elhabian: Image Analysis, Computer Vision


Funded Research Projects:



Publications in Image Analysis:


Subject-Motion Correction in HARDI Acquisitions: Choices and Consequences
S. Elhabian, Y. Gur, J. Piven, M. Styner, I. Leppert, G.B. Pike, G. Gerig. In Proceedings of the 2014 Joint Annual Meeting ISMRM-ESMRMB, pp. (accepted). 2014.
DOI: 10.3389/fneur.2014.00240

Unlike anatomical MRI, where subject motion can most often be assessed by quick visual quality control, the detection, characterization, and evaluation of the impact of motion in diffusion imaging are challenging issues due to the sensitivity of diffusion-weighted imaging (DWI) to motion originating from vibration, cardiac pulsation, breathing, and head movement. Post-acquisition motion correction is widely performed, e.g., using the open-source DTIprep software [1,2] or TORTOISE [3], but particularly in high angular resolution diffusion imaging (HARDI), users often do not fully understand the consequences of different types of correction schemes on the final analysis, and whether those choices may introduce confounding factors when comparing populations. Although there is excellent theoretical work on the number of DWI directions and its effect on the quality and crossing-fiber resolution of orientation distribution functions (ODFs), standard users lack clear guidelines and recommendations in practical settings. This research investigates motion correction using transformation and interpolation of affected DWI directions versus the exclusion of subsets of DWIs, and its effects on diffusion measurements, on the reconstructed fiber orientation distribution functions, and on the estimated fiber orientations. The various effects are systematically studied via a newly developed synthetic phantom and on real HARDI data.



Normative Modeling of Early Brain Maturation from Longitudinal DTI Reveals Twin-Singleton Differences
N. Sadeghi, J.H. Gilmore, W. Lin, G. Gerig. In Proceedings of the 2014 Joint Annual Meeting ISMRM-ESMRMB, pp. (accepted). 2014.

Early brain development of white matter is characterized by rapid organization and structuring. Magnetic Resonance diffusion tensor imaging (MR-DTI) provides the possibility of capturing these changes non-invasively by following individuals longitudinally in order to better understand departures from normal brain development in subjects at risk for mental illness [1]. Longitudinal imaging of individuals suggests the use of 4D (3D, time) image analysis and longitudinal statistical modeling [3].



UNC-Utah NA-MIC framework for DTI fiber tract analysis
A.R. Verde, F. Budin, J.-B. Berger, A. Gupta, M. Farzinfar, A. Kaiser, M. Ahn, H. Johnson, J. Matsui, H.C. Hazlett, A. Sharma, C. Goodlett, Y. Shi, S. Gouttard, C. Vachet, J. Piven, H. Zhu, G. Gerig, M. Styner. In Frontiers in Neuroinformatics, Vol. 7, No. 51, January, 2014.
DOI: 10.3389/fninf.2013.00051

Diffusion tensor imaging has become an important modality in the field of neuroimaging to capture changes in micro-organization and to assess white matter integrity or development. While a number of tractography toolsets exist, they usually lack tools for preprocessing or for analyzing diffusion properties along the fiber tracts. Currently, the field is in critical need of a coherent end-to-end toolset for performing along-fiber-tract analysis that is accessible to non-technical neuroimaging researchers. The UNC-Utah NA-MIC DTI framework represents a coherent, open-source, end-to-end toolset for atlas-based fiber tract DTI analysis encompassing DICOM data conversion, quality control, atlas building, fiber tractography, fiber parameterization, and statistical analysis of diffusion properties. Most steps utilize graphical user interfaces (GUIs) to simplify interaction and provide an extensive DTI analysis framework for non-technical researchers and investigators. We illustrate the use of our framework on a small-sample, cross-sectional neuroimaging study of eight healthy 1-year-old children from the Infant Brain Imaging Study (IBIS) Network. In this limited test study, we illustrate the power of our method by quantifying the diffusion properties at 1 year of age on the genu and splenium fiber tracts.

Keywords: neonatal neuroimaging, white matter pathways, magnetic resonance imaging, diffusion tensor imaging, diffusion imaging quality control, DTI atlas building



DTIPrep: Quality Control of Diffusion-Weighted Images
I. Oguz, M. Farzinfar, J. Matsui, F. Budin, Z. Liu, G. Gerig, H.J. Johnson, M.A. Styner. In Frontiers in Neuroinformatics, Vol. 8, No. 4, 2014.
DOI: 10.3389/fninf.2014.00004

In the last decade, diffusion MRI (dMRI) studies of the human and animal brain have been used to investigate a multitude of pathologies and drug-related effects in neuroscience research. Study after study identifies white matter (WM) degeneration as a crucial biomarker for these diseases, and the tool of choice for studying WM is dMRI. However, dMRI has an inherently low signal-to-noise ratio, and its acquisition requires a relatively long scan time; in fact, the high loads required occasionally stress scanner hardware past the point of physical failure. As a result, many types of artifacts can compromise the quality of diffusion imagery. Using these complex scans containing artifacts without quality control (QC) can result in considerable error and bias in the subsequent analysis, negatively affecting the results of research studies that use them. However, dMRI QC remains an under-recognized issue in the dMRI community, as there are no user-friendly tools commonly available to comprehensively address it. As a result, current dMRI studies often do a poor job of QC.

Thorough QC of diffusion MRI will reduce measurement noise and improve reproducibility and sensitivity in neuroimaging studies; this will allow researchers to more fully exploit the power of the dMRI technique and will ultimately advance neuroscience. Therefore, in this manuscript, we present our open-source software, DTIPrep, as a unified, user-friendly platform for thorough quality control of dMRI data. The artifacts it addresses include those caused by eddy currents, head motion, bed vibration and pulsation, and venetian blinds, as well as slice-wise and gradient-wise intensity inconsistencies. This paper summarizes the basic set of DTIPrep features described earlier and focuses on newly added capabilities related to directional artifacts and bias analysis.

Keywords: diffusion MRI, Diffusion Tensor Imaging, Quality control, Software, open-source, preprocessing



Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline
J. Wang, C. Vachet, A. Rumple, S. Gouttard, C. Ouzie, E. Perrot, G. Du, X. Huang, G. Gerig, M.A. Styner. In Frontiers in Neuroinformatics, Vol. 8, No. 7, 2014.
DOI: 10.3389/fninf.2014.00007

Automated segmentation and labeling of individual brain anatomical regions in MRI are challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, the inherent natural variability as well as disease-related changes in MR appearance mean that a single atlas image is often inappropriate for representing the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we propose a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first pair and co-register all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit called AutoSeg, an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated skull-stripping based on brain tissue classification, and the multi-atlas segmentation. The multi-atlas-based AutoSeg was evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans, achieving a mean Dice coefficient of 81.73% for the subcortical structures.

Keywords: segmentation, Registration, MRI, Atlas, Brain, Insight Toolkit
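
As a rough illustration of the weighted majority voting step described in the abstract above, the following Python sketch fuses a few co-registered atlas label maps. The toy arrays and weights are made up for illustration; the actual AutoSeg pipeline derives its atlas set and weights from the graph-based selection step rather than from fixed values.

import numpy as np

def weighted_majority_vote(atlas_labels, weights, num_labels):
    """Fuse co-registered atlas label maps by weighted majority voting.

    atlas_labels : list of integer label arrays, one per selected atlas,
                   all resampled into the subject space (same shape).
    weights      : per-atlas similarity weights; higher means more trusted.
    num_labels   : number of anatomical labels, including background 0.
    """
    votes = np.zeros(atlas_labels[0].shape + (num_labels,), dtype=float)
    for lab, w in zip(atlas_labels, weights):
        for l in range(num_labels):
            # Each atlas casts a weighted vote for its label at every voxel.
            votes[..., l] += w * (lab == l)
    return np.argmax(votes, axis=-1)

# Toy example: three 2x2 "atlases", two labels (0 = background, 1 = structure).
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[1, 1], [0, 0]])
fused = weighted_majority_vote([a1, a2, a3], weights=[0.5, 0.3, 0.2], num_labels=2)
print(fused)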



Generalized HARDI Invariants by Method of Tensor Contraction
Y. Gur, C.R. Johnson. In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. 718--721. April, 2014.

We propose a 3D object recognition technique to construct rotation invariant feature vectors for high angular resolution diffusion imaging (HARDI). This method uses the spherical harmonics (SH) expansion and is based on generating rank-1 contravariant tensors using the SH coefficients, and contracting them with covariant tensors to obtain invariants. The proposed technique enables the systematic construction of invariants for SH expansions of any order using simple mathematical operations. In addition, it allows construction of a large set of invariants, even for low order expansions, thus providing rich feature vectors for image analysis tasks such as classification and segmentation. In this paper, we use this technique to construct feature vectors for eighth-order fiber orientation distributions (FODs) reconstructed using constrained spherical deconvolution (CSD). Using simulated and in vivo brain data, we show that these invariants are robust to noise, enable voxel-wise classification, and capture meaningful information on the underlying white matter structure.

Keywords: Diffusion MRI, HARDI, invariants
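
For readers unfamiliar with rotation invariants of spherical harmonic (SH) expansions, the Python sketch below computes only the classical per-order band energies, the simplest such invariants. The richer family obtained by tensor contraction in the paper is not reproduced here, and the coefficient ordering is an assumption.

import numpy as np

def sh_band_energies(coeffs, max_order):
    """Per-order power of a real, even-order SH expansion of a fiber ODF.

    coeffs is assumed ordered (l=0; l=2, m=-2..2; l=4, ...).  The sum of
    squared coefficients within each order l is invariant under rotation,
    giving one simple feature per band.
    """
    feats = []
    start = 0
    for l in range(0, max_order + 1, 2):
        n = 2 * l + 1                      # number of m-coefficients for order l
        band = coeffs[start:start + n]
        feats.append(np.sum(band ** 2))
        start += n
    return np.array(feats)

# Toy example: random coefficients of an 8th-order expansion (45 coefficients).
rng = np.random.default_rng(0)
c = rng.standard_normal(45)
print(sh_band_energies(c, max_order=8))    # 5 rotation-invariant features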



Parametric Regression Scheme for Distributions: Analysis of DTI Fiber Tract Diffusion Changes in Early Brain Development
A. Sharma, P.T. Fletcher, J.H. Gilmore, M.L. Escolar, A. Gupta, M. Styner, G. Gerig. In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.

Temporal modeling frameworks often operate on scalar variables by summarizing data at initial stages as statistical summaries of the underlying distributions. For instance, DTI analysis often employs summary statistics, such as the mean, for regions of interest and for properties along fiber tracts in population studies and hypothesis testing. This reduction, which discards variability information, may introduce significant errors that propagate through the procedure. We propose a novel framework which uses distribution-valued variables to retain and utilize the local variability information. Classic linear regression is adapted to employ these variables for model estimation. The increased stability and reliability of our proposed method, compared with regression using single-valued statistical summaries, is demonstrated in a validation experiment with synthetic data. Our driving application is the modeling of age-related changes along DTI white matter tracts. Results are shown for the spatiotemporal population trajectory of the genu tract, estimated from 45 healthy infants and compared with a patient with Krabbe disease.

Keywords: linear regression, distribution-valued data, spatiotemporal growth trajectory, DTI, early neurodevelopment
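
One simple way to see why retaining distributional information helps is to regress several quantiles of each subject's distribution against age instead of the mean alone, as in the Python sketch below. This is only an illustration of the idea, not the distribution-valued estimator proposed in the paper, and all data and names are synthetic.

import numpy as np

def quantile_regression_curves(ages, samples_per_subject, qs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Fit a straight line in age to each quantile of the per-subject
    diffusion-value distributions (e.g., FA samples along a tract).

    This retains more of the local variability than regressing the mean only;
    it is an illustration, not the paper's distribution-valued regression.
    """
    ages = np.asarray(ages, dtype=float)
    curves = {}
    for q in qs:
        y = np.array([np.quantile(s, q) for s in samples_per_subject])
        slope, intercept = np.polyfit(ages, y, deg=1)
        curves[q] = (slope, intercept)
    return curves

# Toy example: 45 subjects, FA-like values whose spread changes with age.
rng = np.random.default_rng(1)
ages = rng.uniform(0.1, 2.0, size=45)                       # years
data = [0.3 + 0.15 * a + rng.normal(0, 0.05 + 0.02 * a, 200) for a in ages]
print(quantile_regression_curves(ages, data))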



Geodesic Regression of Image and Shape Data for Improved Modeling of 4D Trajectories
J. Fishbaugh, M. Prastawa, G. Gerig, S. Durrleman. In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.

A variety of regression schemes have been proposed for images or for shapes, but available methods do not handle them jointly. In this paper, we present a framework for joint image and shape regression which incorporates images as well as anatomical shape information in a consistent manner. Evolution is described by a generative model that is the analog of linear regression, which is fully characterized by baseline images and shapes (intercept) and initial momenta vectors (slope). Further, our framework adopts a control-point parameterization of deformations, where the dimensionality of the deformation is determined by the complexity of anatomical changes over time rather than by the sampling of the image and/or the geometric data. We derive a gradient descent algorithm which simultaneously estimates baseline images and shapes, the locations of control points, and momenta. Experiments on real medical data demonstrate that our framework effectively combines image and shape information, resulting in improved modeling of 4D (3D space + time) trajectories.
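
For orientation, the generic geodesic-regression energy that such joint image-and-shape models build on can be written as follows (a standard formulation, not the exact functional of this paper), where p is the baseline configuration (intercept), v the initial momenta (slope), t_i the observation times, y_i the observed image and shape data, and d a distance on the space of anatomical configurations:

E(p, v) = \sum_{i=1}^{N} d\big(\mathrm{Exp}_{p}(t_i\, v),\; y_i\big)^{2}, \qquad (\hat{p}, \hat{v}) = \arg\min_{p,\, v} E(p, v)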



Improved Segmentation of White Matter Tracts with Adaptive Riemannian Metrics
X. Hao, K. Zygmunt, R.T. Whitaker, P.T. Fletcher. In Medical Image Analysis, Vol. 18, No. 1, pp. 161--175. Jan, 2014.
DOI: 10.1016/j.media.2013.10.007
PubMed ID: 24211814

We present a novel geodesic approach to segmentation of white matter tracts from diffusion tensor imaging (DTI). Compared to deterministic and stochastic tractography, geodesic approaches treat the geometry of the brain white matter as a manifold, often using the inverse tensor field as a Riemannian metric. The white matter pathways are then inferred from the resulting geodesics, which have the desirable property that they tend to follow the main eigenvectors of the tensors, yet still have the flexibility to deviate from these directions when it results in lower costs. While this makes such methods more robust to noise, the choice of Riemannian metric in these methods is ad hoc. A serious drawback of current geodesic methods is that geodesics tend to deviate from the major eigenvectors in high-curvature areas in order to achieve the shortest path. In this paper we propose a method for learning an adaptive Riemannian metric from the DTI data, where the resulting geodesics more closely follow the principal eigenvector of the diffusion tensors even in high-curvature regions. We also develop a way to automatically segment the white matter tracts based on the computed geodesics. We show the robustness of our method on simulated data with different noise levels. We also compare our method with tractography methods and geodesic approaches using other Riemannian metrics and demonstrate that the proposed method results in improved geodesics and segmentations using both synthetic and real DTI data.

Keywords: Conformal factor, Diffusion tensor imaging, Front-propagation, Geodesic, Riemannian manifold
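
To make the role of the inverse-tensor metric concrete, the small Python sketch below evaluates the cost of a step under the metric g = D^{-1}. The tensor values are arbitrary, and the learned conformal rescaling introduced in the paper is not included.

import numpy as np

def step_cost(D, v):
    """Length of a small step v under the inverse-tensor metric g = D^{-1}.

    Steps aligned with the principal eigenvector of the diffusion tensor D
    are cheap, steps across it are expensive, which is why geodesics of this
    metric tend to follow the main fiber direction.
    """
    return float(np.sqrt(v @ np.linalg.solve(D, v)))

# Toy tensor: strong diffusion along x, weak along y and z (arbitrary units).
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
along_fiber = np.array([1.0, 0.0, 0.0])
across_fiber = np.array([0.0, 1.0, 0.0])
print(step_cost(D, along_fiber), step_cost(D, across_fiber))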



A Joint Framework for 4D Segmentation and Estimation of Smooth Temporal Appearance Changes
Y. Gao, M. Prastawa, M. Styner, J. Piven, G. Gerig. In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.

Medical imaging studies increasingly use longitudinal images of individual subjects in order to follow up changes due to development, degeneration, disease progression, or the efficacy of therapeutic intervention. Repeated image data of individuals are highly correlated, and the strong causality of information over time has led to the development of procedures for joint segmentation of the series of scans, called 4D segmentation. A main aim is improved consistency of quantitative analysis, most often achieved via patient-specific atlases. Challenging open problems are contrast changes and the occurrence of subclasses within tissue, as observed in multimodal MRI of infant development, neurodegeneration, and disease. This paper proposes a new 4D segmentation framework that enforces continuous dynamic changes of tissue contrast patterns over time as observed in such data. Moreover, our model includes the capability to segment different contrast patterns within a specific tissue class, for example as seen in myelinated and unmyelinated white matter regions in early brain development. Proof of concept is shown with validation on synthetic image data and with 4D segmentation of longitudinal, multimodal pediatric MRI taken at 6, 12, and 24 months of age, but the methodology is generic with respect to different application domains using serial imaging.



4D Active Cut: An Interactive Tool for Pathological Anatomy Modeling
Bo Wang, W. Liu, M. Prastawa, A. Irimia, P.M. Vespa, J.D. van Horn, P.T. Fletcher, G. Gerig. In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.

4D pathological anatomy modeling is key to understanding complex pathological brain images. It is a challenging problem due to the difficulties in detecting multiple appearing and disappearing lesions across time points and in estimating dynamic changes and deformations between them. We propose a novel semi-supervised method, called 4D active cut, for lesion recognition and deformation estimation. Existing interactive segmentation methods passively wait for the user to refine the segmentations, which is a difficult task for 3D images that change over time. 4D active cut instead actively selects candidate regions for querying the user, and obtains the most informative user feedback. A user simply answers 'yes' or 'no' for a candidate object without having to refine the segmentation slice by slice. Compared to the single-object detection of existing methods, our method also detects multiple lesions with spatial coherence using Markov random field constraints. Results show improved lesion detection, which subsequently improves deformation estimation.

Keywords: Active learning, graph cuts, longitudinal MRI, Markov Random Fields, semi-supervised learning
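
The "most informative user feedback" idea can be illustrated with plain uncertainty sampling, as in the Python sketch below. This entropy criterion is a generic stand-in and is not claimed to be the exact query-selection rule used by 4D active cut.

import numpy as np

def most_informative_candidate(posteriors):
    """Pick the candidate lesion region the model is least sure about.

    posteriors: per-candidate probability that the region is a true lesion.
    Entropy-based uncertainty sampling, used here only to illustrate asking
    the user a single yes/no question per round.
    """
    p = np.clip(np.asarray(posteriors, dtype=float), 1e-12, 1 - 1e-12)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return int(np.argmax(entropy))

# Toy example: five candidate regions with model confidences.
print(most_informative_candidate([0.95, 0.10, 0.52, 0.80, 0.30]))   # index 2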



A Preliminary Study on the Effect of Motion Correction On HARDI Reconstruction
S. Elhabian, Y. Gur, C. Vachet, J. Piven, M. Styner, I. Leppert, G.B. Pike, G. Gerig. In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.

Post-acquisition motion correction is widely performed in diffusion-weighted imaging (DWI) to guarantee voxel-wise correspondence between DWIs. While this is primarily motivated by the desire to salvage as many scans as possible when they are corrupted by motion, users do not fully understand the consequences of different types of interpolation schemes on the final analysis. Interpolation might increase the partial volume effect while not preserving the volume of the diffusion profile, whereas excluding poor DWIs may affect the ability to resolve crossing fibers, especially those with small separation angles. In this paper, we investigate the effect of interpolating diffusion measurements, as well as of eliminating bad directions, on the reconstructed fiber orientation distribution functions and on the estimated fiber orientations. We demonstrate these effects on synthetic and real HARDI datasets. Our experiments show that the effect of interpolation is more significant at small fiber separation angles, where the exclusion of motion-corrupted directions decreases the ability to resolve such crossing fibers.

Keywords: Diffusion MRI, HARDI, motion correction, interpolation



Four‐dimensional tissue deformation reconstruction (4D TDR) validation using a real tissue phantom
M. Szegedi, J. Hinkle, P. Rassiah, V. Sarkar, B. Wang, S. Joshi, B. Salter. In Journal of Applied Clinical Medical Physics, Vol. 14, No. 1, pp. 115-132. 2013.
DOI: 10.1120/jacmp.v14i1.4012

Calculation of four-dimensional (4D) dose distributions requires the remapping of dose calculated on each available binned phase of the 4D CT onto a reference phase for summation. Deformable image registration (DIR) is usually used for this task, but unfortunately it almost always considers only endpoints rather than the whole motion path. A new algorithm, 4D tissue deformation reconstruction (4D TDR), which uses either CT projection data or all available 4D CT images to reconstruct 4D motion data, was developed. The purpose of this work is to verify the accuracy of the fit of this new algorithm using a realistic tissue phantom. A previously described fresh tissue phantom with implanted electromagnetic tracking (EMT) fiducials was used for this experiment. The phantom was animated using a sinusoidal and a real patient-breathing signal. Four-dimensional computed tomography (4D CT) and EMT tracking were performed. Deformation reconstruction was conducted using the 4D TDR and a modified 4D TDR that takes real tissue hysteresis into account (4D TDR_Hysteresis). Deformation estimation results were compared to the EMT and 4D CT coordinate measurements. To eliminate the possibility of the high-contrast markers driving the 4D TDR, a comparison was made using the original 4D CT data and data in which the fiducials were electronically masked. For the sinusoidal animation, the average deviation of the 4D TDR compared to the manually determined coordinates from the 4D CT data was 1.9 mm, albeit with deviations as large as 4.5 mm. The 4D TDR calculation traces matched 95% of the EMT trace within 2.8 mm. The motion hysteresis generated by real tissue is not properly projected other than at the endpoints of motion. Sinusoidal animation resulted in 95% of EMT-measured locations lying within 1.2 mm of the measured 4D CT motion path, enabling accurate characterization of the tissue hysteresis. The 4D TDR_Hysteresis calculation traces accounted well for the hysteresis and matched 95% of the EMT trace within 1.6 mm. An irregular (in amplitude and frequency) recorded patient trace applied to the same tissue resulted in 95% of the EMT trace points lying within 4.5 mm when compared to both the 4D CT and 4D TDR_Hysteresis motion paths. The average deviation of the 4D TDR_Hysteresis compared to the 4D CT datasets was 0.9 mm under regular sinusoidal animation and 1.0 mm under irregular patient-trace animation. The EMT trace data fit the 4D TDR_Hysteresis within 1.6 mm for sinusoidal and within 4.5 mm for patient-trace animation. While various algorithms have been validated for end-to-end accuracy, one can only be fully confident in the performance of a predictive algorithm if one looks at data along the full motion path. The 4D TDR, which calculates the whole motion path rather than only phase points or endpoints, allows us to fully characterize the accuracy of a predictive algorithm while minimizing assumptions. This algorithm goes one step further by allowing for the inclusion of tissue hysteresis effects, a real-world effect that is neglected when endpoint-only validation is performed. Our results show that the 4D TDR_Hysteresis correctly models the deformation at the endpoints and at intermediate points along the motion path.

PACS numbers: 87.55.km, 87.55.Qr, 87.57.nf, 87.85.Tu



Three-dimensional alignment and merging of confocal microscopy stacks
N. Ramesh, T. Tasdizen. In 2013 IEEE International Conference on Image Processing, IEEE, September, 2013.
DOI: 10.1109/icip.2013.6738297

We describe an efficient, robust, automated method for the alignment and merging of translated, rotated, and flipped confocal microscopy stacks. The samples are captured in both directions (top and bottom) to increase the SNR of the individual slices. We identify the overlapping region of the two stacks by using a variable-depth maximum intensity projection (MIP) in the z dimension. For each depth tested, the MIP images give an estimate of the angle of rotation between the stacks and of the shifts in the x and y directions using the 2D Fourier shift property. We then use the estimated rotation angle and the x and y shifts to align the images in the z direction. A linear blending technique based on a sigmoidal function is used to maximize the information from the stacks and combine them. We obtain maximum information gain by combining stacks obtained from both directions.
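
A minimal Python sketch of two of the ingredients mentioned above, translation estimation from MIP images via the 2D Fourier shift property (phase correlation) and sigmoidal blending along z, is given below. Rotation and flip handling, the variable-depth MIP, and all parameter values are omitted or assumed.

import numpy as np

def estimate_shift(mip_a, mip_b):
    """Estimate the (row, col) translation between two MIP images by phase
    correlation, which follows from the 2D Fourier shift property."""
    Fa = np.fft.fft2(mip_a)
    Fb = np.fft.fft2(mip_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of the spectrum to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

def sigmoid_blend(stack_top, stack_bottom, overlap_center, steepness=1.0):
    """Blend two aligned stacks along z with sigmoidal weights, so each slice
    is dominated by the stack that imaged it closer to the objective."""
    z = np.arange(stack_top.shape[0])[:, None, None]
    w = 1.0 / (1.0 + np.exp(-steepness * (z - overlap_center)))
    return (1 - w) * stack_top + w * stack_bottom

# Toy check: shift an image by (3, -5) with np.roll and recover the shift.
rng = np.random.default_rng(2)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(estimate_shift(moved, ref))   # expected (3, -5)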



Uncertainty Visualization in HARDI based on Ensembles of ODFs
F. Jiao, J.M. Phillips, Y. Gur, C.R. Johnson. In Proceedings of 2013 IEEE Pacific Visualization Symposium, pp. 193--200. 2013.
PubMed ID: 24466504
PubMed Central ID: PMC3898522

In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes that is consistent across all noise levels, which we call the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes.



Constrained Spectral Clustering for Image Segmentation
J. Sourati, D.H. Brooks, J.G. Dy, D. Erdogmus. In IEEE International Workshop on Machine Learning for Signal Processing, pp. 1--6. 2013.
DOI: 10.1109/MLSP

Constrained spectral clustering with affinity propagation in its original form is not practical for large-scale problems like image segmentation. In this paper we employ a novelty-selection sub-sampling strategy, together with efficient numerical eigen-decomposition methods, to make this algorithm work efficiently for images. In addition, entropy-based active learning is employed to select the queries posed to the user more wisely in an interactive image segmentation framework. We evaluate the algorithm on general and medical images to show that segmentation results improve with constrained clustering even when working with only a subset of pixels. Furthermore, this improvement is obtained more efficiently when the pixels to be labeled are selected actively.
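
For context, the Python sketch below runs plain (unconstrained) spectral clustering on a subsampled set of pixel features. The must-link/cannot-link constraints, the novelty-selection step, and the active-learning loop from the paper are intentionally left out, and the feature construction is an assumption.

import numpy as np
from sklearn.cluster import KMeans

def spectral_segment(features, k, sigma=0.1):
    """Spectral clustering of a pixel subsample.

    features : (n, d) array of per-pixel descriptors (e.g., intensity plus
               normalized x, y), restricted to a subsampled set of pixels.
    """
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                          # affinity matrix
    deg = W.sum(axis=1)
    L_sym = np.eye(len(W)) - W / np.sqrt(np.outer(deg, deg))    # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :k]                                             # k smallest eigenvectors
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# Toy example: two well-separated blobs of "pixel" features.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.02, (50, 2)), rng.normal(0.5, 0.02, (50, 2))])
print(np.bincount(spectral_segment(X, k=2)))                    # roughly [50, 50]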



Topology analysis of time-dependent multi-fluid data using the Reeb graph
F. Chen, H. Obermaier, H. Hagen, B. Hamann, J. Tierny, V. Pascucci. In Computer Aided Geometric Design, Vol. 30, No. 6, pp. 557--566. 2013.
DOI: 10.1016/j.cagd.2012.03.019

Liquid–liquid extraction is a typical multi-fluid problem in chemical engineering in which two types of immiscible fluids are mixed together. Mixing of two-phase fluids results in a time-varying fluid density distribution, quantitatively indicating the presence of liquid phases. For engineers who design extraction devices, it is crucial to understand the density distribution of each fluid, particularly in flow regions that have a high concentration of the dispersed phase. The propagation of regions of high density can be studied by examining the topology of isosurfaces of the density data. We present a topology-based approach to track the splitting and merging events of these regions using Reeb graphs. Time is used as the third dimension in addition to the two-dimensional (2D) point-based simulation data. Due to the low time resolution of the input data set, a physics-based interpolation scheme is required in order to improve the accuracy of the proposed topology tracking method. The model used for interpolation produces a smooth time-dependent density field by applying Lagrangian-based advection to the given simulated point cloud data, conforming to the physical laws of flow evolution. Using the Reeb graph, the spatial and temporal locations of bifurcation and merging events can be readily identified, supporting in-depth analysis of the extraction process.

Keywords: Multi-phase fluid, Level set, Topology method, Point-based multi-fluid simulation
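
A crude way to convey the split/merge tracking idea is to follow the overlap of thresholded high-density regions between consecutive frames, as in the Python sketch below. This bookkeeping is far simpler than the Reeb-graph construction used in the paper and is included only as an illustration.

import numpy as np
from scipy import ndimage

def track_events(density_frames, threshold):
    """Detect splitting and merging of high-density regions between frames.

    Each frame is thresholded and its connected components are labeled; a
    component at time t that overlaps two or more components at t+1 is a
    split, and the converse is a merge.
    """
    events = []
    prev_labels, prev_n = ndimage.label(density_frames[0] > threshold)
    for t in range(1, len(density_frames)):
        cur_labels, cur_n = ndimage.label(density_frames[t] > threshold)
        for comp in range(1, prev_n + 1):
            hits = np.unique(cur_labels[(prev_labels == comp) & (cur_labels > 0)])
            if len(hits) > 1:
                events.append((t, "split", comp, hits.tolist()))
        for comp in range(1, cur_n + 1):
            srcs = np.unique(prev_labels[(cur_labels == comp) & (prev_labels > 0)])
            if len(srcs) > 1:
                events.append((t, "merge", srcs.tolist(), comp))
        prev_labels, prev_n = cur_labels, cur_n
    return events

# Toy example: one blob at t=0 splits into two blobs at t=1.
f0 = np.zeros((1, 20)); f0[0, 5:15] = 1.0
f1 = np.zeros((1, 20)); f1[0, 5:8] = 1.0; f1[0, 12:15] = 1.0
print(track_events([f0, f1], threshold=0.5))   # one 'split' event at t=1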



Exploring Power Behaviors and Trade-offs of In-situ Data Analytics
M. Gamell, I. Rodero, M. Parashar, J.C. Bennett, H. Kolla, J.H. Chen, P.-T. Bremer, A. Landge, A. Gyulassy, P. McCormick, S. Pakin, V. Pascucci, S. Klasky. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Association for Computing Machinery, 2013.
ISBN: 978-1-4503-2378-9
DOI: 10.1145/2503210.2503303

As scientific applications target exascale, challenges related to data and energy are becoming dominant concerns. For example, coupled simulation workflows are increasingly adopting in-situ data processing and analysis techniques to address the costs and overheads of data movement and I/O. However, it is also critical to understand these overheads and the associated trade-offs from an energy perspective. The goal of this paper is to explore data-related energy/performance trade-offs for end-to-end simulation workflows running at scale on current high-end computing systems. Specifically, this paper presents: (1) an analysis of the data-related behaviors of a combustion simulation workflow with an in-situ data analytics pipeline, running on the Titan system at ORNL; (2) a power model based on system power and data exchange patterns, which is empirically validated; and (3) the use of the model to characterize the energy behavior of the workflow and to explore energy/performance trade-offs on current as well as emerging systems.

Keywords: SDAV
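
As a back-of-the-envelope illustration of why data movement matters in such energy models, the Python sketch below splits workflow energy into a node-power term and a per-byte data-movement term. The functional form and every number in it are placeholders, not the empirically validated model of the paper.

def workflow_energy_joules(num_nodes, node_power_w, runtime_s,
                           bytes_moved, energy_per_byte_j):
    """Toy energy estimate for a simulation plus analytics run: node power
    times runtime, plus an energy cost per byte moved.  All coefficients are
    made-up placeholders."""
    compute = num_nodes * node_power_w * runtime_s
    data = bytes_moved * energy_per_byte_j
    return compute + data

# Hypothetical comparison: in-situ analysis vs. writing raw data out first.
in_situ = workflow_energy_joules(1024, 250.0, 3600.0, 2e12, 5e-10)
post_hoc = workflow_energy_joules(1024, 250.0, 3300.0, 4e13, 5e-10)
print(in_situ, post_hoc)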



Probabilistic Principal Geodesic Analysis
M. Zhang, P.T. Fletcher. In Proceedings of the 2013 Conference on Neural Information Processing Systems (NIPS), pp. (accepted). 2013.

Principal geodesic analysis (PGA) is a generalization of principal component analysis (PCA) for dimensionality reduction of data on a Riemannian manifold. Currently PGA is defined as a geometric fit to the data, rather than as a probabilistic model. Inspired by probabilistic PCA, we present a latent variable model for PGA that provides a probabilistic framework for factor analysis on manifolds. To compute maximum likelihood estimates of the parameters in our model, we develop a Monte Carlo Expectation Maximization algorithm, where the expectation is approximated by Hamiltonian Monte Carlo sampling of the latent variables. We demonstrate the ability of our method to recover the ground truth parameters in simulated sphere data, as well as its effectiveness in analyzing shape variability of a corpus callosum data set from human brain images.
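
The Euclidean special case of this latent variable model is probabilistic PCA, which the short Python sketch below samples from. PPGA replaces the linear map below with the Riemannian exponential map at the base point, so this is only the flat-space analog, with made-up parameter values.

import numpy as np

def sample_ppca(mu, W, sigma, n, rng):
    """Draw samples from the probabilistic PCA model y = mu + W x + noise,
    with x ~ N(0, I) and noise ~ N(0, sigma^2 I)."""
    d, q = W.shape
    x = rng.standard_normal((n, q))                   # latent factors
    noise = sigma * rng.standard_normal((n, d))
    return mu + x @ W.T + noise

rng = np.random.default_rng(4)
mu = np.zeros(3)
W = np.array([[2.0, 0.0], [0.0, 1.0], [0.5, 0.5]])    # 3D data, 2 latent dims
Y = sample_ppca(mu, W, sigma=0.1, n=500, rng=rng)
print(Y.shape, np.cov(Y.T).round(2))                  # covariance is about W W^T + sigma^2 I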



Image Segmentation with Cascaded Hierarchical Models and Logistic Disjunctive Normal Networks
M. Seyedhosseini, M. Sajjadi, T. Tasdizen. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2013), pp. (accepted). 2013.

Contextual information plays an important role in solving vision problems such as image segmentation. However, extracting contextual information and using it in an effective way remains a difficult problem. To address this challenge, we propose a multi-resolution contextual framework, called the cascaded hierarchical model (CHM), which learns contextual information in a hierarchical framework for image segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. We repeat this procedure by cascading the hierarchical framework to improve the segmentation accuracy. Multiple classifiers are learned in the CHM; therefore, a fast and accurate classifier is required to make the training tractable. The classifier also needs to be robust against overfitting due to the large number of parameters learned during training. We introduce a novel classification scheme, called logistic disjunctive normal networks (LDNN), which consists of one adaptive layer of feature detectors implemented by logistic sigmoid functions followed by two fixed layers of logical units that compute conjunctions and disjunctions, respectively. We demonstrate that LDNN outperforms state-of-the-art classifiers and can be used in the CHM to improve object segmentation performance.
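
The LDNN forward pass described above (sigmoid feature detectors, grouped soft conjunctions, then a soft disjunction over groups) can be sketched in a few lines of Python. The weights below are random placeholders, and the training procedure from the paper is not shown.

import numpy as np

def ldnn_forward(x, weights, biases):
    """Forward pass of a logistic disjunctive normal network: sigmoid units,
    products within each group (soft conjunctions), then a noisy-OR across
    groups (soft disjunction).

    weights, biases are organised as [groups][units_per_group]; in practice
    they would come from training, which this sketch does not implement.
    """
    group_outputs = []
    for w_group, b_group in zip(weights, biases):
        h = 1.0 / (1.0 + np.exp(-(np.dot(w_group, x) + b_group)))  # sigmoids
        group_outputs.append(np.prod(h))                           # conjunction
    g = np.asarray(group_outputs)
    return 1.0 - np.prod(1.0 - g)                                  # disjunction

# Toy example: 2 groups of 3 sigmoid units over a 4-dimensional feature vector.
rng = np.random.default_rng(5)
weights = rng.standard_normal((2, 3, 4))
biases = rng.standard_normal((2, 3))
print(ldnn_forward(rng.standard_normal(4), weights, biases))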