
SCI Publications

2017


M. Berzins, D. A. Bonnell, J. A. Cizewski, K. M. Heeger, A.J.G. Hey, C. J. Keane, B. A. Ramsey, K. A. Remington, J.L. Rempe. “Department of Energy, Advanced Scientific Computing Advisory Committee (ASCAC), Subcommittee on LDRD Review Final Report,” May, 2017.



M. Berzins. “Nonlinear Stability of the MPM Method,” In V International Conference on Particle-based Methods – Fundamentals and Applications. PARTICLES 2017, Edited by P. Wriggers, M. Bischoff, E. Oñate, D.R.J. Owen, & T. Zohdi, 2017.

ABSTRACT

The Material Point Method (MPM) has been very successful in providing solutions to many challenging problems involving large deformations. The nonlinear nature of MPM makes it necessary to use a full nonlinear stability analysis to determine a stable timestep. The stability analysis of Spigler and Vianello is adapted to MPM and used to derive a stable timestep bound for a model problem. This bound is contrasted against a traditional CFL bound.
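
For orientation, the traditional CFL bound that the derived bound is contrasted against has the standard textbook form shown below; this generic expression is not the paper's result:

    \Delta t \le C \, \frac{h}{c}, \qquad c = \sqrt{E/\rho}

where h is the grid spacing, c is the material wave speed given Young's modulus E and density \rho, and C is a Courant number. The nonlinear analysis instead derives the timestep restriction from the full nonlinear MPM update rather than from this linearized estimate.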



A. Bhatele, J. Yeom, N. Jain, C. J. Kuhlman, Y. Livnat, K. R. Bisset, L. V. Kale, M. V. Marathe. “Massively Parallel Simulations of Spread of Infectious Diseases over Realistic Social Networks,” In 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), May, 2017.
DOI: 10.1109/ccgrid.2017.141

ABSTRACT

Controlling the spread of infectious diseases in large populations is an important societal challenge. Mathematically, the problem is best captured as a certain class of reaction-diffusion processes (referred to as contagion processes) over appropriate synthesized interaction networks. Agent-based models have been successfully used in the recent past to study such contagion processes. We describe EpiSimdemics, a highly scalable, parallel code written in Charm++ that uses agent-based modeling to simulate the spread of disease over large, realistic, co-evolving interaction networks. We present a new parallel implementation of EpiSimdemics that achieves unprecedented strong and weak scaling on different architectures — Blue Waters, Cori and Mira. EpiSimdemics achieves five times greater speedup than the second fastest parallel code in this field. This unprecedented scaling is an important step to support the long-term vision of real-time epidemic science. Finally, we demonstrate the capabilities of EpiSimdemics by simulating the spread of influenza over a realistic synthetic social contact network spanning the continental United States (∼280 million nodes and 5.8 billion social contacts).
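
As a rough illustration of the contagion processes described here, the sketch below runs a discrete-time SIR-style process over a random contact network. It is a minimal serial stand-in, not EpiSimdemics (which is agent-based, written in Charm++, and runs at vastly larger scale); the network, parameters, and dynamics are invented for illustration.

    #include <cstdio>
    #include <random>
    #include <vector>

    // Minimal serial sketch of a network contagion (SIR) process. This is
    // NOT EpiSimdemics code; it only illustrates the kind of contagion
    // process over an interaction network that the paper parallelizes.
    enum State { S, I, R };

    int main() {
        const int n = 1000;           // agents (hypothetical)
        const double p_inf = 0.05;    // per-contact transmission probability
        const double p_rec = 0.1;     // per-step recovery probability
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> pick(0, n - 1);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        // Random contact network: each agent gets ~8 contacts.
        std::vector<std::vector<int>> contacts(n);
        for (int v = 0; v < n; ++v)
            for (int k = 0; k < 8; ++k) contacts[v].push_back(pick(rng));

        std::vector<State> state(n, S), next(n);
        state[0] = I;  // seed one infection

        for (int step = 0; step < 100; ++step) {
            next = state;
            for (int v = 0; v < n; ++v) {
                if (state[v] != I) continue;
                if (u(rng) < p_rec) next[v] = R;
                for (int w : contacts[v])
                    if (state[w] == S && u(rng) < p_inf) next[w] = I;
            }
            state = next;
            int inf = 0;
            for (State s : state) inf += (s == I);
            std::printf("step %d: %d infected\n", step, inf);
        }
    }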



M. Chen, G. Grinstein, C. R. Johnson, J. Kennedy, M. Tory. “Pathways for Theoretical Advances in Visualization,” In IEEE Computer Graphics and Applications, IEEE, pp. 103--112. July, 2017.

ABSTRACT

More than a decade ago, Chris Johnson proposed the "Theory of Visualization" as one of the top research problems in visualization. Since then, there have been several theory-focused events, including three workshops and three panels at IEEE Visualization (VIS) Conferences. Together, these events have produced a set of convincing arguments for pursuing theoretical advances in visualization.



C. Gritton, J. Guilkey, J. Hooper, D. Bedrov, R. M. Kirby, M. Berzins. “Using the material point method to model chemical/mechanical coupling in the deformation of a silicon anode,” In Modelling and Simulation in Materials Science and Engineering, Vol. 25, No. 4, pp. 045005. 2017.

ABSTRACT

The lithiation and delithiation of a silicon battery anode is modeled using the material point method (MPM). The main challenge in modeling this process with the MPM is to simulate stress-dependent diffusion coupled with concentration-dependent stress within a material that undergoes large deformations. MPM is chosen because of its ability to handle large deformations. A method for modeling diffusion within MPM is described. A stress-dependent model for diffusivity and three different constitutive models that fully couple the equations for stress with the equations for diffusion are considered. Verification tests for the accuracy of the numerical implementations of the models and validation tests against experimental results show the accuracy of the approach. The fully coupled stress-diffusion model implemented in MPM is then applied to the lithiation and delithiation of silicon nanopillars.
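
Schematically, the two-way coupling described here pairs a stress-dependent diffusion law with a concentration-dependent constitutive model. The generic form below is for orientation only; it is not one of the paper's three constitutive models:

    \frac{\partial c}{\partial t} = \nabla \cdot \big( D(\sigma)\, \nabla c \big), \qquad \sigma = \sigma(\varepsilon, c)

where the concentration field c enters the stress response through the constitutive model, while the stress state \sigma feeds back into the diffusivity D.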



J. K. Holmen, A. Humphrey, D. Sutherland, M. Berzins. “Improving Uintah's Scalability Through the Use of Portable Kokkos-Based Data Parallel Tasks,” In Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, PEARC17, No. 27, pp. 27:1--27:8. 2017.
ISBN: 978-1-4503-5272-7
DOI: 10.1145/3093338.3093388

ABSTRACT

The University of Utah's Carbon Capture Multidisciplinary Simulation Center (CCMSC) is using the Uintah Computational Framework to predict performance of a 1000 MWe ultra-supercritical clean coal boiler. The center aims to utilize the Intel Xeon Phi-based DOE systems, Theta and Aurora, through the Aurora Early Science Program by using the Kokkos C++ library to enable node-level performance portability. This paper describes infrastructure advancements and portability improvements made possible by our integration of Kokkos within Uintah. Scalability results are presented that compare serial and data parallel task execution models for a challenging radiative heat transfer calculation, central to the center's predictive boiler simulations. These results demonstrate good strong-scaling characteristics to 256 Knights Landing (KNL) processors on the NSF Stampede system and show that the KNL-based calculation is competitive with prior GPU-based results for the same calculation.
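
For readers unfamiliar with Kokkos, the minimal sketch below shows the node-level data-parallel pattern: the same lambda body compiles to OpenMP threads on KNL or CUDA threads on a GPU, selected at build time by the chosen execution space. This is generic Kokkos usage, not Uintah code.

    #include <Kokkos_Core.hpp>
    #include <cstdio>

    // Minimal Kokkos data-parallel sketch (not Uintah code): the same
    // loop bodies run portably on CPU threads or GPU threads depending
    // on how Kokkos is configured at build time.
    int main(int argc, char* argv[]) {
      Kokkos::initialize(argc, argv);
      {
        const int n = 1 << 20;
        Kokkos::View<double*> x("x", n), y("y", n);

        // Data-parallel initialization.
        Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
          x(i) = 1.0; y(i) = 2.0;
        });

        // Data-parallel reduction (dot product).
        double dot = 0.0;
        Kokkos::parallel_reduce("dot", n,
          KOKKOS_LAMBDA(const int i, double& sum) { sum += x(i) * y(i); },
          dot);
        std::printf("dot = %f\n", dot);
      }
      Kokkos::finalize();
    }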



T.A.J. Ouermi, A. Knoll, R.M. Kirby, M. Berzins. “OpenMP 4 Fortran Modernization of WSM6 for KNL,” In Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, PEARC17, No. 12, ACM, pp. 12:1--12:8. 2017.
ISBN: 978-1-4503-5272-7
DOI: 10.1145/3093338.3093387

ABSTRACT

Parallel code portability in the petascale era requires modifying existing codes to support new architectures with large core counts and SIMD vector units. OpenMP is a well established and increasingly supported vehicle for portable parallelization. As architectures mature and compiler OpenMP implementations evolve, best practices for code modernization change as well. In this paper, we examine the impact of newer OpenMP features (in particular OMP SIMD) on the Intel Xeon Phi Knights Landing (KNL) architecture, applied in optimizing loops in the single moment 6-class microphysics module (WSM6) in the US Navy's NEPTUNE code. We find that with functioning OMP SIMD constructs, low thread invocation overhead on KNL and reduced penalty for unaligned access compared to previous architectures, one can leverage OpenMP 4 to achieve reasonable scalability with relatively minor reorganization of a production physics code.
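
The OMP SIMD construct discussed above looks roughly as follows. The sketch is in C++ rather than the paper's Fortran (where the directive is spelled !$omp simd), and the loop body is a made-up stand-in for WSM6 physics, not actual NEPTUNE code.

    #include <cstdio>

    // Illustrative OpenMP 4 pattern: threads share the outer i loop
    // while each thread vectorizes the inner k loop with OMP SIMD.
    int main() {
      const int ni = 1024, nk = 64;
      static double t[1024][64], qv[1024][64];  // zero-initialized

      #pragma omp parallel for
      for (int i = 0; i < ni; ++i) {
        #pragma omp simd
        for (int k = 0; k < nk; ++k) {
          // hypothetical microphysics-style update, not WSM6 code
          qv[i][k] = t[i][k] * 0.5 + 1.0e-3;
        }
      }
      std::printf("qv[0][0] = %f\n", qv[0][0]);
      return 0;
    }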



T.A.J. Ouermi, A. Knoll, R.M. Kirby, M. Berzins. “Optimization Strategies for WRF Single-Moment 6-Class Microphysics Scheme (WSM6) on Intel Microarchitectures,” In Proceedings of the Fifth International Symposium on Computing and Networking (CANDAR '17), Awarded Best Paper, IEEE, 2017.

ABSTRACT

Optimizations in the petascale era require modifications of existing codes to take advantage of new architectures with large core counts and SIMD vector units. This paper examines high-level and low-level optimization strategies for numerical weather prediction (NWP) codes. These strategies employ thread-local structures of arrays (SOA) and OpenMP directives such as OMP SIMD. These optimization approaches are applied to the Weather Research and Forecasting single-moment 6-class microphysics scheme (WSM6) in the US Navy NEPTUNE system. The results of this study indicate that the high-level approach with SOA and low-level OMP SIMD improves thread and vector parallelism by increasing data and temporal locality. The modified version of WSM6 runs 70x faster than the original serial code. This improvement is about 23.3x faster than the performance achieved by Ouermi et al., and 14.9x faster than the performance achieved by Michalakes et al.
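
The structure-of-arrays transformation mentioned above can be illustrated as in this hypothetical sketch; the field names are invented and do not correspond to WSM6 variables.

    #include <cstdio>
    #include <vector>

    struct CellAOS { double t, qv, qr; };   // AOS: fields interleaved in memory

    struct ColumnSOA {                      // SOA: each field is contiguous
      std::vector<double> t, qv, qr;
      explicit ColumnSOA(int n) : t(n), qv(n), qr(n) {}
    };

    int main() {
      const int n = 1 << 16;
      ColumnSOA col(n);

      // Unit-stride access over a single SOA field vectorizes cleanly,
      // unlike striding through interleaved AOS records.
      #pragma omp simd
      for (int k = 0; k < n; ++k)
        col.qv[k] = col.t[k] * 0.5;   // hypothetical update, not WSM6 physics

      std::printf("qv[0] = %f\n", col.qv[0]);
    }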



B. Peterson, A. Humphrey, J. Schmidt, M. Berzins. “Addressing Global Data Dependencies in Heterogeneous Asynchronous Runtime Systems on GPUs,” Awarded Best Paper, In Proceedings of the Third International Workshop on Extreme Scale Programming Models and Middleware - ESPM2'17, ACM, 2017.
DOI: 10.1145/3152041.3152082

ABSTRACT

Large-scale parallel applications with complex global data dependencies beyond those of reductions pose significant scalability challenges in an asynchronous runtime system. Internodal challenges include identifying the all-to-all communication of data dependencies among the nodes. Intranodal challenges include gathering these data dependencies into usable data objects while avoiding data duplication. This paper addresses these challenges within the context of a large-scale, industrial coal boiler simulation using the Uintah asynchronous many-task runtime system on GPU architectures. We show a significant reduction in the time spent analyzing data dependencies through refinements in our dependency search algorithm. Multiple task graphs are used to eliminate subsequent analysis when task graphs change in predictable and repeatable ways. A combined redesign of the data store and task scheduler reduces data-dependency duplication, ensuring that problems fit within host and GPU memory. These modifications did not require any changes to application code or sweeping changes to the Uintah runtime system. We report results running on the DOE Titan system on 119K CPU cores and 7.5K GPUs simultaneously. Our solutions can be generalized to other task-dependency problems with global dependencies among thousands of nodes that must be processed efficiently at large scale.
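
One idea in this abstract, reusing multiple task graphs so that repeated timesteps skip dependency analysis, can be sketched as simple memoization keyed by graph variant. Everything below is hypothetical and bears no relation to Uintah's actual data structures or API.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Hedged sketch: cache the result of expensive dependency analysis
    // per task-graph variant, so recurring graphs are analyzed only once.
    struct CompiledGraph { std::vector<int> sendList; /* ...analysis results... */ };

    CompiledGraph analyzeDependencies(const std::string& variant) {
      std::printf("analyzing %s (expensive)\n", variant.c_str());
      return CompiledGraph{};
    }

    CompiledGraph& getGraph(const std::string& variant) {
      static std::map<std::string, CompiledGraph> cache;
      auto it = cache.find(variant);
      if (it == cache.end())
        it = cache.emplace(variant, analyzeDependencies(variant)).first;
      return it->second;  // later timesteps reuse the compiled graph
    }

    int main() {
      for (int step = 0; step < 5; ++step)
        getGraph(step % 2 ? "with-radiation" : "standard");  // analyzed once each
    }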



W. Usher, J. Amstutz, C. Brownlee, A. Knoll, I. Wald. “Progressive CPU Volume Rendering with Sample Accumulation,” In Eurographics Symposium on Parallel Graphics and Visualization, Edited by Alexandru Telea and Janine Bennett, The Eurographics Association, 2017.
ISBN: 978-3-03868-034-5
ISSN: 1727-348X
DOI: 10.2312/pgv.20171090

ABSTRACT

We present a new method for progressive volume rendering by accumulating object-space samples over successively rendered frames. Existing methods for progressive refinement either use image space methods or average pixels over frames, which can blur features or integrate incorrectly with respect to depth. Our approach stores samples along each ray, accumulates new samples each frame into a buffer, and progressively interleaves and integrates these samples. Though this process requires additional memory, it ensures interactivity and is well suited for CPU architectures with large memory and cache. This approach also extends well to distributed rendering in cluster environments. We implement this technique in Intel's open source OSPRay CPU ray tracing framework and demonstrate that it is particularly useful for rendering volumetric data with costly sampling functions.
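
The core accumulation idea can be sketched per ray as below: new object-space samples are merged into a persistent, depth-sorted buffer, then composited front to back each frame. This is an illustration of the stated approach, not OSPRay code; all names are hypothetical.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Sample { float depth, value, alpha; };

    // Accumulate this frame's samples into the ray's persistent buffer,
    // keeping it sorted by depth so integration stays correct.
    void accumulate(std::vector<Sample>& buf, std::vector<Sample> fresh) {
      buf.insert(buf.end(), fresh.begin(), fresh.end());
      std::sort(buf.begin(), buf.end(),
                [](const Sample& a, const Sample& b) { return a.depth < b.depth; });
    }

    // Standard front-to-back compositing over the accumulated samples.
    float integrate(const std::vector<Sample>& buf) {
      float color = 0.f, transmittance = 1.f;
      for (const Sample& s : buf) {
        color += transmittance * s.alpha * s.value;
        transmittance *= (1.f - s.alpha);
      }
      return color;
    }

    int main() {
      std::vector<Sample> ray;
      accumulate(ray, {{0.2f, 1.0f, 0.1f}, {0.6f, 0.5f, 0.2f}});  // frame 1
      accumulate(ray, {{0.4f, 0.8f, 0.1f}});           // frame 2, interleaved
      std::printf("pixel = %f\n", integrate(ray));
    }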



W. Usher, P. Klacansky, F. Federer, P. T. Bremer, A. Knoll, J. Yarch, A. Angelucci, V. Pascucci. “A Virtual Reality Visualization Tool for Neuron Tracing,” In IEEE Transactions on Visualization and Computer Graphics, IEEE, 2017.
ISSN: 1077-2626
DOI: 10.1109/TVCG.2017.2744079

ABSTRACT

Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.



Y. Wan, C. Hansen. “Uncertainty Footprint: Visualization of Nonuniform Behavior of Iterative Algorithms Applied to 4D Cell Tracking,” In Computer Graphics Forum, Wiley, 2017.

ABSTRACT

Research on microscopy data from developing biological samples usually requires tracking individual cells over time. When cells are three-dimensionally and densely packed in a time-dependent scan of volumes, tracking results can become unreliable and uncertain. Not only are cell segmentation results often inaccurate to start with, but there is also no simple method to evaluate the tracking outcome. Previous cell tracking methods have been validated against benchmark data from real scans or artificial data, whose ground truth results are established by manual work or simulation. However, the wide variety of real-world data makes an exhaustive validation impossible. Established cell tracking tools often fail on new data, and their issues are difficult to diagnose with manual examination alone. Therefore, data-independent tracking evaluation methods are needed as microscopy data continue to grow in scale and resolution. In this paper, we propose the uncertainty footprint, an uncertainty quantification and visualization technique that examines nonuniformity at local convergence for an iterative evaluation process on a spatial domain supported by partially overlapping bases. We demonstrate that the patterns revealed by the uncertainty footprint indicate data processing quality in two algorithms from a typical cell tracking workflow – cell identification and association. A detailed analysis of the patterns further allows us to diagnose issues and design methods for improvements. A 4D cell tracking workflow equipped with the uncertainty footprint is capable of self-diagnosis and correction for a higher accuracy than previous methods whose evaluation is limited by manual examination.
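
Loosely, the nonuniform-convergence idea can be illustrated by recording iterations-to-converge per site of an iterative solver and inspecting abrupt local differences. The fixed-point iteration below is a toy stand-in, not the paper's cell identification or association algorithms.

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Toy iterative process: count iterations until a fixed point is reached.
    int itersToConverge(double x0) {
      double x = x0;
      int it = 0;
      while (std::fabs(x - std::cos(x)) > 1e-8 && it < 200) { x = std::cos(x); ++it; }
      return it;
    }

    int main() {
      const int n = 64;
      std::vector<int> iters(n);
      for (int i = 0; i < n; ++i) iters[i] = itersToConverge(0.1 * i);

      // Nonuniformity between neighboring sites is the kind of pattern
      // an uncertainty "footprint" would surface visually.
      int worst = 0, where = 0;
      for (int i = 1; i < n; ++i) {
        int jump = std::abs(iters[i] - iters[i - 1]);
        if (jump > worst) { worst = jump; where = i; }
      }
      std::printf("largest convergence jump %d at site %d\n", worst, where);
    }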



Y. Wan, H. Otsuna, H. A. Holman, B. Bagley, M. Ito, A. K. Lewis, M. Colasanto, G. Kardon, K. Ito, C. Hansen. “FluoRender: joint freehand segmentation and visualization for many-channel fluorescence data analysis,” In BMC Bioinformatics, Vol. 18, No. 1, Springer Nature, May, 2017.
DOI: 10.1186/s12859-017-1694-9

ABSTRACT

Background:
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations.

Results:
Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender.

Conclusion:
The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.
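
The design goal of preserving original intensities while compositing many channels can be sketched as follows: raw values are never overwritten, and per-channel display adjustments are applied only when blending. This is a conceptual sketch, not FluoRender's GPU renderer; all fields are hypothetical.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Channel {
      std::vector<float> intensity;  // original fluorescence values, never modified
      float gamma, gain;             // interactive display adjustments
      float r, g, b;                 // assigned pseudo-color
    };

    // Blend all channels at one voxel, mapping intensities at display time.
    void compositeVoxel(const std::vector<Channel>& chans, int v, float rgb[3]) {
      rgb[0] = rgb[1] = rgb[2] = 0.f;
      for (const Channel& c : chans) {
        float s = std::pow(c.intensity[v], c.gamma) * c.gain;
        rgb[0] += s * c.r;  rgb[1] += s * c.g;  rgb[2] += s * c.b;
      }
    }

    int main() {
      std::vector<Channel> chans = {
        {{0.8f}, 1.0f, 1.0f, 1.f, 0.f, 0.f},   // one-voxel red channel
        {{0.3f}, 0.5f, 2.0f, 0.f, 1.f, 0.f},   // one-voxel green channel
      };
      float rgb[3];
      compositeVoxel(chans, 0, rgb);
      std::printf("rgb = %f %f %f\n", rgb[0], rgb[1], rgb[2]);
    }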


2016


K. Aras, B. Burton, D. Swenson, R.S. MacLeod. “Spatial organization of acute myocardial ischemia,” In Journal of Electrocardiology, Vol. 49, No. 3, Elsevier, pp. 323--336. May, 2016.

ABSTRACT

Introduction
Myocardial ischemia is a pathological condition initiated by supply and demand imbalance of the blood to the heart. Previous studies suggest that ischemia originates in the subendocardium, i.e., that nontransmural ischemia is limited to the subendocardium. By contrast, we hypothesized that acute myocardial ischemia is not limited to the subendocardium and sought to document its spatial distribution in an animal preparation. The goal of these experiments was to investigate the spatial organization of ischemia and its relationship to the resulting shifts in ST segment potentials during short episodes of acute ischemia.

Methods
We conducted acute ischemia studies in open-chest canines (N = 19) and swine (N = 10), which entailed creating carefully controlled ischemia using demand, supply, or complete occlusion ischemia protocols and recording intramyocardial and epicardial potentials. Elevation of the potentials at 40% of the ST segment between the J-point and the peak of the T-wave (ST40%) provided the metric for local ischemia. The threshold for ischemic ST segment elevations was defined as two standard deviations away from the baseline values.
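
The ST40% metric as described can be computed in a few lines: sample the electrogram at 40% of the J-point-to-T-peak interval and apply the two-standard-deviation baseline threshold. The sketch below simplifies the signal handling and uses made-up data; it is not the study's actual analysis pipeline.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Potential at 40% of the interval from the J-point to the T-wave peak.
    double st40(const std::vector<double>& beat, int jPoint, int tPeak) {
      int idx = jPoint + static_cast<int>(0.4 * (tPeak - jPoint));
      return beat[idx];
    }

    // Ischemic if elevation exceeds baseline mean by two standard deviations.
    bool isIschemic(double st, const std::vector<double>& baseline) {
      double mean = 0.0, var = 0.0;
      for (double b : baseline) mean += b;
      mean /= baseline.size();
      for (double b : baseline) var += (b - mean) * (b - mean);
      return st > mean + 2.0 * std::sqrt(var / baseline.size());
    }

    int main() {
      std::vector<double> beat(201, 0.1);
      beat[140] = 0.6;  // elevated ST segment (made-up data)
      std::vector<double> baseline = {0.10, 0.12, 0.09, 0.11, 0.10};
      double st = st40(beat, 100, 200);
      std::printf("ST40 = %.2f, ischemic: %s\n", st,
                  isIschemic(st, baseline) ? "yes" : "no");
    }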

Results
The relative frequency of occurrence of acute ischemia was higher in the subendocardium (78% for canines and 94% for swine) and the mid-wall (87% for canines and 97% for swine) in comparison with the subepicardium (30% for canines and 22% for swine). In addition, acute ischemia was seen arising throughout the myocardium (distributed pattern) in 87% of the canine and 94% of the swine episodes. Alternatively, acute ischemia was seen originating only in the subendocardium (subendocardial pattern) in 13% of the canine episodes and 6% of the swine episodes (p < 0.05).

Conclusions
Our findings suggest that the spatial distribution of acute ischemia is a complex phenomenon arising throughout the myocardial wall and is not limited to the subendocardium.



P.R. Atkins, S.Y. Elhabian, P. Agrawal, M.D. Harris, R.T. Whitaker, J.A. Weiss, C.L. Peters, A.E. Anderson. “Quantitative comparison of cortical bone thickness using correspondence-based shape modeling in patients with cam femoroacetabular impingement,” In Journal of Orthopaedic Research, Wiley-Blackwell, Nov, 2016.
DOI: 10.1002/jor.23468

ABSTRACT

The proximal femur is abnormally shaped in patients with cam-type femoroacetabular impingement (FAI). Impingement may elicit bone remodeling at the proximal femur, causing increases in cortical bone thickness. We used correspondence-based shape modeling to quantify and compare cortical thickness between cam patients and controls for the location of the cam lesion and the proximal femur. Computed tomography images were segmented for 45 controls and 28 cam-type FAI patients. The segmentations were input to a correspondence-based shape model to identify the region of the cam lesion. Median cortical thickness data over the region of the cam lesion and the proximal femur were compared between mixed-gender and gender-specific groups. Median [interquartile range] thickness was significantly greater in FAI patients than controls in the cam lesion (1.47 [0.64] vs. 1.13 [0.22] mm, respectively; p < 0.001) and proximal femur (1.28 [0.30] vs. 0.97 [0.22] mm, respectively; p < 0.001). Maximum thickness in the region of the cam lesion was more anterior and less lateral (p < 0.001) in FAI patients. Male FAI patients had increased thickness compared to male controls in the cam lesion (1.47 [0.72] vs. 1.10 [0.19] mm, respectively; p < 0.001) and proximal femur (1.25 [0.29] vs. 0.94 [0.17] mm, respectively; p < 0.001). Thickness was not significantly different between male and female controls. Clinical significance: Studies of non-pathologic cadavers have provided guidelines regarding safe surgical resection depth for FAI patients. However, our results suggest impingement induces cortical thickening in cam patients, which may strengthen the proximal femur. Thus, these previously established guidelines may be too conservative.



J.L. Baker, J. Ryou, X.F. Wei, C.R. Butson, N.D. Schiff, K.P. Purpura. “Robust modulation of arousal regulation, performance, and frontostriatal activity through central thalamic deep brain stimulation in healthy nonhuman primates,” In Journal of Neurophysiology, Vol. 116, No. 5, American Physiological Society, pp. 2383--2404. Aug, 2016.
DOI: 10.1152/jn.01129.2015

ABSTRACT

The central thalamus (CT) is a key component of the brain-wide network underlying arousal regulation and sensory-motor integration during wakefulness in the mammalian brain. Dysfunction of the CT, typically a result of severe brain injury (SBI), leads to long-lasting impairments in arousal regulation and subsequent deficits in cognition. Central thalamic deep brain stimulation (CT-DBS) is proposed as a therapy to reestablish and maintain arousal regulation to improve cognition in select SBI patients. However, a mechanistic understanding of CT-DBS and an optimal method of implementing this promising therapy are unknown. Here we demonstrate in two healthy nonhuman primates (NHPs), Macaca mulatta, that location-specific CT-DBS improves performance in visuomotor tasks and is associated with physiological effects consistent with enhancement of endogenous arousal. Specifically, CT-DBS within the lateral wing of the central lateral nucleus and the surrounding medial dorsal thalamic tegmental tract (DTTm) produces a rapid and robust modulation of performance and arousal, as measured by neuronal activity in the frontal cortex and striatum. Notably, the most robust and reliable behavioral and physiological responses resulted when we implemented a novel method of CT-DBS that orients and shapes the electric field within the DTTm using spatially separated DBS leads. Collectively, our results demonstrate that selective activation within the DTTm of the CT robustly regulates endogenous arousal and enhances cognitive performance in the intact NHP; these findings provide insights into the mechanism of CT-DBS and further support the development of CT-DBS as a therapy for reestablishing arousal regulation to support cognition in SBI patients.



J. Beckvermit, T. Harman, C. Wight, M. Berzins. “Physical Mechanisms of DDT in an Array of PBX 9501 Cylinders,” SCI Institute, April, 2016.

ABSTRACT

The Deflagration to Detonation Transition (DDT) in large arrays (100s) of explosive devices is investigated using large-scale computer simulations running the Uintah Computational Framework. Our particular interest is understanding the fundamental physical mechanisms by which convective deflagration of cylindrical PBX 9501 devices can transition to a fully-developed detonation in transportation accidents. The simulations reveal two dominant mechanisms, inertial confinement and Impact to Detonation Transition. In this study we examined the role of physical spacing of the cylinders and how it influenced the initiation of DDT.



J. Beckvermit, T. Harman, C. Wight, M. Berzins. “Packing Configurations of PBX-9501 Cylinders to Reduce the Probability of a Deflagration to Detonation Transition (DDT),” In Propellants, Explosives, Pyrotechnics, 2016.
ISSN: 1521-4087
DOI: 10.1002/prep.201500331

ABSTRACT

The detonation of hundreds of explosive devices from either a transportation or storage accident is an extremely dangerous event. This paper focuses on identifying ways of packing/storing arrays of explosive cylinders that will reduce the probability of a Deflagration to Detonation Transition (DDT). The Uintah Computational Framework was utilized to predict the conditions necessary for a large scale DDT to occur. The results showed that the arrangement of the explosive cylinders and the number of devices packed in a "box" greatly affects the probability of a detonation.



M. Berzins, J. Beckvermit, T. Harman, A. Bezdjian, A. Humphrey, Q. Meng, J. Schmidt, C. Wight. “Extending the Uintah Framework through the Petascale Modeling of Detonation in Arrays of High Explosive Devices,” In SIAM Journal on Scientific Computing (Accepted), 2016.

ABSTRACT

The Uintah framework for solving a broad class of fluid-structure interaction problems uses a layered taskgraph approach that decouples the problem specification as a set of tasks from the adaptive runtime system that executes these tasks. Uintah has been developed using a problem-driven approach since its inception. Using this approach it is possible to improve the performance of the problem-independent software components to enable the solution of broad classes of problems as well as the driving problem itself. This process is illustrated by a motivating problem: the computational modeling of the hazards posed by thousands of explosive devices during a Deflagration to Detonation Transition (DDT) that occurred on Highway 6 in Utah. In order to solve this complex fluid-structure interaction problem at the required scale, algorithmic and data structure improvements were needed in a code that already appeared to work well at scale. These transformations enabled scalable runs for our target problem and provided the capability to model the transition to detonation. The performance improvements achieved are shown, and the solution to the target problem provides insight as to why the detonation happened, as well as a possible remediation strategy.



A. Bigelow, R. Choudhury, J. Baumes. “Resonant Laboratory and Candela: Spreading Your Visualization Ideas to the Masses,” In Proceedings of Workshop on Visualization in Practice (VIP '16), Note: Best Paper Award, 2016.

ABSTRACT

Visualization practitioners are constantly developing new, innovative ways to visualize data, but much of the software that practitioners produce does not make it into production in professional systems. To solve this problem, we have developed and informally tested two open source systems. The first, Candela, is a framework and API for creating visualization components for the web that can wrap up new or existing visualizations as needed. Because Candela's API generalizes the inputs to a visualization, we have also developed a system called Resonant Laboratory that makes it possible for novice users to connect arbitrary datasets to Candela visualizations. Together, these systems enable novice users to explore and share their data with the growing library of state-of-the-art visualization techniques.