
SCI Publications

2022


D. Tong, N. Soley, R. Kolasangiani, M.A. Schwartz, T.C. Bidone. “αIIbβ3 integrin intermediates: from molecular dynamics to adhesion assembly,” In Biophysical Journal, 2022.

ABSTRACT

The platelet integrin αIIbβ3 undergoes long range conformational transitions associated with its functional conversion from inactive (low affinity) to active (high affinity) states during hemostasis. Although new conformations intermediate between the well-characterized bent and extended states have been identified, their molecular dynamic properties and functions in the assembly of adhesions remain largely unexplored. In this study, we evaluated the properties of intermediate conformations of integrin αIIbβ3 and characterized their effects on the assembly of adhesions by combining all-atom simulations, principal component analysis, and mesoscale modeling. Our results show that in the low affinity, bent conformation, the integrin ectodomain tends to pivot around the legs; in intermediate conformations the upper headpiece becomes partially extended, away from the lower legs. In the fully open, active state, αIIbβ3 is flexible and the motions between upper headpiece and lower legs are accompanied by fluctuations of the transmembrane helices. At the mesoscale, bent integrins form only unstable adhesions, but intermediate or open conformations stabilize the adhesions. These studies reveal a mechanism by which small variations in ligand binding affinity and enhancement of the ligand-bound lifetime in the presence of actin retrograde flow stabilize αIIbβ3 integrin adhesions.
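
The analysis combines all-atom simulations with principal component analysis (PCA) to extract dominant conformational motions. As a hedged illustration of that step only, the sketch below runs PCA on a synthetic stand-in for an aligned MD trajectory; the array shapes and variable names are assumptions, not the authors' data or code.

```python
import numpy as np

# Hypothetical trajectory: n_frames snapshots of n_atoms atoms in 3D,
# assumed already RMSD-aligned to a reference structure.
rng = np.random.default_rng(0)
n_frames, n_atoms = 500, 120
traj = rng.normal(size=(n_frames, n_atoms * 3))  # synthetic stand-in for MD data

# Center the coordinates over time, form the covariance matrix, and
# diagonalize it; eigenvectors are the principal (collective) motions.
centered = traj - traj.mean(axis=0)
cov = centered.T @ centered / (n_frames - 1)
evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
evals, evecs = evals[::-1], evecs[:, ::-1]    # reorder to descending

# Project each frame onto the first two principal components; for an
# integrin trajectory, these projections would separate bent, intermediate,
# and extended conformations.
scores = centered @ evecs[:, :2]
print("variance explained by PC1, PC2:", evals[:2] / evals.sum())
```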



H. D. Tran, M. Fernando, K. Saurabh, B. Ganapathysubramanian, R. M. Kirby, H. Sundar. “A scalable adaptive-matrix SPMV for heterogeneous architectures,” In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), IEEE, pp. 13--24. 2022.
DOI: 10.1109/IPDPS53621.2022.00011

ABSTRACT

In most computational codes, the core computational kernel is the Sparse Matrix-Vector product (SpMV), which enables specialized linear algebra libraries such as PETSc to be used, especially in the distributed memory setting. However, optimizing SpMV performance and scalability at all levels of a modern heterogeneous architecture can be challenging, as SpMV is characterized by irregular memory access. This work presents a hybrid approach (HyMV) for evaluating SpMV for matrices arising from PDE discretization schemes such as the finite element method (FEM). The approach enables localized structured memory access that provides improved performance and scalability. Additionally, it simplifies programmability and portability on different architectures. The developed HyMV approach enables efficient parallelization using MPI, SIMD, OpenMP, and CUDA with minimal programming effort. We present a detailed comparison of HyMV with the two traditional approaches in computational codes, matrix-assembled and matrix-free, for structured and unstructured meshes. Our results demonstrate that the HyMV approach achieves excellent scalability and outperforms both approaches, e.g., achieving average speedups of 11x for matrix setup, 1.7x for SpMV with structured meshes, 3.6x for SpMV with unstructured meshes, and 7.5x for GPU SpMV.
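
The abstract contrasts HyMV with the matrix-assembled and matrix-free baselines. The toy sketch below shows those two baselines for a 1D Laplacian stencil (a stand-in for a PDE discretization); it illustrates the distinction only, not the paper's HyMV implementation.

```python
import numpy as np
from scipy.sparse import diags

n = 1_000_000
x = np.random.rand(n)

# Matrix-assembled: build and store the sparse matrix once, then reuse it.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
y_assembled = A @ x

# Matrix-free: apply the same stencil on the fly, storing no matrix and
# trading memory traffic for recomputation.
def apply_laplacian(u):
    y = 2.0 * u
    y[1:] -= u[:-1]   # subtract left neighbor
    y[:-1] -= u[1:]   # subtract right neighbor
    return y

print(np.allclose(y_assembled, apply_laplacian(x)))  # True
```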



W. Usher, J. Amstutz, J. Günther, A. Knoll, G. P. Johnson, C. Brownlee, A. Hota, B. Cherniak, T. Rowley, J. Jeffers, V. Pascucci. “Scalable CPU Ray Tracing for In Situ Visualization Using OSPRay,” In In Situ Visualization for Computational Science, Springer International Publishing, pp. 353--374. 2022.
ISBN: 978-3-030-81627-8

ABSTRACT

In situ visualization increasingly involves rendering large numbers of images for post hoc exploration. As both the number of images to be rendered and the data being rendered are large, the scalability of the rendering component is of key concern. Furthermore, the renderer must be able to support a wide range of data distributions, simulation configurations, and HPC systems to provide the flexibility required for a portable, general purpose in situ rendering package. In this chapter, we discuss recent developments in OSPRay’s support for MPI-parallel applications to provide a flexible and scalable rendering API, with a focus on how these developments can be applied to enable scalable, high-quality in situ visualization.



A. Venkat, D. Hoang, A. Gyulassy, P.T. Bremer, F. Federer, V. Pascucci. “High-Quality Progressive Alignment of Large 3D Microscopy Data,” In 2022 IEEE 12th Symposium on Large Data Analysis and Visualization (LDAV), pp. 1--10. 2022.
DOI: 10.1109/LDAV57265.2022.9966406

ABSTRACT

Large-scale three-dimensional (3D) microscopy acquisitions frequently create terabytes of image data at high resolution and magnification. Imaging large specimens at high magnifications requires acquiring 3D overlapping image stacks as tiles arranged on a two-dimensional (2D) grid that must subsequently be aligned and fused into a single 3D volume. Due to their sheer size, aligning many overlapping gigabyte-sized 3D tiles in parallel and at full resolution is memory intensive and often I/O bound. Current techniques trade accuracy for scalability, perform alignment on subsampled images, and require additional postprocess algorithms to refine the alignment quality, usually with high computational requirements. One common solution to the memory problem is to subdivide the overlap region into smaller chunks (sub-blocks) and align the sub-block pairs in parallel, choosing the pair with the most reliable alignment to determine the global transformation. Yet aligning all sub-block pairs at full resolution remains computationally expensive. The key to quickly developing a fast, high-quality, low-memory solution is to identify a single or a small set of sub-blocks that give good alignment at full resolution without touching all the overlapping data. In this paper, we present a new iterative approach that leverages coarse resolution alignments to progressively refine and align only the promising candidates at finer resolutions, thereby aligning only a small user-defined number of sub-blocks at full resolution to determine the lowest error transformation between pairwise overlapping tiles. Our progressive approach is 2.6x faster than the state of the art, requires less than 450 MB of peak RAM (per parallel thread), and offers a higher quality alignment without the need for additional postprocessing refinement steps to correct for alignment errors.
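
The core idea (score sub-block alignments coarsely, then refine only the most promising candidates at full resolution) can be sketched in a few lines. The example below is a toy illustration under stated assumptions: normalized cross-correlation stands in for the real registration step, and small synthetic overlap regions replace gigabyte-scale tiles.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized blocks."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

# Synthetic overlap regions of two neighboring tiles.
rng = np.random.default_rng(1)
overlap_a = rng.normal(size=(256, 256))
overlap_b = overlap_a + 0.05 * rng.normal(size=(256, 256))

# Pass 1: score every sub-block pair at coarse resolution (4x subsampled).
bs = 64
blocks = [(r, c) for r in range(0, 256, bs) for c in range(0, 256, bs)]
coarse = {(r, c): ncc(overlap_a[r:r+bs:4, c:c+bs:4],
                      overlap_b[r:r+bs:4, c:c+bs:4]) for r, c in blocks}

# Pass 2: re-score only the k most promising candidates at full resolution,
# so most of the full-resolution data is never touched.
k = 3
top = sorted(coarse, key=coarse.get, reverse=True)[:k]
best = max(top, key=lambda rc: ncc(overlap_a[rc[0]:rc[0]+bs, rc[1]:rc[1]+bs],
                                   overlap_b[rc[0]:rc[0]+bs, rc[1]:rc[1]+bs]))
print("most reliable sub-block:", best)
```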



Z. Wang, Y. Xu, C. Tillinghast, S. Li, A. Narayan, S. Zhe. “Nonparametric Embeddings of Sparse High-Order Interaction Events,” In Proceedings of the 39th International Conference on Machine Learning, PMLR, pp. 23237-23253. 2022.

ABSTRACT

High-order interaction events are common in real-world applications. Learning embeddings that encode the complex relationships of the participants from these events is of great importance in knowledge mining and predictive tasks. Despite the success of existing approaches, e.g., Poisson tensor factorization, they ignore the sparse structure underlying the data, namely that the observed interactions are far fewer than the possible interactions among all the participants. In this paper, we propose Nonparametric Embeddings of Sparse High-order interaction events (NESH). We hybridize a sparse hypergraph (tensor) process and a matrix Gaussian process to capture both the asymptotic structural sparsity within the interactions and the nonlinear temporal relationships between the participants. We prove strong asymptotic bounds (including both a lower and an upper bound) on the sparsity ratio, which reveal the asymptotic properties of the sampled structure. We use batch normalization, the stick-breaking construction, and sparse variational GP approximations to develop an efficient, scalable model inference algorithm. We demonstrate the advantage of our approach in several real-world applications.
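
The inference machinery relies on the stick-breaking construction. For readers unfamiliar with it, a minimal sketch of the standard construction follows; it illustrates the generic technique, not the NESH model's specific prior.

```python
import numpy as np

def stick_breaking(alpha, k, rng):
    """First k weights of a stick-breaking construction:
    v_j ~ Beta(1, alpha), pi_j = v_j * prod_{i<j} (1 - v_i)."""
    v = rng.beta(1.0, alpha, size=k)
    remaining = np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
pi = stick_breaking(alpha=2.0, k=20, rng=rng)
print(pi[:5], pi.sum())  # weights decay; total < 1 (tail stays on the stick)
```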



Z. Wang, M. Dorier, P. Subedi, P.E. Davis, M. Parashar. “Adaptive elasticity policies for staging-based in situ visualization,” In Future Generation Computer Systems, 2022.
ISSN: 0167-739X
DOI: 10.1016/j.future.2022.12.010

ABSTRACT

In situ processing aims to alleviate the growing gap between computation and I/O capabilities by performing data processing close to the data source. In situ processing is widely used to process data generated by multiple data sources, including observational data from edge devices or scientific observational facilities and simulation data generated by scientific computation on a high-performance computing (HPC) platform. For a scientific workflow that runs on an HPC platform and is composed of a simulation program and an in situ data analytics or visualization (abbreviated as ana/vis) task, there is an implicit assumption that the computing resources assigned to the workflow remain static during workflow execution. However, with the converging trend between HPC and cloud computing platforms, running the in situ ana/vis task elastically is a promising way to decrease its overhead and improve its resource utilization rate. Resource elasticity is the ability to change resource configurations, such as the number of computing nodes or processes, during workflow execution. An elastic job may dynamically adjust resource configurations; it may use few resources at the beginning and more resources toward the end of the job, when interesting data appear. However, it is hard to predict a priori how many computing nodes or processes need to be added or removed during workflow execution to adapt to changing workflow needs. How to efficiently guide elasticity operations, such as growing or shrinking the number of processes used for in situ analysis during workflow execution, is an open research question. In this article, we present adaptive elasticity policies that use workflow runtime information collected during execution to predict when to trigger the addition or removal of processes in order to minimize in situ processing overhead. Taking in situ visualization tasks as an example, we integrate the presented elasticity policies into a staging-based elastic workflow and evaluate their efficiency in multiple elasticity scenarios. Compared with the case without elasticity, or with a static elasticity policy that uses a fixed number of processes for each rescaling operation, the adaptive elasticity policy reduces the overhead of finding a proper resource configuration and improves resource utilization efficiency. For example, one experiment illustrates that the adaptive elasticity policy saves 41% of core-hours compared with the case without resource elasticity.
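
To make the idea of an adaptive policy concrete, here is a hedged toy sketch of one plausible rescaling rule: resize the analysis processes so that the measured analysis time tracks the simulation step time, assuming near-linear scaling. The function, thresholds, and numbers are illustrative assumptions, not the paper's policies.

```python
# Illustrative rescaling rule (not the paper's policy): after each analysis
# step, compare the measured in situ analysis time against the simulation
# step time and resize the staging processes, assuming near-linear scaling.
def next_process_count(n_procs, t_analysis, t_sim_step,
                       n_min=1, n_max=64, slack=0.1):
    """Predicted process count that keeps analysis just under the
    simulation step time."""
    target = n_procs * t_analysis / t_sim_step   # linear-scaling estimate
    if t_analysis > t_sim_step:                  # analysis blocks the pipeline
        return min(n_max, round(target))
    if t_analysis < (1.0 - slack) * t_sim_step:  # processes sit idle
        return max(n_min, round(target))
    return n_procs                               # within the slack band

# 8 analysis processes, 12 s analysis per 10 s simulation step -> grow to 10.
print(next_process_count(8, t_analysis=12.0, t_sim_step=10.0))
```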



V. Zala, A. Narayan, R.M. Kirby. “Convex Optimization-Based Structure-Preserving Filter For Multidimensional Finite Element Simulations,” Subtitled “arXiv preprint arXiv:2203.09748,” 2022.

ABSTRACT

In simulation sciences, it is desirable to capture the real-world problem features as accurately as possible. Methods popular for scientific simulations, such as the finite element method (FEM) and the finite volume method (FVM), use piecewise polynomials to approximate various characteristics of a problem, such as the concentration profile and the temperature distribution across the domain. Polynomials are prone to creating artifacts such as Gibbs oscillations while capturing a complex profile. An efficient and accurate approach must be applied to deal with such inconsistencies in order to obtain accurate simulations. These inconsistencies include negative values for chemical concentrations, percentages exceeding 100, and similar nonphysical values. We consider these inconsistencies in the context of partial differential equations (PDEs). We propose an innovative filter based on convex optimization to deal with the inconsistencies observed in polynomial-based simulations. In two or three spatial dimensions, additional complexities are involved in solving the problems related to structure preservation. We present the construction and application of a structure-preserving filter with a focus on multidimensional PDEs. Techniques such as barycentric interpolation for polynomial evaluation at arbitrary points in the domain and an optimized root-finder to identify points of interest improve the filter's efficiency, usability, and robustness. Lastly, we present numerical experiments in 2D and 3D using a discontinuous Galerkin formulation and demonstrate the filter's efficacy in preserving the desired structure. As a real-world application …
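
A minimal sketch of the filter's central operation, under the assumption that it can be posed as a convex projection of nodal values onto bound and conservation constraints (the paper's actual formulation, constraints, and solver may differ):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical nodal values of a polynomial concentration field: some fall
# outside the physically meaningful range [0, 1].
x0 = np.array([1.08, 0.95, 0.40, -0.06, 0.22, 0.71])
mass = x0.sum()  # linear invariant to conserve (e.g., total mass)

# Convex projection: find the closest values (in the l2 sense) that satisfy
# the bounds while conserving the invariant.
res = minimize(
    lambda x: 0.5 * np.sum((x - x0) ** 2),
    x0=np.clip(x0, 0.0, 1.0),
    jac=lambda x: x - x0,
    bounds=[(0.0, 1.0)] * len(x0),
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - mass}],
    method="SLSQP",
)
print(res.x, res.x.sum())  # bounded values, same total as x0
```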



B. Zhang, P. Subedi, P. E. Davis, F. Rizzi, K. Teranishi, M. Parashar. “Assembling Portable In-Situ Workflow from Heterogeneous Components using Data Reorganization,” In 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid), pp. 41-50. 2022.
DOI: 10.1109/CCGrid54584.2022.00013

ABSTRACT

Heterogeneous computing is becoming common in the HPC world. The fast-changing hardware landscape is pushing programmers and developers to rely on performance-portable programming models to rewrite old and legacy applications and develop new ones. While this approach is suitable for individual applications, outstanding challenges still remain when multiple applications are combined into complex workflows. One critical difficulty is the exchange of data between communicating applications, where performance constraints imposed by heterogeneous hardware favor different data layouts. We attempt to solve this problem by exploring asynchronous data layout conversions for applications requiring different memory access patterns for shared data. We implement the proposed solution within the DataSpaces data staging service, extending it to support heterogeneous application workflows across a broad spectrum of programming models. In addition, we integrate heterogeneous DataSpaces with the Kokkos programming model and propose the Kokkos Staging Space as an extension of the Kokkos data abstraction. This new abstraction enables us to express data on a virtual shared space for multiple Kokkos applications, thus guaranteeing the portability of each application when assembling them into an efficient heterogeneous workflow. We present performance results for the Kokkos Staging Space using a synthetic workflow emulator and three different scenarios representing access frequency and use patterns in shared data. The results show that the Kokkos Staging Space is a superior solution in terms of time-to-solution and scalability compared to existing file-based Kokkos data abstractions for inter-application data exchange.
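
The layout-conversion idea can be illustrated independently of Kokkos: one application produces row-major data, another consumes column-major data, and the staging layer repacks asynchronously so that neither blocks. The Python sketch below is a stand-in for a LayoutRight/LayoutLeft conversion inside a staging service, not the actual DataSpaces implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# A producer writes row-major (C-order) data; a consumer (e.g., a GPU code)
# prefers column-major (Fortran-order) access.
field = np.arange(12, dtype=np.float64).reshape(3, 4)   # C-order from producer

def convert_layout(arr):
    """Repack into Fortran order: same logical values, different strides."""
    return np.asfortranarray(arr)

with ThreadPoolExecutor(max_workers=1) as staging:
    future = staging.submit(convert_layout, field)   # producer continues
    # ... producer computes its next step here ...
    consumer_view = future.result()                  # consumer's layout

print(consumer_view.flags["F_CONTIGUOUS"], np.array_equal(field, consumer_view))
```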



L. Zhou, M. Fan, C. Hansen, C. R. Johnson, D. Weiskopf. “A Review of Three-Dimensional Medical Image Visualization,” In Health Data Science, Vol. 2022, 2022.
DOI: 10.34133/2022/9840519

ABSTRACT

Importance. Medical images are essential for modern medicine and an important research subject in visualization. However, medical experts are often not aware of the many advanced three-dimensional (3D) medical image visualization techniques that could increase their capabilities in data analysis and assist the decision-making process for specific medical problems. Our paper provides a review of 3D visualization techniques for medical images, intending to bridge the gap between medical experts and visualization researchers. Highlights. Fundamental visualization techniques are revisited for various medical imaging modalities, from computed tomography to diffusion tensor imaging, featuring techniques that enhance spatial perception, which is critical for medical practice. The state of the art of medical visualization is reviewed based on a procedure-oriented classification of medical problems for studies of individuals and populations. This paper summarizes free software tools for different modalities of medical images designed for various purposes, including visualization, analysis, and segmentation, and it provides the respective Internet links. Conclusions. Visualization techniques are a useful tool for medical experts to tackle specific medical problems in their daily work. Our review provides a quick reference to such techniques given the medical problem and modalities of associated medical images. We summarize fundamental techniques and readily available visualization tools to help medical experts better understand and utilize medical imaging data. This paper could contribute to the joint effort of the medical and visualization communities to advance precision medicine.


2021


P. Agrawal, R. T. Whitaker, S. Y. Elhabian. “Learning Deep Features for Shape Correspondence with Domain Invariance,” Subtitled “arXiv preprint arXiv:2102.10493,” 2021.

ABSTRACT

Correspondence-based shape models are key to various medical imaging applications that rely on a statistical analysis of anatomies. Such shape models are expected to represent consistent anatomical features across the population for population-specific shape statistics. Early approaches for correspondence placement rely on nearest neighbor search for simpler anatomies. Coordinate transformations for shape correspondence hold promise to address the increasing anatomical complexities. Nonetheless, due to the inherent shape-level geometric complexity and population-level shape variation, the coordinate-wise correspondence often does not translate to the anatomical correspondence. An alternative, group-wise approach for correspondence placement explicitly models the trade-off between geometric description and the population's statistical compactness. However, these models achieve limited success in resolving nonlinear shape correspondence. Recent works have addressed this limitation by adopting an application-specific notion of correspondence through lifting positional data to a higher dimensional feature space. However, they heavily rely on manual expertise to create domain-specific features and consistent landmarks. This paper proposes an automated feature learning approach, using deep convolutional neural networks to extract correspondence-friendly features from shape ensembles. Further, an unsupervised domain adaptation scheme is introduced to augment the pretrained geometric features with new anatomies. Results on anatomical datasets of human scapula, femur, and pelvis bones demonstrate that …



T. M. Athawale, B. J. Stanislawski, S. Sane, C. R. Johnson. “Visualizing Interactions Between Solar Photovoltaic Farms and the Atmospheric Boundary Layer,” In Twelfth ACM International Conference on Future Energy Systems, pp. 377--381. 2021.

ABSTRACT

The efficiency of solar panels depends on the operating temperature. As the panel temperature rises, efficiency drops. Thus, the solar energy community aims to understand the factors that influence the operating temperature, which include wind speed, wind direction, turbulence, ambient temperature, mounting configuration, and solar cell material. We use high-resolution numerical simulations to model the flow and thermal behavior of idealized solar farms. Because these simulations model such complex behavior, advanced visualization techniques are needed to investigate and understand the results. Here, we present advanced 3D visualizations of numerical simulation results to illustrate the flow and heat transport in an idealized solar farm. The findings can be used to understand how flow behavior influences module temperatures, and vice versa.



T. M. Athawale, S. Sane, C. R. Johnson. “Uncertainty Visualization of the Marching Squares and Marching Cubes Topology Cases,” Subtitled “arXiv:2108.03066,” 2021.

ABSTRACT

Marching squares (MS) and marching cubes (MC) are widely used algorithms for level-set visualization of scientific data. In this paper, we address the challenge of uncertainty visualization of the topology cases of the MS and MC algorithms for uncertain scalar field data sampled on a uniform grid. The visualization of the MS and MC topology cases for uncertain data is challenging due to their exponential nature and the possibility of multiple topology cases per cell of a grid. We propose the topology case count and entropy-based techniques for quantifying uncertainty in the topology cases of the MS and MC algorithms when noise in data is modeled with probability distributions. We demonstrate the applicability of our techniques for independent and correlated uncertainty assumptions. We visualize the quantified topological uncertainty via color mapping proportional to uncertainty, as well as with interactive probability queries in the MS case and entropy isosurfaces in the MC case. We demonstrate the utility of our uncertainty quantification framework in identifying the isovalues exhibiting relatively high topological uncertainty. We illustrate the effectiveness of our techniques via results on synthetic, simulation, and hixel datasets.
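
A small Monte Carlo sketch conveys the entropy-based technique for a single 2D cell. Independent Gaussian noise per corner is assumed for illustration only (the paper also treats correlated noise models):

```python
import numpy as np

# Uncertain cell: each of the 4 corner scalars is modeled as an independent
# Gaussian. Sample the corners, map each sample to its marching-squares case
# (a 4-bit sign pattern), and measure the entropy of the case distribution:
# 0 bits means the cell's topology is certain.
rng = np.random.default_rng(0)
means = np.array([0.2, -0.1, 0.05, -0.3])
stds = np.full(4, 0.25)
samples = rng.normal(means, stds, size=(100_000, 4))

isovalue = 0.0
cases = (samples > isovalue).astype(int) @ np.array([1, 2, 4, 8])
probs = np.bincount(cases, minlength=16) / len(cases)
nonzero = probs[probs > 0]
entropy = -np.sum(nonzero * np.log2(nonzero))
print(f"per-cell topology entropy: {entropy:.3f} bits (max 4)")
```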



T. M. Athawale, B. Ma, E. Sakhaee, C. R. Johnson, A. Entezari. “Direct Volume Rendering with Nonparametric Models of Uncertainty,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 27, No. 2, pp. 1797-1807. 2021.
DOI: 10.1109/TVCG.2020.3030394

ABSTRACT

We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending the DVR framework for nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art rendering parametric framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.
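
The quantile interpolation step can be sketched directly: interpolate two sample-based distributions through their quantile functions. This toy version uses Monte Carlo samples and numpy quantiles as illustrative assumptions; the paper derives the interpolated distributions in closed form rather than by sampling.

```python
import numpy as np

# Two nonparametric intensity distributions at neighboring samples along a
# viewing ray, each represented by draws (illustrative data only).
rng = np.random.default_rng(0)
left = rng.normal(0.3, 0.05, size=2000)
right = rng.gamma(4.0, 0.1, size=2000)

def quantile_interpolate(a, b, t, n_q=256):
    """Interpolate distributions via their quantile functions: the result's
    q-th quantile is the linear blend of the inputs' q-th quantiles."""
    q = np.linspace(0.0, 1.0, n_q)
    return (1.0 - t) * np.quantile(a, q) + t * np.quantile(b, q)

# Distribution of the ray-sample intensity 30% of the way from left to right.
mid = quantile_interpolate(left, right, t=0.3)
print(mid.mean(), np.quantile(mid, [0.05, 0.5, 0.95]))
```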



P. R. Atkins, P. Agrawal, J. D. Mozingo, K. Uemura, K. Tokunaga, C. L. Peters, S. Y. Elhabian, R. T. Whitaker, A. E. Anderson. “Prediction of Femoral Head Coverage from Articulated Statistical Shape Models of Patients with Developmental Dysplasia of the Hip,” In Journal of Orthopaedic Research, Wiley, 2021.
DOI: 10.1002/jor.25227

ABSTRACT

Developmental dysplasia of the hip (DDH) is commonly described as reduced femoral head coverage due to anterolateral acetabular deficiency. Although reduced coverage is the defining trait of DDH, more subtle and localized anatomic features of the joint are also thought to contribute to symptom development and degeneration. These features are challenging to identify using conventional approaches. Herein, we assessed the morphology of the full femur and hemi-pelvis using an articulated statistical shape model (SSM). The model determined the morphological and pose-based variations associated with DDH in a population of Japanese females and established which of these variations predict coverage. Computed tomography images of 83 hips from 47 patients were segmented for input into a correspondence-based SSM. The dominant modes of variation in the model initially represented scale and pose. After removal of these factors through individual bone alignment, femoral version and neck-shaft angle, pelvic curvature, and acetabular version dominated the observed variation. Femoral head oblateness and prominence of the acetabular rim and various muscle attachment sites of the femur and hemi-pelvis were found to predict 3D CT-based coverage measurements (R² = 0.5-0.7 for the full bones, R² = 0.9 for the joint).



A. Bagherinezhad, M. Young, B. Wang, M. Parvania. “Spatio-Temporal Visualization of Interdependent Battery Bus Transit and Power Distribution Systems,” In IEEE PES Innovative Smart Grid Technologies Conference (ISGT), IEEE, 2021.

ABSTRACT

The high penetration of transportation electrification and its associated charging requirements magnify the interdependency of the transportation and power distribution systems. The emergent interdependency requires that system operators fully understand the status of both systems. To this end, a visualization tool is presented to illustrate the interdependency of battery bus transit and power distribution systems and the associated components. The tool aims at monitoring components from both systems, such as the locations of electric buses, the state of charge of batteries, the price of electricity, voltage, current, and active/reactive power flow. The results showcase the success of the visualization tool in monitoring the bus transit and power distribution components to determine a reliable, cost-effective scheme for spatio-temporal charging of electric buses.



M. K. Ballard, R. Amici, V. Shankar, L. A. Ferguson, M. Braginsky, R. M. Kirby. “Towards an Extrinsic, CG-XFEM Approach Based on Hierarchical Enrichments for Modeling Progressive Fracture,” Subtitled “arXiv preprint arXiv:2104.14704,” 2021.

ABSTRACT

We propose an extrinsic, continuous-Galerkin (CG), extended finite element method (XFEM) that generalizes the work of Hansbo and Hansbo to allow multiple Heaviside enrichments within a single element in a hierarchical manner. This approach enables complex, evolving XFEM surfaces in 3D that cannot be captured using existing CG-XFEM approaches. We describe an implementation of the method for 3D static elasticity with linearized strain for modeling open cracks as a salient step towards modeling progressive fracture. The implementation includes a description of the finite element model, hybrid implicit/explicit representation of enrichments, numerical integration method, and novel degree-of-freedom (DoF) enumeration algorithm. This algorithm supports an arbitrary number of enrichments within an element, while simultaneously maintaining a CG solution across elements. Additionally, our approach easily allows an implementation suitable for distributed computing systems. Enabled by the DoF enumeration algorithm, the proposed method lays the groundwork for a computational tool that efficiently models progressive fracture. To facilitate a discussion of the complex enrichment hierarchies, we develop enrichment diagrams to succinctly describe and visualize the relationships between the enrichments (and the fields they create) within an element. This also provides a unified language for discussing extrinsic XFEM methods in the literature. We compare several methods, relying on the enrichment diagrams to highlight their nuanced differences.



D. Balouek-Thomert, I. Rodero, M. Parashar. “Evaluating policy-driven adaptation on the Edge-to-Cloud Continuum,” In IEEE/ACM HPC for Urgent Decision Making (UrgentHPC), pp. 11-20. 2021.
DOI: 10.1109/UrgentHPC54802.2021.00007

ABSTRACT

Developing data-driven applications requires developers and service providers to orchestrate data-to-discovery pipelines across distributed data sources and computing units. Realizing such pipelines poses two major challenges: programming analytics that react at runtime to unforeseen events, and adapting the resources and computing paths between the edge and the cloud. While these concerns are interdependent, they must be separated during the design process of the application and the deployment operations of the infrastructure. This work proposes a system stack for the adaptation of distributed analytics across the computing continuum. We implemented this software stack to evaluate its ability to continually balance the cost of computation or data movement against the value of operations to the application objectives. Using a disaster response application, we observe that the system can select appropriate configurations while managing trade-offs between user-defined constraints, quality of results, and resource utilization. The evaluation shows that our model is able to adapt to variations in the data input size, bandwidth, and CPU capacities with minimal deadline violations (close to 10%). These are encouraging results for facilitating the creation of ad hoc computing paths for urgent science and time-critical decision-making.
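
One way to picture policy-driven selection of a computing path is as a cost/deadline trade-off over candidate configurations. The sketch below is a deliberately simplified stand-in with invented names and numbers, not the evaluated system stack:

```python
# Illustrative selection of a computing path: pick the cheapest edge/cloud
# configuration whose predicted completion time meets the deadline; fall back
# to the fastest option (minimizing the violation) if none does.
configs = [
    {"name": "edge-only",  "pred_time_s": 42.0, "cost": 1.0},
    {"name": "edge+cloud", "pred_time_s": 18.0, "cost": 3.5},
    {"name": "cloud-only", "pred_time_s": 11.0, "cost": 6.0},
]

def choose_config(configs, deadline_s):
    feasible = [c for c in configs if c["pred_time_s"] <= deadline_s]
    if feasible:
        return min(feasible, key=lambda c: c["cost"])
    return min(configs, key=lambda c: c["pred_time_s"])

print(choose_config(configs, deadline_s=20.0)["name"])  # -> edge+cloud
```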



D. Balouek-Thomert, P. Silva, K. Fauvel, A. Costan, G. Antoniu, M. Parashar. “MDSC: Modelling Distributed Stream Processing across the Edge-to-Cloud Continuum,” In DML-ICC 2021 workshop (held in conjunction with UCC 2021), December, 2021.

ABSTRACT

The growth of the Internet of Things is resulting in an explosion of data volumes at the Edge of the Internet. To reduce costs incurred due to data movement and centralized cloud-based processing, it is becoming increasingly important to process and analyze such data closer to the data sources. Exploiting Edge computing capabilities for stream-based processing is, however, challenging. It requires addressing the complex characteristics and constraints imposed by all the resources along the data path, as well as the large set of heterogeneous data processing and management frameworks. Consequently, the community needs tools that can facilitate the modeling of this complexity and can integrate the various components involved. In this work, we introduce MDSC, a hierarchical approach for modeling distributed stream-based applications on Edge-to-Cloud continuum infrastructures. We demonstrate how MDSC can be applied to a concrete, real-life ML-based application, early earthquake warning, to help answer questions such as: when is it worth decentralizing the classification load from the Cloud to the Edge, and how?



M. Berzins. “Symplectic Time Integration Methods for the Material Point Method, Experiments, Analysis and Order Reduction,” In WCCM-ECCOMAS2020 virtual Conference, Note: Minor typographical correction in March 2024, January, 2021.

ABSTRACT

The provision of appropriate time integration methods for the Material Point Method (MPM) involves considering stability, accuracy, and energy conservation. A class of methods that addresses many of these issues is the widely used symplectic time integration methods. Such methods have good conservation properties and the potential to achieve high accuracy. In this work we build on the work in [5] and consider high-order methods for the time integration of the Material Point Method. The results of practical experiments show that while high-order methods in both space and time have good accuracy initially, unless the problem has relatively little particle movement, the accuracy of the methods at later times is closer to that of low-order methods. A theoretical analysis explains these results as being similar to the stage error found in Runge-Kutta methods, though in this case the stage error arises from the MPM differentiations and interpolations from particles to the grid and back again, particularly in cases where there are many grid crossings.
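
The conservation property that motivates symplectic integrators is easy to demonstrate on a harmonic oscillator: symplectic Euler keeps the energy bounded, while explicit Euler's energy grows without bound. This is a standard textbook example of the class of methods discussed, not the paper's MPM experiments.

```python
# Harmonic oscillator with k = m = 1, exact energy 0.5. Symplectic Euler
# updates velocity first and then position with the *new* velocity; explicit
# Euler uses the old velocity. Only the symplectic variant stays bounded.
dt, steps, k, m = 0.01, 50_000, 1.0, 1.0

def integrate(symplectic):
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -(k / m) * x
        if symplectic:
            v += dt * a
            x += dt * v          # uses the updated velocity
        else:
            x_new = x + dt * v   # uses the old velocity
            v += dt * a
            x = x_new
    return 0.5 * m * v**2 + 0.5 * k * x**2  # total energy after integration

print("symplectic Euler energy:", integrate(True))   # stays near 0.5
print("explicit Euler energy: ", integrate(False))   # grows steadily
```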



J. A. Bergquist, W. W. Good, B. Zenger, J. D. Tate, L. C. Rupp, R. S. MacLeod. “The Electrocardiographic Forward Problem: A Benchmark Study,” In Computers in Biology and Medicine, Vol. 134, Pergamon, pp. 104476. 2021.
DOI: 10.1016/j.compbiomed.2021.104476

ABSTRACT

Background
Electrocardiographic forward problems are crucial components for noninvasive electrocardiographic imaging (ECGI) that compute torso potentials from cardiac source measurements. Forward problems have few sources of error as they are physically well posed and supported by mature numerical and computational techniques. However, the residual errors reported from experimental validation studies between forward computed and measured torso signals remain surprisingly high.

Objective
To test the hypothesis that incomplete cardiac source sampling, especially above the atrioventricular (AV) plane is a major contributor to forward solution errors.

Methods
We used a modified Langendorff preparation suspended in a human-shaped electrolytic torso-tank and a novel pericardiac-cage recording array to thoroughly sample the cardiac potentials. With this carefully controlled experimental preparation, we minimized possible sources of error, including geometric error and torso inhomogeneities. We progressively removed recorded signals from above the atrioventricular plane to determine how the forward-computed torso-tank potentials were affected by incomplete source sampling.

Results
We studied 240 beats total recorded from three different activation sequence types (sinus, and posterior and anterior left-ventricular free-wall pacing) in each of two experiments. With complete sampling by the cage electrodes, all correlation metrics between computed and measured torso-tank potentials were above 0.93 (maximum 0.99). The mean root-mean-squared error across all beat types was also low, less than or equal to 0.10 mV. A precipitous drop in forward solution accuracy was observed when we included only cage measurements below the AV plane.

Conclusion
First, our forward computed potentials using complete cardiac source measurements set a benchmark for similar studies. Second, this study validates the importance of complete cardiac source sampling above the AV plane to produce accurate forward computed torso potentials. Testing ECGI systems and techniques with these more complete and highly accurate datasets will improve inverse techniques and noninvasive detection of cardiac electrical abnormalities.
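
As a schematic of the forward computation and the validation metrics reported above, the sketch below uses a random stand-in transfer matrix and synthetic potentials (all shapes and values are invented); a real study would derive the transfer matrix from the torso-tank geometry, e.g., with a boundary element model.

```python
import numpy as np

# Schematic forward problem: torso potentials are a linear function of
# cardiac source potentials through a transfer matrix A. Here A is a random
# stand-in, not a physically derived operator.
rng = np.random.default_rng(0)
n_torso, n_cage = 192, 256
A = rng.normal(size=(n_torso, n_cage)) / np.sqrt(n_cage)
heart = rng.normal(size=n_cage)                          # cage potentials
measured = A @ heart + 0.01 * rng.normal(size=n_torso)   # "recorded" torso data

# Forward solution from complete source sampling, scored with the metrics
# used in the study: correlation and root-mean-squared error.
computed = A @ heart
corr = np.corrcoef(computed, measured)[0, 1]
rmse = np.sqrt(np.mean((computed - measured) ** 2))
print(f"complete sampling: corr={corr:.3f}, RMSE={rmse:.3f}")

# Incomplete sampling (cf. dropping electrodes above the AV plane):
# recompute using only half of the source channels; accuracy degrades.
partial = A[:, : n_cage // 2] @ heart[: n_cage // 2]
print("partial sampling corr:", round(np.corrcoef(partial, measured)[0, 1], 3))
```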