The NIH/NIGMS
Center for Integrative Biomedical Computing

SCI Publications

2023


J. A. Bergquist, B. Zenger, L. Rupp, A. Busatto, J. D. Tate, D. H. Brooks, A. Narayan, R. MacLeod. “Uncertainty quantification of the effect of cardiac position variability in the inverse problem of electrocardiographic imaging,” In Physiological Measurement, IOP Publishing, 2023.
DOI: 10.1088/1361-6579/acfc32

ABSTRACT

Objective:
Electrocardiographic imaging (ECGI) is a functional imaging modality that consists of two related problems, the forward problem of reconstructing body surface electrical signals given cardiac bioelectric activity, and the inverse problem of reconstructing cardiac bioelectric activity given measured body surface signals. ECGI relies on a model for how the heart generates bioelectric signals which is subject to variability in inputs. The study of how uncertainty in model inputs affects the model output is known as uncertainty quantification (UQ). This study establishes, develops, and characterizes the application of UQ to ECGI.

Approach:
We establish two formulations for applying UQ to ECGI: a polynomial chaos expansion (PCE)-based parametric UQ formulation (PCE-UQ formulation), and a novel UQ-aware inverse formulation which leverages our previously established "joint-inverse" formulation (UQ joint-inverse formulation). We apply these to evaluate the effect of uncertainty in the heart position on the ECGI solutions across a range of ECGI datasets.

Main Results:
We demonstrated the ability of our UQ-ECGI formulations to characterize the effect of parameter uncertainty on the ECGI inverse problem. We found that while the PCE-UQ inverse solution provided more complex outputs such as sensitivities and standard deviation, the UQ joint-inverse solution provided a more interpretable output in the form of a single ECGI solution. Between these two methods, we were able to assess a wide range of effects that heart position variability has on the ECGI solution.

Significance:
This study, for the first time, characterizes in detail the application of UQ to the ECGI inverse problem. We demonstrated how UQ can provide insight into the behavior of ECGI using variability in cardiac position as a test case. This study lays the groundwork for future development of UQ-ECGI studies, as well as future development of ECGI formulations which are robust to input parameter variability.



R. Kamali, E. Kwan, M. Regouski, T.J. Bunch, D.J. Dosdall, E. Hsu, R. S. MacLeod, I. Polejaeva, R. Ranjan. “Contribution of atrial myofiber architecture to atrial fibrillation,” In PLOS ONE, Vol. 18, No. 1, Public Library of Science, pp. 1--16. Jan, 2023.
DOI: 10.1371/journal.pone.0279974

ABSTRACT

Background

The role of fiber orientation on a global chamber level in sustaining atrial fibrillation (AF) is unknown. The goal of this study was to correlate the fiber direction derived from Diffusion Tensor Imaging (DTI) with AF inducibility.

Methods

Transgenic goats with cardiac-specific overexpression of constitutively active TGF-β1 (n = 14) underwent AF inducibility testing by rapid pacing in the left atrium. We chose a minimum of 10 minutes of sustained AF as a cut-off for AF inducibility. Explanted hearts underwent DTI to determine the fiber direction. Using tractography data, we clustered, visualized, and quantified the fiber helix angles in 8 different regions of the left atrial wall using two reference vectors defined based on anatomical landmarks.
Results

Sustained AF was induced in 7 out of 14 goats. The mean helix fiber angles in 7 out of 8 selected regions differed significantly (P-value < 0.05) between the AF-inducible and non-inducible groups. The average fractional anisotropy (FA) and the mean diffusivity (MD) were similar in the two groups, with FA of 0.32±0.08 and MD of 8.54±1.72 mm²/s in the non-inducible group and FA of 0.31±0.05 (P-value = 0.90) and MD of 8.68±1.60 mm²/s (P-value = 0.88) in the inducible group.
Conclusions

DTI based fiber direction shows significant variability across subjects with a significant difference between animals that are AF inducible versus animals that are not inducible. Fiber direction might be contributing to the initiation and sustaining of AF, and its role needs to be investigated further.



B.A. Orkild, J.A. Bergquist, E.N. Paccione, M. Lange, E. Kwan, B. Hunt, R. MacLeod, A. Narayan, R. Ranjan. “A Grid Search of Fibrosis Thresholds for Uncertainty Quantification in Atrial Flutter Simulations,” In Computing in Cardiology, 2023.

ABSTRACT

Atypical Atrial Flutter (AAF) is the most common cardiac arrhythmia to develop following catheter ablation for atrial fibrillation. Patient-specific computational simulations of propagation have shown promise in prospectively predicting AAF reentrant circuits and providing useful insight to guide successful ablation procedures. These patient-specific models require a large number of inputs, each with an unknown amount of uncertainty. Uncertainty quantification (UQ) is a technique to assess how variability in a set of input parameters can affect the output of a model. However, modern UQ techniques, such as polynomial chaos expansion, require a well-defined output to map to the inputs. In this study, we aimed to explore the sensitivity of simulated reentry to the selection of fibrosis threshold in patient-specific AAF models. We utilized the image intensity ratio (IIR) method to set the fibrosis threshold in the LGE-MRI from a single patient with prior ablation. We found that the majority of changes to the duration of reentry occurred within an IIR range of 1.01 to 1.39, and that there was a large amount of variability in the resulting arrhythmia. This study serves as a starting point for future UQ studies to investigate the nonlinear relationship between fibrosis threshold and the resulting arrhythmia in AAF models.
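The IIR thresholding step at the center of this grid search is simple to sketch. The following minimal Python example is illustrative only (the function name, synthetic intensities, and blood-pool value are ours, not the study's code): wall-voxel intensities are normalized by the mean blood-pool intensity, and voxels at or above a threshold are labeled fibrotic, swept over the IIR range reported above.

```python
import numpy as np

def fibrosis_mask(lge_intensities, blood_pool_mean, iir_threshold):
    """Classify voxels as fibrotic via the image intensity ratio (IIR):
    voxel intensity divided by the mean blood-pool intensity.
    Voxels with IIR at or above the threshold are labeled fibrotic."""
    iir = np.asarray(lge_intensities, dtype=float) / blood_pool_mean
    return iir >= iir_threshold

# Sweep a grid of thresholds across the range of interest.
rng = np.random.default_rng(0)
wall = rng.normal(loc=100.0, scale=25.0, size=5000)  # synthetic LGE intensities
blood_pool = 80.0                                    # synthetic blood-pool mean
for thr in np.linspace(1.01, 1.39, 5):
    frac = fibrosis_mask(wall, blood_pool, thr).mean()
    print(f"IIR >= {thr:.2f}: {100 * frac:.1f}% of wall labeled fibrotic")
```

Each resulting mask would then seed a separate propagation simulation, which is what makes the fibrosis threshold a natural (if strongly nonlinear) UQ input parameter.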


2022


E.E. Anstadt, W. Tao, E. Guo, L. Dvoracek, M.K. Bruce, P.J. Grosse, L. Wang, L. Kavan, R. Whitaker, J.A. Goldstein. “Quantifying the Severity of Metopic Craniosynostosis Using Unsupervised Machine Learning,” In Plastic and Reconstructive Surgery, November, 2022.

ABSTRACT

Background: 

Quantifying the severity of head shape deformity and establishing a threshold for operative intervention remains challenging in patients with Metopic Craniosynostosis (MCS). This study combines 3D skull shape analysis with an unsupervised machine-learning algorithm to generate a quantitative shape severity score (CMD) and provide an operative threshold score.

Methods: 

Head computed tomography (CT) scans from subjects with MCS and normal controls (age 5-15 months) were used for objective 3D shape analysis using ShapeWorks software and in a survey for craniofacial surgeons to rate head-shape deformity and report whether they would offer surgical correction based on head shape alone. An unsupervised machine-learning algorithm was developed to quantify the degree of shape abnormality of MCS skulls compared to controls.

Results: 

124 CTs were used to develop the model; 50 (24% MCS, 76% controls) were rated by 36 craniofacial surgeons, with an average of 20.8 ratings per skull. The interrater reliability was high (ICC=0.988). The algorithm performed accurately and correlated closely with the surgeons' assigned severity ratings (Spearman's correlation coefficient r=0.817). The median CMD for affected skulls was 155.0 (IQR 136.4-194.6, maximum 231.3). Skulls with ratings ≥150.2 were highly likely to be offered surgery by the experts in this study.

Conclusions: 

This study describes a novel metric to quantify the head shape deformity associated with metopic craniosynostosis and contextualizes the results using clinical assessments of head shapes by craniofacial experts. This metric may be useful in supporting clinical decision making around operative intervention as well as in describing outcomes and comparing patient populations across centers.



T. M. Athawale, D. Maljovec, L. Yan, C. R. Johnson, V. Pascucci, B. Wang. “Uncertainty Visualization of 2D Morse Complex Ensembles Using Statistical Summary Maps,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 28, No. 4, pp. 1955-1966. April, 2022.
ISSN: 1077-2626
DOI: 10.1109/TVCG.2020.3022359

ABSTRACT

Morse complexes are gradient-based topological descriptors with close connections to Morse theory. They are widely applicable in scientific visualization as they serve as important abstractions for gaining insights into the topology of scalar fields. Data uncertainty inherent to scalar fields due to randomness in their acquisition and processing, however, limits our understanding of Morse complexes as structural abstractions. We, therefore, explore uncertainty visualization of an ensemble of 2D Morse complexes that arises from scalar fields coupled with data uncertainty. We propose several statistical summary maps as new entities for quantifying structural variations and visualizing positional uncertainties of Morse complexes in ensembles. Specifically, we introduce three types of statistical summary maps – the probabilistic map, the significance map, and the survival map – to characterize the uncertain behaviors of gradient flows. We demonstrate the utility of our proposed approach using wind, flow, and ocean eddy simulation datasets.
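A much-simplified version of the probabilistic-map idea can be sketched in a few lines of NumPy (this is our toy reduction, not the paper's algorithm): label every grid cell by the local minimum its steepest-descent path reaches, then record, per cell, how often the ensemble agrees with the modal label.

```python
import numpy as np

def basin_label(field):
    """Assign each cell of a 2D scalar field to the local minimum reached
    by repeated steepest descent over the 8-neighborhood."""
    rows, cols = field.shape
    labels = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            cr, cc = r, c
            while True:
                best = (cr, cc)
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = cr + dr, cc + dc
                        if 0 <= nr < rows and 0 <= nc < cols and \
                           field[nr, nc] < field[best]:
                            best = (nr, nc)
                if best == (cr, cc):
                    break  # reached a local minimum
                cr, cc = best
            labels[r, c] = cr * cols + cc  # flattened index of the minimum
    return labels

def probabilistic_map(ensemble):
    """Per cell: the fraction of ensemble members whose basin label agrees
    with the most common (modal) label at that cell."""
    stacked = np.stack([basin_label(f) for f in ensemble])
    n = stacked.shape[0]
    prob = np.empty(stacked.shape[1:])
    for idx in np.ndindex(*stacked.shape[1:]):
        _, counts = np.unique(stacked[(slice(None),) + idx], return_counts=True)
        prob[idx] = counts.max() / n
    return prob
```

Cells deep inside a stable basin score 1.0 across the ensemble, while cells near an uncertain separatrix score lower; the paper's maps build far richer summaries on top of this kind of per-cell agreement.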



W. Bangerth, C. R. Johnson, D. K. Njeru, B. van Bloemen Waanders. “Estimating and using information in inverse problems,” Subtitled “arXiv:2208.09095,” 2022.

ABSTRACT

For inverse problems one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places.

Although referenced in many publications, the "information" that is invoked in such contexts is not a well-understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate the usefulness in one of these areas -- how to choose the discretization mesh for the function to be reconstructed -- using numerical experiments.
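The variance-based idea can be made concrete in the familiar linear-Gaussian setting (the notation below is generic, chosen for illustration, and is not the paper's): the posterior covariance is available in closed form, and a pointwise information density can be read off as the reciprocal posterior variance of each degree of freedom.

```latex
% Data y, forward operator A, parameters m, noise and prior covariances:
y = A m + e, \qquad e \sim \mathcal{N}(0, \Sigma), \qquad m \sim \mathcal{N}(m_0, C_0),
\qquad
C_{\mathrm{post}} = \bigl(A^{\top} \Sigma^{-1} A + C_0^{-1}\bigr)^{-1}.
% One natural "information density" at the k-th degree of freedom is the
% reciprocal posterior variance:
i_k \;\propto\; \frac{1}{\operatorname{Var}[m_k \mid y]}
      \;=\; \frac{1}{\bigl(C_{\mathrm{post}}\bigr)_{kk}}.
```

Degrees of freedom near detectors shrink their posterior variance the most, which matches the informal intuition quoted above and motivates using this density to drive mesh refinement.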



J. A. Bergquist, J. Coll-Font, B. Zenger, L. C. Rupp, W. W. Good, D. H. Brooks, R. S. MacLeod. “Reconstruction of cardiac position using body surface potentials,” In Computers in Biology and Medicine, Vol. 142, pp. 105174. 2022.
DOI: 10.1016/j.compbiomed.2021.105174

ABSTRACT

Electrocardiographic imaging (ECGI) is a noninvasive technique to assess the bioelectric activity of the heart which has been applied to aid in clinical diagnosis and management of cardiac dysfunction. ECGI is built on mathematical models that take into account several patient specific factors including the position of the heart within the torso. Errors in the localization of the heart within the torso, as might arise due to natural changes in heart position from respiration or changes in body position, contribute to errors in ECGI reconstructions of the cardiac activity, thereby reducing the clinical utility of ECGI. In this study we present a novel method for the reconstruction of cardiac geometry utilizing noninvasively acquired body surface potential measurements. Our geometric correction method simultaneously estimates the cardiac position over a series of heartbeats by leveraging an iterative approach which alternates between estimating the cardiac bioelectric source across all heartbeats and then estimating cardiac positions for each heartbeat. We demonstrate that our geometric correction method is able to reduce geometric error and improve ECGI accuracy in a wide range of testing scenarios. We examine the performance of our geometric correction method using different activation sequences, ranges of cardiac motion, and body surface electrode configurations. We find that after geometric correction resulting ECGI solution accuracy is improved and variability of the ECGI solutions between heartbeats is substantially reduced.
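The alternating structure of the geometric correction can be illustrated with a deliberately tiny toy problem (everything below — the scalar "position" parameter, the forward model, and all names — is our illustration, not the study's 3D pipeline): fix the per-beat positions and solve jointly for the shared source, then fix the source and re-estimate each beat's position by a one-dimensional search.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(12, 6))             # fixed part of the transfer matrix
x_true = rng.normal(size=6)              # shared cardiac source (toy)
theta_true = np.array([0.2, 0.5, 0.8])   # per-beat "position" parameter

def forward(theta):
    # Toy forward operator: amplitude falls off with a distance-like parameter.
    return M / (1.0 + theta) ** 2

Y = np.stack([forward(t) @ x_true for t in theta_true])  # one row per beat

theta = np.zeros(3)                      # initial position guesses
grid = np.linspace(0.0, 1.0, 201)        # candidate positions
for _ in range(20):
    # 1) Stack per-beat operators and solve jointly for the shared source.
    A = np.vstack([forward(t) for t in theta])
    x, *_ = np.linalg.lstsq(A, Y.reshape(-1), rcond=None)
    # 2) Re-estimate each beat's position by one-dimensional grid search.
    theta = np.array([
        grid[np.argmin([np.linalg.norm(forward(g) @ x - yb) for g in grid])]
        for yb in Y
    ])
```

In this toy only relative positions are identifiable (a global scale trades off against the source amplitude), so after convergence the reconstructed positions fit the data exactly while sharing a common offset; the study's full formulation works with actual cardiac geometry rather than a scalar parameter.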



J.A. Bergquist, L.C. Rupp, A. Busatto, B. Orkild, B. Zenger, W. Good, J. Coll-Font, A. Narayan, J. Tate, D. Brooks, R.S. MacLeod. “Heart Position Uncertainty Quantification in the Inverse Problem of ECGI,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Electrocardiographic imaging (ECGI) is a clinical and research tool for noninvasive diagnosis of cardiac electrical dysfunction. The position of the heart within the torso is both an input and a common source of error in ECGI. Many studies have sought to improve cardiac localization accuracy; however, few have quantitatively examined the effects of uncertainty in the position of the heart within the torso. Recently developed uncertainty quantification (UQ) tools enable the robust application of UQ to ECGI reconstructions. In this study, we developed an ECGI formulation which, for the first time, directly incorporates uncertainty in the heart position. The result is an ECGI solution that is robust to variation in heart position. Using data from two Langendorff experimental preparations, each with 120 heartbeats distributed across three activation sequences, we found that as heart position uncertainty increased above ±10 mm, the quality of the ECGI solution degraded. However, even at large heart position uncertainty (±40 mm) our novel UQ-ECGI formulation produced reasonable solutions (root mean squared error < 1 mV, spatial correlation > 0.6, temporal correlation > 0.75).



A. Busatto, J.A. Bergquist, L.C. Rupp, B. Zenger, R.S. MacLeod. “Unexpected Errors in the Electrocardiographic Forward Problem,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Previous studies have compared recorded torso potentials with electrocardiographic forward solutions from a pericardial cage. In this study, we introduce new comparisons of the forward solutions from the sock and cage with each other and with respect to the measured potentials on the torso. The forward problem of electrocardiographic imaging is expected to achieve high levels of accuracy since it is mathematically well posed. However, unexpectedly high residual errors remain between the computed and measured torso signals in experiments. A possible source of these errors is the limited spatial coverage of the cardiac sources in most experiments; most capture potentials only from the ventricles. To resolve the relationship between spatial coverage and the accuracy of the forward simulations, we combined two methods of capturing cardiac potentials using a 240-electrode sock and a 256-electrode cage, both surrounding a heart suspended in a 192-electrode torso tank. We analyzed beats from three pacing sites and calculated the RMSE, spatial correlation, and temporal correlation. We found that the forward solutions using the sock as the cardiac source were poorer compared to those obtained from the cage. In this study, we explore the differences in forward solution accuracy using the sock and the cage and suggest some possible explanations for these differences.



M. Han, S. Sane, C. R. Johnson. “Exploratory Lagrangian-Based Particle Tracing Using Deep Learning,” In Journal of Flow Visualization and Image Processing, Begell, 2022.
DOI: 10.1615/JFlowVisImageProc.2022041197

ABSTRACT

Time-varying vector fields produced by computational fluid dynamics simulations are often prohibitively large and pose challenges for accurate interactive analysis and exploration. To address these challenges, reduced Lagrangian representations have been increasingly researched as a means to improve scientific time-varying vector field exploration capabilities. This paper presents a novel deep neural network-based particle tracing method to explore time-varying vector fields represented by Lagrangian flow maps. In our workflow, in situ processing is first utilized to extract Lagrangian flow maps, and deep neural networks then use the extracted data to learn flow field behavior. Using a trained model to predict new particle trajectories offers a fixed small memory footprint and fast inference. To demonstrate and evaluate the proposed method, we perform an in-depth study of performance using a well-known analytical data set, the Double Gyre. Our study considers two flow map extraction strategies, the impact of the number of training samples and integration durations on efficacy, evaluates multiple sampling options for training and testing, and informs hyperparameter settings. Overall, we find our method requires a fixed memory footprint of 10.5 MB to encode a Lagrangian representation of a time-varying vector field while maintaining accuracy. For post hoc analysis, loading the trained model costs only two seconds, significantly reducing the burden of I/O when reading data for visualization. Moreover, our parallel implementation can infer one hundred locations for each of two thousand new pathlines in 1.3 seconds using one NVIDIA Titan RTX GPU.



Y. Ishidoya, E. Kwan, D. J. Dosdall, R. S. MacLeod, L. Navaravong, B. A. Steinberg, T. J. Bunch, R. Ranjan. “Shorter Distance Between The Esophagus And The Left Atrium Is Associated With Higher Rates Of Esophageal Thermal Injury After Radiofrequency Ablation,” In Journal of Cardiovascular Electrophysiology, Wiley, 2022.
DOI: 10.1111/jce.15554

ABSTRACT

Background
Esophageal thermal injury (ETI) is a known and potentially serious complication of catheter ablation for atrial fibrillation. We intended to evaluate the distance between the esophagus and the left atrium posterior wall (LAPW) and its association with esophageal thermal injury.

Methods
A retrospective analysis of 73 patients who underwent esophagogastroduodenoscopy (EGD) after LA radiofrequency catheter ablation for symptomatic atrial fibrillation and pre-ablation magnetic resonance imaging (MRI) was used to identify the minimum distance between the inner lumen of the esophagus and the ablated atrial endocardium (pre-ablation atrial esophageal distance; pre-AED) and occurrence of ETI. Parameters of ablation index (AI, Visitag Surpoint) were collected in 30 patients from the CARTO3 system and compared to assess if ablation strategies and AI further impacted risk of ETI.
Results
Pre-AED was significantly larger in patients without ETI than those with ETI (5.23 ± 0.96 mm vs 4.31 ± 0.75 mm, p < 0.001). Pre-AED showed high accuracy for predicting ETI with the best cutoff value of 4.37 mm. AI was statistically comparable between Visitag lesion markers with and without associated esophageal late gadolinium enhancement (LGE) detected by post-ablation MRI in the low-power long-duration ablation group (LPLD, 25-40W for 10 to 30 s, 393.16 [308.62, 408.86] versus 406.58 [364.38, 451.22], p = 0.16) and high-power short-duration group (HPSD, 50W for 5-10 s, 336.14 [299.66, 380.11] versus 330.54 [286.21, 384.71], p = 0.53), respectively.
Conclusion
Measuring the distance between the LA and the esophagus in pre-ablation LGE-MRI could be helpful in predicting ETI after LAPW ablation.



X. Jiang, Z. Li, R. Missel, Md. Zaman, B. Zenger, W. W. Good, R. S. MacLeod, J. L. Sapp, L. Wang. “Few-Shot Generation of Personalized Neural Surrogates for Cardiac Simulation via Bayesian Meta-learning,” In Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022, Springer Nature Switzerland, pp. 46--56. 2022.
ISBN: 978-3-031-16452-1
DOI: 10.1007/978-3-031-16452-1_5

ABSTRACT

Clinical adoption of personalized virtual heart simulations faces challenges in model personalization and expensive computation. While an ideal solution is an efficient neural surrogate that at the same time is personalized to an individual subject, the state-of-the-art is either concerned with personalizing an expensive simulation model, or learning an efficient yet generic surrogate. This paper presents a completely new concept to achieve personalized neural surrogates in a single coherent framework of meta-learning (metaPNS). Instead of learning a single neural surrogate, we pursue the process of learning a personalized neural surrogate using a small amount of context data from a subject, in a novel formulation of few-shot generative modeling underpinned by: 1) a set-conditioned neural surrogate for cardiac simulation that, conditioned on subject-specific context data, learns to generate query simulations not included in the context set, and 2) a meta-model of amortized variational inference that learns to condition the neural surrogate via simple feed-forward embedding of context data. At test time, metaPNS delivers a personalized neural surrogate by fast feed-forward embedding of a small and flexible number of data available from an individual, achieving -- for the first time -- personalization and surrogate construction for expensive simulations in one end-to-end learning framework. Synthetic and real-data experiments demonstrated that metaPNS was able to improve personalization and predictive accuracy in comparison to conventionally-optimized cardiac simulation models, at a fraction of computation.



X. Jiang, M. Toloubidokhti, J. Bergquist, B. Zenger, W. Good, R.S. MacLeod, L. Wang. “Improving Generalization by Learning Geometry-Dependent and Physics-Based Reconstruction of Image Sequences,” In IEEE Transactions on Medical Imaging, 2022.
DOI: 10.1109/TMI.2022.3218170

ABSTRACT

Deep neural networks have shown promise in image reconstruction tasks, although often on the premise of large amounts of training data. In this paper, we present a new approach to exploit the geometry and physics underlying electrocardiographic imaging (ECGI) to learn efficiently with a relatively small dataset. We first introduce a non-Euclidean encoding-decoding network that allows us to describe the unknown and measurement variables over their respective geometrical domains. We then explicitly model the geometry-dependent physics in between the two domains via a bipartite graph over their graphical embeddings. We applied the resulting network to reconstruct electrical activity on the heart surface from body-surface potentials. In a series of generalization tasks with increasing difficulty, we demonstrated the improved ability of the network to generalize across geometrical changes underlying the data using less than 10% of training data and fewer variations of training geometry in comparison to its Euclidean alternatives. In both simulation and real-data experiments, we further demonstrated its ability to be quickly fine-tuned to new geometry using a modest amount of data.



X. Jiang, J. Tate, J. Bergquist, A. Narayan, R. MacLeod, L. Wang. “Uncertainty Quantification of Cardiac Position on Deep Graph Network ECGI,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Subject-specific geometry such as cardiac position and torso size plays an important role in electrocardiographic imaging (ECGI). Previously, we introduced a graph-based neural network (GNN) that is dependent on patient-specific geometry to improve reconstruction accuracy. However, geometric uncertainty, including changes in cardiac position and torso size, has not been addressed in network-based methods. In this study, we estimate geometric uncertainty in the GNN by applying uncertainty quantification with polynomial chaos emulators (PCE). To estimate the effect of geometric variation from common motions, we evaluated the model on samples generated from different heart-torso geometries. The experiments show that the GNN is less sensitive to heart position and torso shape and help us direct the development of similar models to account for possible variability.



A. Narayan, Z. Liu, J. A. Bergquist, C. Charlebois, S. Rampersad, L. Rupp, D. Brooks, D. White, J. Tate, R. S. MacLeod. “UncertainSCI: Uncertainty quantification for computational models in biomedicine and bioengineering,” In Computers in Biology and Medicine, 2022.
DOI: 10.1016/j.compbiomed.2022.106407

ABSTRACT

Background:

Computational biomedical simulations frequently contain parameters that model physical features, material coefficients, and physiological effects, whose values are typically assumed known a priori. Understanding the effect of variability in those assumed values is currently a topic of great interest. A general-purpose software tool that quantifies how variation in these parameters affects model outputs is not broadly available in biomedicine. For this reason, we developed the ‘UncertainSCI’ uncertainty quantification software suite to facilitate analysis of uncertainty due to parametric variability.

Methods:

We developed and distributed a new open-source Python-based software tool, UncertainSCI, which employs advanced parameter sampling techniques to build polynomial chaos (PC) emulators that can be used to predict model outputs for general parameter values. Uncertainty of model outputs is studied by modeling parameters as random variables, and model output statistics and sensitivities are then easily computed from the emulator. Our approaches utilize modern, near-optimal techniques for sampling and PC construction based on weighted Fekete points constructed by subsampling from a suitably randomized candidate set.
Results:

Concentrating on two test cases—modeling bioelectric potentials in the heart and electric stimulation in the brain—we illustrate the use of UncertainSCI to estimate variability, statistics, and sensitivities associated with multiple parameters in these models.
Conclusion:

UncertainSCI is a powerful yet lightweight tool enabling sophisticated probing of parametric variability and uncertainty in biomedical simulations. Its non-intrusive pipeline allows users to leverage existing software libraries and suites to accurately ascertain parametric uncertainty in a variety of applications.
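The core idea behind a polynomial chaos emulator can be illustrated with plain NumPy (this sketch is ours and is not the UncertainSCI API; the toy model and all names are assumptions): fit a Legendre expansion to evaluations of a model with one uniform parameter on [-1, 1], then read the output mean and variance directly from the coefficients, since E[P_j P_k] = δ_jk / (2k + 1) for that distribution.

```python
import numpy as np
from numpy.polynomial import legendre

def model(p):
    # Stand-in "simulation" with a single uncertain parameter p in [-1, 1].
    return np.exp(0.5 * p) + 0.1 * p**2

# Evaluate the model at sample points and fit a degree-6 Legendre expansion.
samples = np.linspace(-1.0, 1.0, 25)
coeffs = legendre.legfit(samples, model(samples), deg=6)

# For a uniform parameter, statistics follow from the coefficients alone:
mean = coeffs[0]
variance = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs[1:], start=1))

# Monte Carlo sanity check on the emulated mean.
rng = np.random.default_rng(0)
p = rng.uniform(-1.0, 1.0, 200_000)
mc_mean = model(p).mean()
```

This is the non-intrusive pattern the abstract describes: the model is only ever called at sample points, so an existing simulation code can be wrapped without modification. UncertainSCI automates the harder parts, such as choosing near-optimal sample locations and handling multiple parameters.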



D. K. Njeru, T. M. Athawale, J. J. France, C. R. Johnson. “Quantifying and Visualizing Uncertainty for Source Localisation in Electrocardiographic Imaging,” In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Taylor & Francis, pp. 1--11. 2022.
DOI: 10.1080/21681163.2022.2113824

ABSTRACT

Electrocardiographic imaging (ECGI) presents a clinical opportunity to noninvasively understand the sources of arrhythmias for individual patients. To help increase the effectiveness of ECGI, we provide new ways to visualise associated measurement and modelling errors. In this paper, we study source localisation uncertainty in two steps: First, we perform Monte Carlo simulations of a simple inverse ECGI source localisation model with error sampling to understand the variations in ECGI solutions. Second, we present multiple visualisation techniques, including confidence maps, level-sets, and topology-based visualisations, to better understand uncertainty in source localisation. Our approach offers a new way to study uncertainty in the ECGI pipeline.
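The Monte Carlo step can be sketched with a toy matched-filter localiser (our illustration, not the paper's model; the lead-field matrix, noise level, and names are assumptions): resample measurement noise many times, localise each sample, and accumulate a per-node frequency that plays the role of a confidence map.

```python
import numpy as np

rng = np.random.default_rng(2)
n_leads, n_nodes = 30, 50
A = rng.normal(size=(n_leads, n_nodes))   # toy lead-field matrix
true_src = 17
y_clean = A[:, true_src]                  # surface pattern of a single source

# Monte Carlo over measurement noise: localise each noisy sample by the
# best-matching lead-field column (a simple matched filter per node).
counts = np.zeros(n_nodes)
for _ in range(2000):
    y = y_clean + 0.3 * rng.normal(size=n_leads)
    scores = (A.T @ y) / np.linalg.norm(A, axis=0)
    counts[np.argmax(scores)] += 1

confidence = counts / counts.sum()        # per-node localisation frequency
```

The resulting array is the kind of per-node quantity that confidence maps, level-sets, and topology-based views then summarise spatially; here it simply records how often each node wins under resampled noise.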



B.A. Orkild, J.A. Bergquist, L.C. Rupp, A. Busatto, B. Zenger, W.W. Good, J. Coll-Font, R.S. MacLeod. “A Sliding Window Approach to Regularization in Electrocardiographic Imaging,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Introduction: The inverse problem of ECGI is ill-posed, so regularization must be applied to constrain the solution. Regularization is typically applied to each individual time point (instantaneous) or to the beat as a whole (global). These techniques often lead to over- or under-regularization. We aimed to develop an inverse formulation that strikes a balance between these two approaches and realizes the benefits of both by implementing a sliding-window regularization. Methods: We formulated sliding-window regularization using the boundary element method with zeroth- and second-order Tikhonov regularization. We applied regularization to a varying time window of the body-surface potentials centered around each time sample. We compared reconstructed potentials from the sliding-window, instantaneous, and global regularization techniques to ground-truth potentials for 10 heartbeats paced from the ventricle in a large-animal model. Results: The sliding-window technique provided smoother transitions of regularization weights than instantaneous regularization while improving spatial correlation over global regularization. Discussion: Although the differences in regularization weights were nuanced, the smoother transitions provided by sliding-window regularization can eliminate the discontinuities in potential seen with instantaneous regularization.



E. Paccione, B. Hunt, E. Kwan, D. Dosdall, R. MacLeod, R. Ranjan. “Unipolar R:S Development in Chronic Atrial Fibrillation,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Past studies have examined the differences between R and S waves of unipolar atrial signals in patients with atrial fibrillation (AF) and have shown a difference in the R to S ratio (R:S) in certain regions of the atria compared to a healthy population. This work indicates a potential use of R:S as a marker for AF. In this study, we further examine these claims and investigate temporal changes in R:S over AF development in animals.

Four canines underwent AF development protocols and endocardial sinus rhythm maps were recorded as AF progressed. Unipolar signals gathered from mapping were used to calculate R:S within the left atrium of each animal. Calculations were performed at time points: before AF initiation, 3-4 months of chronic AF, and 6 months of chronic AF. From our analysis, we observed an increase in R-dominant signals within the left atrium once AF is induced. Temporal results show that R dominance may be an indicator for chronic AF patients and may be associated with the presence of arrhythmogenic substrate. With the addition of regional information, this unipolar signal analysis could guide therapeutic strategies.
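The R:S computation itself is simple to sketch (a minimal illustration with synthetic signals; the function name and baseline-removal choice are our assumptions, not the study's processing): R is the largest positive deflection from baseline, S the largest negative deflection, and their ratio flags R-dominant signals.

```python
import numpy as np

def rs_ratio(unipolar):
    """R:S ratio of a unipolar electrogram: magnitude of the largest
    positive deflection (R) over the magnitude of the largest negative
    deflection (S), after removing a median baseline."""
    signal = np.asarray(unipolar, dtype=float)
    baseline = signal - np.median(signal)
    r = max(baseline.max(), 0.0)
    s = max(-baseline.min(), 0.0)
    return np.inf if s == 0.0 else r / s

# A synthetic biphasic deflection with an R wave twice the S wave.
t = np.linspace(0, 1, 200)
egm = (2.0 * np.exp(-((t - 0.4) / 0.05) ** 2)
       - 1.0 * np.exp(-((t - 0.6) / 0.05) ** 2))
print(rs_ratio(egm))  # R-dominant: ratio near 2 for this synthetic beat
```

Mapping this per-electrode ratio over the left atrium at successive time points is what enables the kind of regional, temporal R-dominance analysis described above.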



L.C. Rupp, B. Zenger, J.A. Bergquist, A. Busatto, R.S. MacLeod. “The Role of Beta-1 Receptors in the Response to Myocardial Ischemia,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Acute myocardial ischemia is commonly diagnosed by ST-segment deviations. These deviations, however, can show a paradoxical recovery even in the face of ongoing ischemic stress. A possible mechanism for this response may be the cardio-protective effects of the autonomic nervous system (ANS) via beta-1 receptors. We assessed the role of norepinephrine (NE), a beta-1 agonist, and esmolol (ES), a beta-1 antagonist, in the recovery of ST-segment deviations during myocardial ischemia. We used an experimental model of controlled myocardial ischemia in which we simultaneously recorded electrograms intramurally and on the epicardial surface. We measured ischemia as deviations in the potentials measured at 40% of the ST-segment duration. During the control intervention, 27% of epicardial electrodes showed no ischemic ST-segment deviations, whereas during the interventions with NE and ES, 100% of epicardial electrodes showed no ischemic ST-segment deviations. Intramural electrodes revealed a different behavior, with 71% of electrodes showing no ischemic ST-segment deviations during control ischemia, increasing to 79% and 82% for the NE and ES infusion interventions, respectively. These preliminary results suggest that recovery of intramural regions of the heart is delayed by the presence of both beta-1 agonists and antagonists even as epicardial potentials show almost complete recovery.



S. Sane, C. R. Johnson, H. Childs. “Demonstrating the viability of Lagrangian in situ reduction on supercomputers,” In Journal of Computational Science, Vol. 61, Elsevier, 2022.

ABSTRACT

Performing exploratory analysis and visualization of large-scale time-varying computational science applications is challenging due to inaccuracies that arise from under-resolved data. In recent years, Lagrangian representations of the vector field computed using in situ processing are being increasingly researched and have emerged as a potential solution to enable exploration. However, prior works have offered limited estimates of the encumbrance on the simulation code as they consider “theoretical” in situ environments. Further, the effectiveness of this approach varies based on the nature of the vector field, benefitting from an in-depth investigation for each application area. With this study, an extended version of Sane et al. (2021), we contribute an evaluation of Lagrangian analysis viability and efficacy for simulation codes executing at scale on a supercomputer. We investigated previously unexplored cosmology and seismology applications as well as conducted a performance benchmarking study by using a hydrodynamics mini-application targeting exascale computing. To inform encumbrance, we integrated in situ infrastructure with simulation codes, and evaluated Lagrangian in situ reduction in representative homogeneous and heterogeneous HPC environments. To inform post hoc accuracy, we conducted a statistical analysis across a range of spatiotemporal configurations as well as a qualitative evaluation. Additionally, our study contributes cost estimates for distributed-memory post hoc reconstruction. In all, we demonstrate viability for each application — data reduction to less than 1% of the total data via Lagrangian representations, while maintaining accurate reconstruction and requiring under 10% of total execution time in over 90% of our experiments.