SCI Publications
2021
T. M. Athawale, B. J. Stanislawski, S. Sane, C. R. Johnson.
Visualizing Interactions Between Solar Photovoltaic Farms and the Atmospheric Boundary Layer, In Twelfth ACM International Conference on Future Energy Systems, pp. 377--381. 2021.
The efficiency of solar panels depends on the operating temperature. As the panel temperature rises, efficiency drops. Thus, the solar energy community aims to understand the factors that influence the operating temperature, which include wind speed, wind direction, turbulence, ambient temperature, mounting configuration, and solar cell material. We use high-resolution numerical simulations to model the flow and thermal behavior of idealized solar farms. Because these simulations model such complex behavior, advanced visualization techniques are needed to investigate and understand the results. Here, we present advanced 3D visualizations of numerical simulation results to illustrate the flow and heat transport in an idealized solar farm. The findings can be used to understand how flow behavior influences module temperatures, and vice versa.
T. M. Athawale, S. Sane, C. R. Johnson.
Uncertainty Visualization of the Marching Squares and Marching Cubes Topology Cases, Subtitled arXiv:2108.03066, 2021.
Marching squares (MS) and marching cubes (MC) are widely used algorithms for level-set visualization of scientific data. In this paper, we address the challenge of uncertainty visualization of the topology cases of the MS and MC algorithms for uncertain scalar field data sampled on a uniform grid. The visualization of the MS and MC topology cases for uncertain data is challenging due to their exponential nature and the possibility of multiple topology cases per cell of a grid. We propose the topology case count and entropy-based techniques for quantifying uncertainty in the topology cases of the MS and MC algorithms when noise in data is modeled with probability distributions. We demonstrate the applicability of our techniques for independent and correlated uncertainty assumptions. We visualize the quantified topological uncertainty via color mapping proportional to uncertainty, as well as with interactive probability queries in the MS case and entropy isosurfaces in the MC case. We demonstrate the utility of our uncertainty quantification framework in identifying the isovalues exhibiting relatively high topological uncertainty. We illustrate the effectiveness of our techniques via results on synthetic, simulation, and hixel datasets.
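As a minimal illustration of the entropy-based idea (a sketch, not the authors' implementation), the following snippet estimates topology-case probabilities and their entropy for one marching-squares cell by Monte Carlo sampling, assuming independent Gaussian noise at the cell corners; the 16 corner sign configurations stand in for the MS topology cases.

```python
import numpy as np

def topology_case_entropy(corner_means, corner_stds, isovalue, n_samples=10_000, rng=None):
    """Monte Carlo estimate of topology-case probabilities and entropy for one
    marching-squares cell, assuming independent Gaussian noise per corner."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.normal(corner_means, corner_stds, size=(n_samples, 4))
    # The sign of each corner relative to the isovalue defines one of 2^4 = 16 cases.
    cases = (samples >= isovalue).astype(int) @ np.array([1, 2, 4, 8])
    probs = np.bincount(cases, minlength=16) / n_samples
    nonzero = probs[probs > 0]
    return probs, -np.sum(nonzero * np.log2(nonzero))

# A cell whose corners straddle the isovalue exhibits high topological uncertainty.
probs, H = topology_case_entropy([0.48, 0.52, 0.49, 0.51], [0.05] * 4, isovalue=0.5)
print(f"topology-case entropy = {H:.2f} bits")
```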
T. M. Athawale, B. Ma, E. Sakhaee, C. R. Johnson, A. Entezari.
Direct Volume Rendering with Nonparametric Models of Uncertainty, In IEEE Transactions on Visualization and Computer Graphics, Vol. 27, No. 2, pp. 1797-1807. 2021.
DOI: 10.1109/TVCG.2020.3030394
We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending the DVR framework for nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art rendering parametric framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.
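A rough illustration of the quantile-interpolation idea (the paper derives closed-form distributions for viewing-ray samples; the toy below, with assumed sample arrays, is not that derivation) is to blend two empirical distributions by linearly interpolating their quantile functions:

```python
import numpy as np

def quantile_interpolate(samples_a, samples_b, t, n_quantiles=256):
    """Blend two empirical (nonparametric) distributions by linearly
    interpolating their quantile functions: t=0 recovers A, t=1 recovers B."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return (1.0 - t) * np.quantile(samples_a, q) + t * np.quantile(samples_b, q)

# Example: the halfway distribution between a bimodal and a unimodal voxel model.
rng = np.random.default_rng(0)
a = np.concatenate([rng.normal(-1, 0.2, 500), rng.normal(1, 0.2, 500)])
b = rng.normal(0.0, 0.5, 1000)
mid = quantile_interpolate(a, b, t=0.5)
print(f"interpolated mean = {mid.mean():.3f}, std = {mid.std():.3f}")
```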
P. R. Atkins, P. Agrawal, J. D. Mozingo, K. Uemura, K. Tokunaga, C. L. Peters, S. Y. Elhabian, R. T. Whitaker, A. E. Anderson.
Prediction of Femoral Head Coverage from Articulated Statistical Shape Models of Patients with Developmental Dysplasia of the Hip, In Journal of Orthopaedic Research, Wiley, 2021.
DOI: 10.1002/jor.25227
Developmental dysplasia of the hip (DDH) is commonly described as reduced femoral head coverage due to anterolateral acetabular deficiency. Although reduced coverage is the defining trait of DDH, more subtle and localized anatomic features of the joint are also thought to contribute to symptom development and degeneration. These features are challenging to identify using conventional approaches. Herein, we assessed the morphology of the full femur and hemi-pelvis using an articulated statistical shape model (SSM). The model determined the morphological and pose-based variations associated with DDH in a population of Japanese females and established which of these variations predict coverage. Computed tomography images of 83 hips from 47 patients were segmented for input into a correspondence-based SSM. The dominant modes of variation in the model initially represented scale and pose. After removal of these factors through individual bone alignment, femoral version and neck-shaft angle, pelvic curvature, and acetabular version dominated the observed variation. Femoral head oblateness and prominence of the acetabular rim and various muscle attachment sites of the femur and hemi-pelvis were found to predict 3D CT-based coverage measurements (R² = 0.5-0.7 for the full bones, R² = 0.9 for the joint).
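The generic correspondence-based pipeline behind such predictions can be sketched as PCA on correspondence points followed by regression of coverage on the resulting mode scores. The snippet below is a heavily simplified, hypothetical stand-in (synthetic data, no articulation or bone-wise alignment), not the study's model.

```python
import numpy as np

def ssm_mode_scores(correspondences, n_modes=5):
    """PCA mode scores from correspondence points (n_subjects x n_points x 3)."""
    X = correspondences.reshape(len(correspondences), -1)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_modes].T

def coverage_r2(scores, coverage):
    """R^2 of a linear regression of coverage measurements on mode scores."""
    A = np.column_stack([scores, np.ones(len(coverage))])
    pred = A @ np.linalg.lstsq(A, coverage, rcond=None)[0]
    return 1.0 - np.sum((coverage - pred) ** 2) / np.sum((coverage - coverage.mean()) ** 2)

rng = np.random.default_rng(1)
shapes = rng.normal(size=(40, 200, 3))                    # hypothetical correspondences
coverage = shapes[:, 0, 0] + 0.1 * rng.normal(size=40)    # hypothetical coverage values
print(f"R^2 = {coverage_r2(ssm_mode_scores(shapes), coverage):.2f}")
```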
A. Bagherinezhad, M. Young, Bei Wang, M. Parvania.
Spatio-Temporal Visualization of Interdependent Battery Bus Transit and Power Distribution Systems, In IEEE PES Innovative Smart Grid Technologies Conference (ISGT), IEEE, 2021.
The high penetration of transportation electrification and its associated charging requirements magnify the interdependency of the transportation and power distribution systems. The emergent interdependency requires that system operators fully understand the status of both systems. To this end, a visualization tool is presented to illustrate the interdependency of battery bus transit and power distribution systems and the associated components. The tool aims at monitoring components from both systems, such as the locations of electric buses, the state of charge of batteries, the price of electricity, voltage, current, and active/reactive power flow. The results showcase the success of the visualization tool in monitoring the bus transit and power distribution components to determine a reliable, cost-effective scheme for spatio-temporal charging of electric buses.
M. K. Ballard, R. Amici, V. Shankar, L. A. Ferguson, M. Braginsky, R. M. Kirby.
Towards an Extrinsic, CG-XFEM Approach Based on Hierarchical Enrichments for Modeling Progressive Fracture, Subtitled arXiv preprint arXiv:2104.14704, 2021.
We propose an extrinsic, continuous-Galerkin (CG), extended finite element method (XFEM) that generalizes the work of Hansbo and Hansbo to allow multiple Heaviside enrichments within a single element in a hierarchical manner. This approach enables complex, evolving XFEM surfaces in 3D that cannot be captured using existing CG-XFEM approaches. We describe an implementation of the method for 3D static elasticity with linearized strain for modeling open cracks as a salient step towards modeling progressive fracture. The implementation includes a description of the finite element model, hybrid implicit/explicit representation of enrichments, numerical integration method, and novel degree-of-freedom (DoF) enumeration algorithm. This algorithm supports an arbitrary number of enrichments within an element, while simultaneously maintaining a CG solution across elements. Additionally, our approach easily allows an implementation suitable for distributed computing systems. Enabled by the DoF enumeration algorithm, the proposed method lays the groundwork for a computational tool that efficiently models progressive fracture. To facilitate a discussion of the complex enrichment hierarchies, we develop enrichment diagrams to succinctly describe and visualize the relationships between the enrichments (and the fields they create) within an element. This also provides a unified language for discussing extrinsic XFEM methods in the literature. We compare several methods, relying on the enrichment diagrams to highlight their nuanced differences.
D. Balouek-Thomert, I. Rodero, M. Parashar.
Evaluating policy-driven adaptation on the Edge-to-Cloud Continuum, In IEEE/ACM HPC for Urgent Decision Making (UrgentHPC), pp. 11-20. 2021.
DOI: 10.1109/UrgentHPC54802.2021.00007
Developing data-driven applications requires developers and service providers to orchestrate data-to-discovery pipelines across distributed data sources and computing units. Realizing such pipelines poses two major challenges: programming analytics that reacts at runtime to unforeseen events, and adaptation of the resources and computing paths between the edge and the cloud. While these concerns are interdependent, they must be separated during the design process of the application and the deployment operations of the infrastructure. This work proposes a system stack for the adaptation of distributed analytics across the computing continuum. We implemented this software stack to evaluate its ability to continually balance the computation or data movement’s cost with the value of operations to the application objectives. Using a disaster response application, we observe that the system can select appropriate configurations while managing trade-offs between user-defined constraints, quality of results, and resource utilization. The evaluation shows that our model is able to adapt to variations in the data input size, bandwidth, and CPU capacities with minimal deadline violations (close to 10%). This constitutes encouraging results to benefit and facilitate the creation of ad-hoc computing paths for urgent science and time-critical decision-making.
D. Balouek-Thomert, P. Silva, K. Fauvel, A. Costan, G. Antoniu, M. Parashar.
MDSC: Modelling Distributed Stream Processing across the Edge-to-Cloud Continuum, In DML-ICC 2021 workshop (held in conjunction with UCC 2021), December, 2021.
The growth of the Internet of Things is resulting in an explosion of data volumes at the Edge of the Internet. To reduce costs incurred due to data movement and centralized cloud-based processing, it is becoming increasingly important to process and analyze such data closer to the data sources. Exploiting Edge computing capabilities for stream-based processing is however challenging. It requires addressing the complex characteristics and constraints imposed by all the resources along the data path, as well as the large set of heterogeneous data processing and management frameworks. Consequently, the community needs tools that can facilitate the modeling of this complexity and can integrate the various components involved. In this work, we introduce MDSC, a hierarchical approach for modeling distributed stream-based applications on Edge-to-Cloud continuum infrastructures. We demonstrate how MDSC can be applied to a concrete, real-life ML-based application, early earthquake warning, to help answer questions such as: when is it worth decentralizing the classification load from the Cloud to the Edge, and how?
M. Berzins.
Symplectic Time Integration Methods for the Material Point Method, Experiments, Analysis and Order Reduction, In WCCM-ECCOMAS2020 virtual Conference, Note: Minor typographical correction in March 2024, January, 2021.
The provision of appropriate time integration methods for the Material Point Method (MPM) involves considering stability, accuracy and energy conservation. A class of methods that addresses many of these issues is the widely used symplectic time integration methods. Such methods have good conservation properties and have the potential to achieve high accuracy. In this work we build on the work in [5] and consider high-order methods for the time integration of the Material Point Method. The results of practical experiments show that while high-order methods in both space and time have good accuracy initially, unless the problem has relatively little particle movement, their accuracy at later times is closer to that of low-order methods. A theoretical analysis explains these results as being similar to the stage error found in Runge-Kutta methods, though in this case the stage error arises from the MPM differentiations and interpolations from particles to grid and back again, particularly in cases in which there are many grid crossings.
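For context, the simplest member of the symplectic family is semi-implicit (symplectic) Euler, which updates velocity before position. The toy sketch below is a stand-in only; the paper studies higher-order schemes within MPM, with particle-grid transfers that this snippet omits. It simply shows the bounded-energy behaviour on a harmonic oscillator.

```python
import numpy as np

def symplectic_euler(x, v, force, dt, n_steps, mass=1.0):
    """Semi-implicit (symplectic) Euler: velocity update first, then position
    with the updated velocity."""
    for _ in range(n_steps):
        v = v + dt * force(x) / mass
        x = x + dt * v
    return x, v

# Harmonic oscillator F = -k x: the energy stays bounded near its initial value,
# whereas forward Euler would let it grow without bound.
k = 1.0
x, v = symplectic_euler(x=1.0, v=0.0, force=lambda x: -k * x, dt=0.1, n_steps=1000)
print(f"energy after 1000 steps = {0.5 * v**2 + 0.5 * k * x**2:.3f} (initial 0.5)")
```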
J. A. Bergquist, W. W. Good, B. Zenger, J. D. Tate, L. C. Rupp, R. S. MacLeod.
The Electrocardiographic Forward Problem: A Benchmark Study, In Computers in Biology and Medicine, Vol. 134, Pergamon, pp. 104476. 2021.
DOI: https://doi.org/10.1016/j.compbiomed.2021.104476
Background
Electrocardiographic forward problems are crucial components for noninvasive electrocardiographic imaging (ECGI) that compute torso potentials from cardiac source measurements. Forward problems have few sources of error as they are physically well posed and supported by mature numerical and computational techniques. However, the residual errors reported from experimental validation studies between forward computed and measured torso signals remain surprisingly high.
Objective
To test the hypothesis that incomplete cardiac source sampling, especially above the atrioventricular (AV) plane is a major contributor to forward solution errors.
Methods
We used a modified Langendorff preparation suspended in a human-shaped electrolytic torso-tank and a novel pericardiac-cage recording array to thoroughly sample the cardiac potentials. With this carefully controlled experimental preparation, we minimized possible sources of error, including geometric error and torso inhomogeneities. We progressively removed recorded signals from above the atrioventricular plane to determine how the forward-computed torso-tank potentials were affected by incomplete source sampling.
Results
We studied 240 beats total recorded from three different activation sequence types (sinus, and posterior and anterior left-ventricular free-wall pacing) in each of two experiments. With complete sampling by the cage electrodes, all correlation metrics between computed and measured torso-tank potentials were above 0.93 (maximum 0.99). The mean root-mean-squared error across all beat types was also low, less than or equal to 0.10 mV. A precipitous drop in forward solution accuracy was observed when we included only cage measurements below the AV plane.
Conclusion
First, our forward computed potentials using complete cardiac source measurements set a benchmark for similar studies. Second, this study validates the importance of complete cardiac source sampling above the AV plane to produce accurate forward computed torso potentials. Testing ECGI systems and techniques with these more complete and highly accurate datasets will improve inverse techniques and noninvasive detection of cardiac electrical abnormalities.
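The correlation and root-mean-squared-error metrics reported above can be computed as in the hedged sketch below; the array names, shapes, and per-time-sample correlation convention are assumptions for illustration, not the study's code.

```python
import numpy as np

def forward_solution_metrics(computed, measured):
    """Per-time-sample Pearson correlation across electrodes and overall RMSE
    between forward-computed and measured torso-tank potentials.
    Both arrays are assumed to be shaped (n_electrodes, n_time_samples)."""
    corr = np.array([np.corrcoef(computed[:, t], measured[:, t])[0, 1]
                     for t in range(computed.shape[1])])
    rmse = np.sqrt(np.mean((computed - measured) ** 2))
    return corr, rmse

# Synthetic example with hypothetical potentials in mV.
rng = np.random.default_rng(2)
measured = rng.normal(size=(192, 500))
computed = measured + rng.normal(scale=0.05, size=measured.shape)
corr, rmse = forward_solution_metrics(computed, measured)
print(f"median correlation = {np.median(corr):.3f}, RMSE = {rmse:.3f} mV")
```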
M. Berzins.
Time Stepping with Space and Time Errors and Stability of the Material Point Method, In Proceedings of VII International Conference on Particle-Based Methods, PARTICLES 2021, Edited by P. Wriggers, M. Bischoff, E. Onate, A. Duster & T. Zohdi, 2021.
The choice of the time step for the Material Point Method (MPM) is often addressed by using a simple stability criterion, such as the speed of sound or a CFL condition. Recently, there have been several advances in understanding the stability of MPM. These range from non-linear stability analysis to von Neumann-type approaches. While in many instances this works well, it is important to understand how this relates to the overall errors present in the method. Although it has been observed that spatial errors may dominate temporal ones at stable time steps, recent work has made more precise the sources and forms of the different MPM errors. This now makes it possible to understand how the different errors and the stability analysis are connected. At the same time, this also requires simple, computable estimates of the different errors in the Material Point Method. The use of simple estimates of these errors makes it possible to connect some of the errors introduced with the stability criteria used. A number of simple computational experiments are used to illustrate the theoretical results.
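For reference, a CFL-type criterion of the kind mentioned above bounds the time step by the grid spacing and the material sound speed; in the linear-elastic setting it takes the generic textbook form below (an illustrative form, not the specific criterion analyzed in the paper):

```latex
\Delta t \;\le\; C\,\frac{\Delta x}{c},
\qquad c = \sqrt{\frac{E}{\rho}},
\qquad 0 < C \le 1,
```

where $\Delta x$ is the grid spacing, $E$ is Young's modulus, $\rho$ is the density, and $C$ is the Courant number.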
R. Bhalodia, S. Elhabian, L. Kavan, R. Whitaker.
Leveraging Unsupervised Image Registration for Discovery of Landmark Shape Descriptor, In Medical Image Analysis, Elsevier, pp. 102157. 2021.
In current biological and medical research, statistical shape modeling (SSM) provides an essential framework for the characterization of anatomy/morphology. Such analysis is often driven by the identification of a relatively small number of geometrically consistent features found across the samples of a population. These features can subsequently provide information about the population shape variation. Dense correspondence models can provide ease of computation and yield an interpretable low-dimensional shape descriptor when followed by dimensionality reduction. However, automatic methods for obtaining such correspondences usually require image segmentation followed by significant preprocessing, which is taxing in terms of both computation and human resources. In many cases, the segmentation and subsequent processing require manual guidance and anatomy-specific domain expertise. This paper proposes a self-supervised deep learning approach for discovering landmarks from images that can directly be used as a shape descriptor for subsequent analysis. We use landmark-driven image registration as the primary task to force the neural network to discover landmarks that register the images well. We also propose a regularization term that allows for robust optimization of the neural network and ensures that the landmarks uniformly span the image domain. The proposed method circumvents segmentation and preprocessing and directly produces a usable shape descriptor using just 2D or 3D images. In addition, we also propose two variants of the training loss function that allow prior shape information to be integrated into the model. We apply this framework to several 2D and 3D datasets to obtain their shape descriptors. We analyze how well these shape descriptors capture shape information by performing shape-driven applications that, depending on the data, range from shape clustering to severity prediction to outcome diagnosis.
R. Bhalodia, S. Elhabian, J. Adams, W. Tao, L. Kavan, R. Whitaker.
DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models, Subtitled arXiv preprint arXiv:2110.07152, 2021.
Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images. SSM requires consistent shape representation across samples in the shape cohort. Establishing this representation entails a processing pipeline that includes anatomy segmentation, re-sampling, registration, and non-linear optimization. These shape representations are then used to extract low-dimensional shape descriptors that facilitate subsequent analyses in different applications. However, the current process of obtaining these shape descriptors from imaging data relies on human and computational resources, requiring domain expertise for segmenting anatomies of interest. Moreover, this same taxing pipeline needs to be repeated to infer shape descriptors for new image data using a pre-trained/existing shape model. Here, we propose DeepSSM, a deep learning-based framework for learning the functional mapping from images to low-dimensional shape descriptors and their associated shape representations, thereby inferring statistical representation of anatomy directly from 3D images. Once trained using an existing shape model, DeepSSM circumvents the heavy and manual pre-processing and segmentation and significantly improves the computational time, making it a viable solution for fully end-to-end SSM applications. In addition, we introduce a model-based data-augmentation strategy to address data scarcity. Finally, this paper presents and analyzes two different architectural variants of DeepSSM with different loss functions using three medical datasets and their downstream clinical application. Experiments show that DeepSSM performs comparably to or better than state-of-the-art SSM, both quantitatively and on application-driven downstream tasks. Therefore, DeepSSM aims to provide a comprehensive blueprint for deep learning-based image-to-shape models.
H. Bhatia, S. N. Petruzza, R. Anirudh, A. G. Gyulassy, R. M. Kirby, V. Pascucci, P. T. Bremer.
Data-Driven Estimation of Temporal-Sampling Errors in Unsteady Flows, 2021.
While computer simulations typically store data at the highest available spatial resolution, it is often infeasible to do so for the temporal dimension. Instead, the common practice is to store data at regular intervals, the frequency of which is strictly limited by the available storage and I/O bandwidth. However, this manner of temporal subsampling can cause significant errors in subsequent analysis steps. More importantly, since the intermediate data is lost, there is no direct way of measuring this error after the fact. One particularly important use case that is affected is the analysis of unsteady flows using pathlines, as it depends on an accurate interpolation across time. Although the potential problem with temporal undersampling is widely acknowledged, there currently does not exist a practical way to estimate the potential impact. This paper presents a simple-to-implement yet powerful technique to estimate the error in pathlines due to temporal subsampling. Given an unsteady flow, we compute pathlines at the given temporal resolution as well as subsamples thereof. We then compute the error induced due to various levels of subsampling and use it to estimate the error between the given resolution and the unknown ground truth. Using two turbulent flows, we demonstrate that our approach, for the first time, provides an accurate, a posteriori error estimate for pathline computations. This estimate will enable scientists to better understand the uncertainties involved in pathline-based analysis techniques and can lead to new uncertainty visualization approaches using the predicted errors.
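The core idea, estimating error by comparing pathlines traced with the stored temporal resolution against further-subsampled versions of it, can be sketched as follows; the analytic flow, RK2 integrator, and parameters are placeholders, not the paper's data or implementation.

```python
import numpy as np

def velocity(p, t):
    """Analytic unsteady 2D flow standing in for stored simulation output."""
    x, y = p
    return np.array([-y + 0.3 * np.sin(2.0 * t), x])

def trace(seed, t0, t1, dt, sample_times):
    """Advect one particle with RK2, but only evaluate the velocity at
    sample_times and interpolate linearly in time between them, as a
    flow-visualization tool would with temporally subsampled data."""
    def v_interp(p, t):
        i = np.clip(np.searchsorted(sample_times, t, side="right") - 1,
                    0, len(sample_times) - 2)
        ta, tb = sample_times[i], sample_times[i + 1]
        w = (t - ta) / (tb - ta)
        return (1.0 - w) * velocity(p, ta) + w * velocity(p, tb)

    p, t = np.array(seed, float), t0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        k1 = v_interp(p, t)
        k2 = v_interp(p + h * k1, t + h)
        p, t = p + 0.5 * h * (k1 + k2), t + h
    return p

full = np.arange(0.0, 6.0 + 1e-9, 0.05)   # stored temporal resolution
coarse = full[::4]                         # 4x temporal subsampling
err = np.linalg.norm(trace((1.0, 0.0), 0.0, 6.0, 0.01, full)
                     - trace((1.0, 0.0), 0.0, 6.0, 0.01, coarse))
print(f"endpoint discrepancy between full and 4x-subsampled pathlines: {err:.4f}")
```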
H. Bhatia, D. Hoang, N. Morrical, V. Pascucci, P.T. Bremer, P. Lindstrom.
AMM: Adaptive Multilinear Meshes, Subtitled arXiv:2007.15219, 2021.
Adaptive representations are increasingly indispensable for reducing the in-memory and on-disk footprints of large-scale data. Usual solutions are designed broadly along two themes: reducing data precision, e.g., through compression, or adapting data resolution, e.g., using spatial hierarchies. Recent research suggests that combining the two approaches, i.e., adapting both resolution and precision simultaneously, can offer significant gains over using them individually. However, there currently exist no practical solutions to creating and evaluating such representations at scale. In this work, we present a new resolution-precision-adaptive representation to support hybrid data reduction schemes and offer an interface to existing tools and algorithms. Through novelties in spatial hierarchy, our representation, Adaptive Multilinear Meshes (AMM), provides considerable reduction in the mesh size. AMM creates a piecewise multilinear representation of uniformly sampled scalar data and can selectively relax or enforce constraints on conformity, continuity, and coverage, delivering a flexible adaptive representation. AMM also supports representing the function using mixed-precision values to further the achievable gains in data reduction. We describe a practical approach to creating AMM incrementally using arbitrary orderings of data and demonstrate AMM on six types of resolution and precision data streams. By interfacing with state-of-the-art rendering tools through VTK, we demonstrate the practical and computational advantages of our representation for visualization techniques. With an open-source release of our tool to create AMM, we make such evaluation of data reduction accessible to the community, which we hope will foster new opportunities and future data reduction schemes.
S. R. Black, A. Janson, M. Mahan, J. Anderson, C. R. Butson.
Identification of Deep Brain Stimulation Targets for Neuropathic Pain After Spinal Cord Injury Using Localized Increases in White Matter Fiber Cross‐Section, In Neuromodulation: Technology at the Neural Interface, John Wiley & Sons, Inc., 2021.
Objectives
The spinal cord injury (SCI) patient population is overwhelmingly affected by neuropathic pain (NP), a secondary condition for which therapeutic options are limited and have a low degree of efficacy. The objective of this study was to identify novel deep brain stimulation (DBS) targets that may theoretically benefit those with NP in the SCI patient population. We hypothesize that localized changes in white matter identified in SCI subjects with NP compared to those without NP could be used to develop an evidence‐based approach to DBS target identification.
K. M. Campbell, H. Dai, Z. Su, M. Bauer, P. T. Fletcher, S. C. Joshi.
Structural Connectome Atlas Construction in the Space of Riemannian Metrics, Subtitled arXiv, 2021.
The structural connectome is often represented by fiber bundles generated from various types of tractography. We propose a method of analyzing connectomes by representing them as a Riemannian metric, thereby viewing them as points in an infinite-dimensional manifold. After equipping this space with a natural metric structure, the Ebin metric, we apply object-oriented statistical analysis to define an atlas as the Fréchet mean of a population of Riemannian metrics. We demonstrate connectome registration and atlas formation using connectomes derived from diffusion tensors estimated from a subset of subjects from the Human Connectome Project.
S. H. Campbell, T. Bidone.
3D Model of Cell Migration and Proliferation in a Tissue Scaffold, In Biophysical Journal, Vol. 120, No. 3, Elsevier, pp. 265a. 2021.
Tissue scaffolds restore tissue functionality without the limitations of transplants. However, successful tissue growth depends on the interplay between scaffold properties and cell activities. It has been previously reported that scaffold porosity and Young's modulus affect cell migration and tissue generation. However, exactly how the geometrical and mechanical properties of a scaffold interplay with cell processes remains poorly understood, and this interplay is essential for successful tissue growth. We developed a 3D computational model that simulates cell migration and proliferation on a scaffold. The model generates an adjustable 3D porous scaffold environment with a defined pore size and Young's modulus. Cells are treated as explicit spherical particles comparable in size to bone-marrow cells and are initially seeded randomly throughout the scaffold. Cells can create adhesions, proliferate, and independently migrate across pores in a random walk. Cell adhesions during migration follow the molecular-clutch mechanism, where traction force from the cells against the scaffold stiffness reinforces adhesion lifetime up to a threshold. We used the model to test how variations in cell proliferation rate, scaffold Young's modulus, and porosity affect cell migration speed. At a low proliferation rate (1 × 10⁻⁷ s⁻¹), the spread of cell speeds is larger than at a high replication rate (1 × 10⁻⁶ s⁻¹). A biphasic relation between Young's modulus and cell speed is also observed, reflecting the molecular-clutch mechanism at the level of individual adhesions. These observations are consistent with previous reports regarding fibroblast migration on collagen-glycosaminoglycan scaffolds. Additionally, our model shows that similar cell and pore diameters induce a crowding effect that decreases cell speed. The results from our study provide important insights into the biophysical mechanisms that govern cell motility on scaffolds with different properties for tissue engineering applications.
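A drastically simplified sketch of this kind of agent-based simulation is given below (hypothetical parameters, a periodic pore lattice, and no molecular-clutch adhesion dynamics or stiffness dependence); it only illustrates the random-walk migration and stochastic proliferation ingredients described above.

```python
import numpy as np

def simulate_cells(n_cells=50, n_steps=1000, dt=1.0, step_len=0.5,
                   prolif_rate=1e-4, pore_size=10.0, seed=0):
    """Toy 3D random-walk migration with stochastic proliferation inside a
    periodic pore of the given size (arbitrary length/time units)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, pore_size, size=(n_cells, 3))
    for _ in range(n_steps):
        pos = (pos + rng.normal(scale=step_len, size=pos.shape)) % pore_size
        dividing = rng.random(len(pos)) < prolif_rate * dt   # per-step division
        pos = np.vstack([pos, pos[dividing]])                # daughters at parent sites
    return pos

print(f"{len(simulate_cells())} cells after the simulated interval")
```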
K.M. Campbell, H. Dai, Z. Su, M. Bauer, P.T. Fletcher, S.C. Joshi.
Integrated Construction of Multimodal Atlases with Structural Connectomes in the Space of Riemannian Metrics, Subtitled arXiv preprint arXiv:2109.09808, 2021.
The structural network of the brain, or structural connectome, can be represented by fiber bundles generated by a variety of tractography methods. While such methods give qualitative insights into brain structure, there is controversy over whether they can provide quantitative information, especially at the population level. In order to enable population-level statistical analysis of the structural connectome, we propose representing a connectome as a Riemannian metric, which is a point on an infinite-dimensional manifold. We equip this manifold with the Ebin metric, a natural metric structure for this space, to get a Riemannian manifold along with its associated geometric properties. We then use this Riemannian framework to apply object-oriented statistical analysis to define an atlas as the Fréchet mean of a population of Riemannian metrics. This formulation ties into the existing framework for diffeomorphic construction of image atlases, allowing us to construct a multimodal atlas by simultaneously integrating complementary white matter structure details from DWMRI and cortical details from T1-weighted MRI. We illustrate our framework with 2D data examples of connectome registration and atlas formation. Finally, we build an example 3D multimodal atlas using T1 images and connectomes derived from diffusion tensors estimated from a subset of subjects from the Human Connectome Project.
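The atlas definition referred to above is the standard Fréchet mean, here taken with respect to the geodesic distance induced by the Ebin metric:

```latex
\bar{g} \;=\; \operatorname*{arg\,min}_{g}\; \sum_{i=1}^{N} d_{\mathrm{Ebin}}\!\left(g,\, g_i\right)^{2},
```

where $g_1,\dots,g_N$ are the subject connectome metrics.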
M. Carlson, X. Zheng, H. Sundar, G. E. Karniadakis, R. M. Kirby.
An open-source parallel code for computing the spectral fractional Laplacian on 3D complex geometry domains, In Computer Physics Communications, Vol. 261, North-Holland, pp. 107695. 2021.
We present a spectral element algorithm and open-source code for computing the fractional Laplacian defined by the eigenfunction expansion on finite 2D/3D complex domains with both homogeneous and nonhomogeneous boundaries. We demonstrate the scalability of the spectral element algorithm on large clusters by constructing the fractional Laplacian based on computed eigenvalues and eigenfunctions using up to thousands of CPUs. To demonstrate the accuracy of this eigen-based approach for computing the fractional Laplacian, we approximate the solutions of the fractional diffusion equation using the computed eigenvalues and eigenfunctions on a 2D quadrilateral, and on a 3D cubic and cylindrical domain, and compare the results with the contrived solutions to demonstrate fast convergence. Subsequently, we present simulation results for a fractional diffusion equation on a hand-shaped domain discretized with 3D hexahedra, as well as on a domain constructed from the Hanford site geometry corresponding to nonzero Dirichlet boundary conditions. Finally, we apply the algorithm to solve the surface quasi-geostrophic (SQG) equation on a 2D square with periodic boundaries. Simulation results demonstrate the accuracy, efficiency, and geometric flexibility of our algorithm and that our algorithm can capture the subtle dynamics of anomalous diffusion modeled by the fractional Laplacian on complex geometry domains. The included open-source code is the first of its kind.
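A one-dimensional toy analogue of the eigen-based construction (a sketch under strong simplifications: an interval with homogeneous Dirichlet boundaries and a sine eigenbasis, not the paper's 2D/3D spectral-element code) is:

```python
import numpy as np

def fractional_laplacian_1d(u, s, L=1.0):
    """Spectral fractional Laplacian (-Delta)^s of samples u at the n interior
    nodes of (0, L) with homogeneous Dirichlet boundaries, via the sine
    eigenbasis: lambda_k = (k*pi/L)^2, phi_k(x) = sqrt(2/L)*sin(k*pi*x/L)."""
    n = len(u)
    x = np.linspace(0.0, L, n + 2)[1:-1]
    k = np.arange(1, n + 1)
    lam = (k * np.pi / L) ** 2
    phi = np.sqrt(2.0 / L) * np.sin(np.outer(x, k) * np.pi / L)
    coeffs = (L / (n + 1)) * phi.T @ u        # approximate L2 projections
    return phi @ (lam ** s * coeffs)

# Check against an exact eigenfunction: (-Delta)^s sin(pi x) = pi^(2s) sin(pi x).
n, s = 200, 0.5
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
u = np.sin(np.pi * x)
err = np.max(np.abs(fractional_laplacian_1d(u, s) - np.pi ** (2 * s) * u))
print(f"max error against the analytic result: {err:.2e}")
```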