This paper addresses the challenge of extracting meaningful information from measured bioelectric signals generated by complex, large-scale physiological systems such as the brain or the heart. We combine the well-known Laplacian eigenmaps machine learning approach with dynamical systems ideas to analyze emergent dynamic behaviors. The method reconstructs the abstract dynamical system phase-space geometry of the embedded measurements and tracks changes in physiological conditions or activities through changes in that geometry. It is geared to extract information from the joint behavior of time traces obtained from large sensor arrays, such as those used in multiple-electrode ECG and EEG, and to explore the geometrical structure of the low-dimensional embedding of moving time windows of those joint snapshots. Our main contribution is a method for mapping vectors from the phase space to the data domain. We present cases to evaluate the method, including a synthetic example using the chaotic Lorenz system, several sets of cardiac measurements from both canine and human hearts, and measurements from a human brain.
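As an illustration of the embedding step described above, the following is a minimal sketch in Python using scikit-learn's SpectralEmbedding (an implementation of Laplacian eigenmaps). The window length, step, neighborhood size, and embedding dimension are arbitrary placeholders, and the sketch does not reproduce the paper's own pipeline or its phase-space-to-data mapping contribution.

```python
# Hedged sketch: Laplacian-eigenmaps embedding of sliding windows of a
# multichannel signal (channels x samples). Parameters are placeholders.
import numpy as np
from sklearn.manifold import SpectralEmbedding  # Laplacian eigenmaps

def embed_windows(X, win=64, step=8, n_components=3, n_neighbors=10):
    """Embed overlapping time windows of X (n_channels x n_samples)."""
    n_channels, n_samples = X.shape
    starts = range(0, n_samples - win + 1, step)
    # Each window of joint sensor snapshots becomes one high-dimensional point.
    windows = np.stack([X[:, s:s + win].ravel() for s in starts])
    emb = SpectralEmbedding(n_components=n_components,
                            affinity="nearest_neighbors",
                            n_neighbors=n_neighbors)
    return emb.fit_transform(windows)  # (n_windows x n_components) points

# Example with synthetic data standing in for multielectrode recordings.
rng = np.random.default_rng(0)
Y = embed_windows(rng.standard_normal((16, 2000)))
```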
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we quantify, for the first time, the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10⁶ nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which substantially reduced computation times, and quantified the importance of the blood vessel compartment by computing the forward and inverse errors that result from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approximately 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses, particularly when high accuracy is required in brain areas with dense vasculature.
Vision loss after optic neuropathy is considered irreversible. Here, repetitive transorbital alternating current stimulation (rtACS) was applied in partially blind patients with the goal of activating their residual vision.
We conducted a multicenter, prospective, randomized, double-blind, sham-controlled trial in an ambulatory setting, with rtACS (n = 45) or sham stimulation (n = 37) applied daily for 50 min over 10 weekdays. A volunteer sample of patients with optic nerve damage (mean age 59.1 yrs) was recruited. The primary outcome measure for efficacy was super-threshold visual fields within 48 hrs after the last treatment day and at 2-month follow-up. Secondary outcome measures were near-threshold visual fields, reaction time, visual acuity, and resting-state EEGs to assess changes in brain physiology.
The rtACS-treated group had a mean visual field improvement of 24.0%, which was significantly greater than after sham stimulation (2.5%). This improvement persisted for at least 2 months in both within- and between-group comparisons. Secondary analyses revealed improvements of near-threshold visual fields in the central 5°, increased thresholds in static perimetry, and improved reaction times after rtACS, but visual acuity did not change compared to shams. Visual field improvement induced by rtACS was associated with alterations in EEG power spectra and coherence in visual cortical networks, which are interpreted as signs of neuromodulation. Current flow simulation indicates current in the frontal cortex, eye, and optic nerve, and in subcortical but not cortical regions.
rtACS treatment is a safe and effective means to partially restore vision after optic nerve damage, probably by modulating brain plasticity. This class 1 evidence suggests that visual fields can be improved in a clinically meaningful way.
Functional properties of neurons are strongly coupled with their morphology. Changes in neuronal activity alter the morphological characteristics of dendritic spines. A first step towards understanding the structure-function relationship is to group spines into the main spine classes reported in the literature. Shape analysis of dendritic spines can help neuroscientists understand the underlying relationships. Due to the unavailability of reliable automated tools, this analysis is currently performed manually, which is a time-intensive and subjective task. Several studies on spine shape classification have been reported in the literature; however, there is an ongoing debate on whether distinct spine shape classes exist or whether spines should be modeled through a continuum of shape variations. Another challenge is the subjectivity and bias introduced by the supervised nature of classification approaches. In this paper, we aim to address these issues from a clustering perspective. In this context, clustering may serve both to confirm known patterns and to discover new ones. We perform cluster analysis on two-photon microscopic images of spines using morphological, shape, and appearance based features and gain insights into the spine shape analysis problem. We use histogram of oriented gradients (HOG), disjunctive normal shape models (DNSM), morphological features, and intensity profile based features for cluster analysis. We use x-means, which selects the number of clusters automatically using the Bayesian information criterion (BIC), to perform the cluster analysis. For all features, this analysis produces 4 clusters, and we observe the formation of at least one cluster consisting of spines that are difficult to assign to a known class. This observation supports the argument of intermediate shape types.
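A minimal sketch of the model-selection idea behind the analysis above follows. x-means itself is not reproduced; a Gaussian mixture scanned over the number of components, with BIC as the selection criterion, stands in for it. The feature matrix and cluster range are placeholders, not the authors' data or settings.

```python
# Hedged sketch: choose the number of spine clusters by minimizing BIC.
# `features` would be HOG/DNSM/morphological feature vectors, one row per spine.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_with_bic(features, k_range=range(1, 11), seed=0):
    best_k, best_bic, best_labels = None, np.inf, None
    for k in k_range:
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(features)
        bic = gmm.bic(features)  # Bayesian information criterion
        if bic < best_bic:
            best_k, best_bic = k, bic
            best_labels = gmm.predict(features)
    return best_k, best_labels

# e.g. k, labels = cluster_with_bic(np.random.default_rng(0).normal(size=(200, 16)))
```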
Dendritic spines are one of the key functional components of neurons. Their morphological changes are correlated with neuronal activity. Neuroscientists study spine shape variations to understand their relation with neuronal activity. Currently this analysis is performed manually; the availability of reliable automated tools would assist neuroscientists and accelerate this research. Previously, morphological feature based spine analysis has been performed and reported in the literature. In this paper, we explore the idea of using and comparing manifold learning techniques for classifying spine shapes. We start with automatically segmented data and construct our feature vector by stacking the columns of each image into a single vector. We then apply unsupervised manifold learning algorithms and compare their performance in the context of dendritic spine classification. We achieved 85.95% accuracy on a dataset of 242 automatically segmented mushroom and stubby spines. We also observed that ISOMAP implicitly computes prominent features suitable for classification purposes.
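The following is a hedged sketch of the kind of pipeline described above: an unsupervised Isomap embedding of vectorized spine images followed by a simple classifier on the embedded coordinates. The image matrix, label vector, embedding dimension, and classifier choice are placeholders, not the authors' implementation.

```python
# Hedged sketch: Isomap embedding of vectorized spine images + k-NN classifier.
# `images` is (n_spines x n_pixels); `labels` holds mushroom/stubby labels.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def classify_spines(images, labels, n_components=10, n_neighbors=5, seed=0):
    coords = Isomap(n_components=n_components).fit_transform(images)
    Xtr, Xte, ytr, yte = train_test_split(coords, labels, test_size=0.3,
                                          random_state=seed, stratify=labels)
    clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(Xtr, ytr)
    return clf.score(Xte, yte)  # held-out classification accuracy
```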
Analysis of dendritic spines is an essential task to understand the functional behavior of neurons. Their shape variations are known to be closely linked with neuronal activities. Spine shape analysis, in particular, can assist neuroscientists to identify this relationship. A novel shape representation has been proposed recently, called Disjunctive Normal Shape Models (DNSM). DNSM is a parametric shape representation and has proven to be successful in several segmentation problems. In this paper, we apply this parametric shape representation as a feature extraction algorithm. Further, we propose a kernel density estimation (KDE) based classification approach for dendritic spine classification. We evaluate our proposed approach on a data set of 242 spines, and observe that it outperforms the classical morphological feature based approach for spine classification. Our probabilistic framework also provides a way to examine the separability of spine shape classes in the likelihood ratio space, which leads to further insights about the nature of the shape analysis problem in this context.
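A minimal sketch of a KDE likelihood-ratio classifier for two spine classes is given below. It assumes the feature extraction (e.g., DNSM parameters) has already produced low-dimensional vectors per spine; the class names and API are illustrative, not the authors' code.

```python
# Hedged sketch: per-class kernel density estimates and a likelihood-ratio rule.
import numpy as np
from scipy.stats import gaussian_kde

class KDEClassifier:
    def fit(self, X0, X1):
        # One KDE per class; gaussian_kde expects data of shape (dims x samples).
        self.kde0 = gaussian_kde(X0.T)
        self.kde1 = gaussian_kde(X1.T)
        return self

    def log_likelihood_ratio(self, X):
        # Positive values favor class 1, negative values favor class 0.
        return self.kde1.logpdf(X.T) - self.kde0.logpdf(X.T)

    def predict(self, X):
        return (self.log_likelihood_ratio(X) > 0).astype(int)
```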
Samuel Gratzl, Alexander Lex, Nils Gehlenborg, Nicola Cosgrove, Marc Streit.
From Visual Exploration to Storytelling and Back Again, In Computer Graphics Forum, Vol. 35, No. 3, pp. 491--500. June, 2016.
The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals.
C. Gritton, M. Berzins.
Improving accuracy in the MPM method using a null space filter, In Computational Particle Mechanics, pp. 1--12. 2016.
The material point method (MPM) has been very successful in providing solutions to many challenging problems involving large deformations. Nevertheless, there are some important issues that remain to be resolved with regard to its analysis. One key challenge applies to both MPM and particle-in-cell (PIC) methods and arises from the difference between the number of particles and the number of nodal grid points to which the particles are mapped. This difference between the number of particles and the number of grid points gives rise to a non-trivial null space of the linear operator that maps particle values onto nodal grid point values. In other words, there are non-zero particle values that, when mapped to the grid nodes, result in a zero value there. Moreover, when the nodal values at the grid points are mapped back to particles, part of those particle values may be in that same null space. Given positive mapping weights from particles to nodes, such null space values are oscillatory in nature. While this problem has been observed almost since the beginning of PIC methods, there are still elements of it that are problematic today, as well as methods that transcend it. The null space may be viewed as being connected to the ringing instability identified by Brackbill for PIC methods. It will be shown that it is possible to remove these null space values from the solution using a null space filter. This filter improves the accuracy of the MPM method using an approach that is based upon a local singular value decomposition (SVD) calculation. This local SVD approach is compared against the global SVD approach previously considered by the authors and against a recent MPM method by Zhang and colleagues.
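To make the null-space idea concrete, here is a hedged sketch of the global projection version of such a filter: it removes from a particle field the component that the particle-to-grid mapping cannot see. The paper's local, per-cell SVD variant is not reproduced, and the mapping matrix S is a placeholder.

```python
# Hedged sketch of a (global) null-space filter for a particle-to-grid mapping.
# S is an (n_nodes x n_particles) matrix of positive weights; p is a particle field.
import numpy as np

def null_space_filter(S, p, tol=1e-12):
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    r = np.sum(s > tol * s[0])   # numerical rank of the mapping
    Vr = Vt[:r]                  # orthonormal basis of the row space of S
    return Vr.T @ (Vr @ p)       # project out the null-space component

# The filtered field maps to the grid exactly as the original one does:
# np.allclose(S @ p, S @ null_space_filter(S, p)) -> True
```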
A.V.P. Grosset, A. Knoll, C.D. Hansen. Dynamically Scheduled Region-Based Image Compositing, In Eurographics Symposium on Parallel Graphics and Visualization, June, 2016.
Algorithms for sort-last parallel volume rendering on large distributed memory machines usually divide a dataset equally across all nodes for rendering. Depending on the features that a user wants to see in a dataset, all the nodes will rarely finish rendering at the same time. Existing compositing algorithms do not often take this into consideration, which can lead to significant delays when nodes that are compositing wait for other nodes that are still rendering. In this paper, we present an image compositing algorithm that uses spatial and temporal awareness to dynamically schedule the exchange of regions in an image and progressively composite images as they become available. Running on the Edison supercomputer at NERSC, we show that a scheduler-based algorithm with awareness of the spatial contribution from each rendering node can outperform traditional image compositing algorithms.
Modern supercomputers have thousands of nodes, each with CPUs and/or GPUs capable of several teraflops. However, the network connecting these nodes is relatively slow, on the order of gigabits per second. For time-critical workloads such as interactive visualization, the bottleneck is no longer computation but communication. In this paper, we present an image compositing algorithm that works on both CPU-only and GPU-accelerated supercomputers and focuses on communication avoidance and overlapping communication with computation at the expense of evenly balancing the workload. The algorithm has three stages: a parallel direct send stage, followed by a tree compositing stage and a gather stage. We compare our algorithm with radix-k and binary-swap from the IceT library in a hybrid OpenMP/MPI setting on the Stampede and Edison supercomputers, show strong scaling results and explain how we generally achieve better performance than these two algorithms. We developed a GPU-based image compositing algorithm where we use CUDA kernels for computation and GPU Direct RDMA for inter-node GPU communication. We tested the algorithm on the Piz Daint GPU-accelerated supercomputer and show that we achieve performance on par with CPUs. Lastly, we introduce a workflow in which both rendering and compositing are done on the GPU.
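The per-pixel operation that all of these sort-last compositing schemes (direct send, tree compositing, binary-swap, radix-k) ultimately perform is the "over" operator on depth-ordered partial images. The following is a hedged serial sketch of that operator on premultiplied RGBA arrays; the MPI/GPU exchange and the scheduling logic discussed above are deliberately omitted and the image layout is an assumption.

```python
# Hedged sketch: back-to-front accumulation of front-to-back ordered partial
# images using the premultiplied-alpha "over" operator. Images are (H x W x 4).
import numpy as np

def over(front, back):
    """Composite premultiplied-alpha image `front` over `back`."""
    alpha_f = front[..., 3:4]
    return front + (1.0 - alpha_f) * back

def composite(partials_front_to_back):
    out = partials_front_to_back[0].copy()
    for img in partials_front_to_back[1:]:
        out = over(out, img)
    return out
```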
Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns, as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects from methods reported to date.
We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, on individual electrode currents, and on the total injected current to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns.
Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns.
The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives insight into the relationship between different objective criteria and optimized stimulus patterns. In addition, the analysis of the interaction between optimized stimulus patterns and safety constraint bounds suggests that more precise current localization in the ROI, with improved safety criterion, may be achieved by careful selection of the constraint bounds.
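For illustration, here is a hedged sketch of the kind of convex program described above, written with cvxpy. The matrices A_roi and A_brain (mapping electrode currents to current density in the ROI and at sampled brain points), the target direction d, and all constraint bounds are placeholders; the constraint set mirrors the categories named in the abstract but is not the authors' exact formulation.

```python
# Hedged sketch: directional targeting for dense-array tDCS as a convex program.
import cvxpy as cp
import numpy as np

def optimal_pattern(A_roi, A_brain, d, i_max=1e-3, i_total=2e-3, p_max=1e-6):
    n = A_roi.shape[1]
    i = cp.Variable(n)                        # current injected at each electrode
    objective = cp.Maximize(d @ (A_roi @ i))  # current density along d in the ROI
    constraints = [
        cp.sum(i) == 0,                       # injected currents must sum to zero
        cp.abs(i) <= i_max,                   # per-electrode current bound
        cp.norm(i, 1) <= 2 * i_total,         # total injected current bound
        cp.sum_squares(A_brain @ i) <= p_max, # current power in the brain
    ]
    cp.Problem(objective, constraints).solve()
    return i.value
```

Because the objective is linear and every constraint is convex, any solver-reported optimum is global, which is what makes the computed stimulus patterns unique and reproducible.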
B. Hollister, G. Duffley, C. Butson, C.R. Johnson. Visualization for Understanding Uncertainty in Activation Volumes for Deep Brain Stimulation, In Eurographics Conference on Visualization, Edited by K.L. Ma, G. Santucci, and J. van Wijk, 2016.
We have created the Neurostimulation Uncertainty Viewer (nuView or nView) tool for exploring data arising from deep brain stimulation (DBS). Simulated volumes of tissue activated (VTAs), using clinical electrode placements, are recorded along with patient outcomes on the Unified Parkinson's disease rating scale (UPDRS). The data are volumetric and sparse, with multi-value patient results for each activated voxel in the simulation. nView provides a collection of visual methods to explore the activated tissue and to enhance understanding of electrode usage for improved therapy with DBS.
Modeling thermal radiation is computationally challenging in parallel due to its all-to-all physical and resulting computational connectivity, and it is also the dominant mode of heat transfer in practical applications such as next-generation clean coal boilers, which are modeled by the Uintah framework. However, a direct all-to-all treatment of radiation is prohibitively expensive on large computer systems, whether homogeneous or heterogeneous. DOE Titan and the planned DOE Summit and Sierra machines are examples of current and emerging GPU-based heterogeneous systems where the increased processing capability of GPUs over CPUs exacerbates this problem. These systems require that computational frameworks like Uintah leverage an arbitrary number of on-node GPUs, while simultaneously utilizing thousands of GPUs within a single simulation. We show that radiative heat transfer problems can be made to scale within Uintah on heterogeneous systems through a combination of reverse Monte Carlo ray tracing (RMCRT) techniques and AMR, which reduce the amount of global communication. In particular, significant Uintah infrastructure changes, including a novel lock- and contention-free, thread-scalable data structure for managing MPI communication requests and improved memory allocation strategies, were necessary to achieve excellent strong scaling results to 16,384 GPUs on Titan.
S.K. Iyer, T. Tasdizen, D. Likhite, E.V.R. DiBella.
Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI, In Medical Physics, Vol. 43, No. 4, Wiley-Blackwell, pp. 1969--1981. March, 2016.
Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data.
The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and on ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints.
Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR.
The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly.
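To illustrate the variable-splitting and shrinkage structure that split Bregman methods such as SMART build on, the following is a hedged sketch of the classic Goldstein-Osher split Bregman iteration for anisotropic TV denoising of a single 2D image. It is not the SMART algorithm: the multicoil data-fidelity term, temporal TV, and iterative reweighting are omitted, and the regularization parameters are placeholders.

```python
# Hedged sketch: split Bregman iteration for anisotropic 2D TV denoising.
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, mu=20.0, lam=10.0, n_iter=50):
    """Denoise a float image f by min ||Du||_1 + (mu/2)||u - f||^2."""
    ny, nx = f.shape
    # Fourier-domain eigenvalues of the periodic difference operators.
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
    denom = mu + lam * (wy[:, None] + wx[None, :])

    Dx = lambda v: np.roll(v, -1, axis=1) - v   # forward differences (periodic)
    Dy = lambda v: np.roll(v, -1, axis=0) - v
    DxT = lambda v: np.roll(v, 1, axis=1) - v   # their adjoints
    DyT = lambda v: np.roll(v, 1, axis=0) - v

    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    for _ in range(n_iter):
        # Quadratic u-subproblem, solved exactly via FFT diagonalization.
        rhs = mu * f + lam * (DxT(dx - bx) + DyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        ux, uy = Dx(u), Dy(u)
        # Shrinkage step for the split gradient variables.
        dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
        # Bregman variable update.
        bx, by = bx + ux - dx, by + uy - dy
    return u
```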
Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5–15 min to acquire an undersampled (R = 1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor ~ R = 1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch based collaborative filtering technique tested with acceleration factor R ~ 3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R ~ 1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of percentage of scar/fibrosis is not known.
We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error, and peak signal to noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R ~ 3.5 gave good 3D LGE images of the LA wall using the constrained SB method with a 3D TV constraint. This corresponds to reducing the scan time by half, compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods.
The quantification of local surface morphology in the human cortex is important for examining population differences as well as developmental changes in neurodegenerative or neurodevelopmental disorders. We propose a novel cortical shape measure, referred to as the 'shape complexity index' (SCI), that represents localized shape complexity as the difference between the observed distribution of local surface topology, as quantified by the shape index (SI) measure, and its best-fitting simple topological model within a given neighborhood. We apply a relatively small, adaptive geodesic kernel to calculate the SCI. Due to the small size of the kernel, the proposed SCI measure captures fine differences of cortical shape. With this novel cortical feature, we aim to capture comparatively small local surface changes that reflect a) the widening versus deepening of sulcal and gyral regions, as well as b) the emergence and development of secondary and tertiary sulci. Current cortical shape measures, such as the gyrification index (GI) or intrinsic curvature measures, investigate the cortical surface at a different scale and are less well suited to capture these particular cortical surface changes. In our experiments, the proposed SCI demonstrates higher complexity in the gyral/sulcal wall regions, lower complexity in wider gyral ridges, and lowest complexity in wider sulcal fundus regions. In early postnatal brain development, our experiments show that SCI reveals a pattern of increased cortical shape complexity with age, as well as sexual dimorphisms in the insula, middle cingulate, parieto-occipital sulcal, and Broca's regions. Overall, sex differences were greatest at 6 months of age and were reduced at 24 months, with the difference pattern switching from higher complexity in males at 6 months to higher complexity in females at 24 months. This is the first study of longitudinal cortical complexity maturation and sex differences in the early postnatal period from 6 to 24 months of age with fine-scale cortical shape measures. These results provide information that complements previous studies of gyrification index in early brain development.
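For reference, a hedged sketch of the per-vertex shape index (Koenderink and van Doorn), the local quantity whose neighborhood distribution the SCI summarizes, is given below. One common formulation is shown; sign conventions vary with surface orientation, and the geodesic-kernel model-fitting step of the SCI itself is not reproduced.

```python
# Hedged sketch: shape index from principal curvatures k1 >= k2 per vertex.
import numpy as np

def shape_index(k1, k2):
    # arctan2 handles the umbilic case k1 == k2; output lies in [-1, 1],
    # spanning cup-like (-1) to cap-like (+1) local shapes in this convention.
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```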
M. Larsen, K. Moreland, C.R. Johnson, H. Childs. Optimizing Multi-Image Sort-Last Parallel Rendering, In Symposium on Large Data Analysis and Visualization, IEEE, 2016.
Sort-last parallel rendering can be improved by considering the rendering of multiple images at a time. Most parallel rendering algorithms consider the generation of only a single image. This makes sense when performing interactive rendering where the parameters of each rendering are not known until the previous rendering completes. However, in situ visualization often generates multiple images that do not need to be created sequentially. In this paper we present a simple and effective approach to improving parallel image generation throughput by amortizing the load and overhead among multiple image renders. Additionally, we validate our approach by conducting a performance study exploring the achievable speed-ups in a variety of image-based in situ use cases and rendering workloads. On average, our approach shows a 1.5- to 3.7-fold improvement in performance, and in some cases, shows a 10-fold improvement.
This paper investigates one of the most fundamental computer vision problems: image segmentation. We propose a supervised hierarchical approach to object-independent image segmentation. Starting with oversegmented superpixels, we use a tree structure to represent the hierarchy of region merging, by which we reduce the problem of segmenting image regions to finding a set of label assignments to tree nodes. We formulate the tree structure as a constrained conditional model to associate region merging with likelihoods predicted using an ensemble boundary classifier. Final segmentations can then be inferred by finding globally optimal solutions to the model efficiently. We also present an iterative training and testing algorithm that generates various tree structures and combines them to emphasize accurate boundaries by segmentation accumulation. Experimental results and comparisons with other recent methods on six public datasets demonstrate that our approach achieves state-of-the-art region accuracy and is competitive in image segmentation without semantic priors.
SSHMT: Semi-supervised Hierarchical Merge Tree for Electron Microscopy Image Segmentation, In Lecture Notes in Computer Science, Vol. 9905, Springer International Publishing, pp. 144--159. 2016.
Region-based methods have proven necessary for improving segmentation accuracy of neuronal structures in electron microscopy (EM) images. Most region-based segmentation methods use a scoring function to determine region merging. Such functions are usually learned with supervised algorithms that demand considerable ground truth data, which are costly to collect. We propose a semi-supervised approach that reduces this demand. Based on a merge tree structure, we develop a differentiable unsupervised loss term that enforces consistent predictions from the learned function. We then propose a Bayesian model that combines the supervised and the unsupervised information for probabilistic learning. The experimental results on three EM data sets demonstrate that by using a subset of only 3% to 7% of the entire ground truth data, our approach consistently performs close to the state-of-the-art supervised method with the full labeled data set, and significantly outperforms the supervised method with the same labeled subset.