
SCI Publications

2020


C. Ly, C. Vachet, I. Schwerdt, E. Abbott, A. Brenkmann, L.W. McDonald, T. Tasdizen. “Determining uranium ore concentrates and their calcination products via image classification of multiple magnifications,” In Journal of Nuclear Materials, 2020.

ABSTRACT

Many tools, such as mass spectrometry, X-ray diffraction, X-ray fluorescence, ion chromatography, etc., are currently available to scientists investigating interdicted nuclear material. These tools provide an analysis of physical, chemical, or isotopic characteristics of the seized material to identify its origin. In this study, a novel technique that characterizes physical attributes is proposed to provide insight into the processing route of unknown uranium ore concentrates (UOCs) and their calcination products. In particular, this study focuses on the characteristics of the surface structure captured in scanning electron microscopy (SEM) images at different magnification levels. Twelve common commercial processing routes of UOCs and their calcination products are investigated. Multiple-input single-output (MISO) convolution neural networks (CNNs) are implemented to differentiate the processing routes. The proposed technique can determine the processing route of a given sample in under a second running on a graphics processing unit (GPU) with an accuracy of more than 95%. The accuracy and speed of this proposed technique enable nuclear scientists to provide the preliminary identification results of interdicted material in a short time period. Furthermore, this proposed technique uses a predetermined set of magnifications, which in turn eliminates the human bias in selecting the magnification during the image acquisition process.
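
For readers unfamiliar with the multiple-input single-output arrangement described above, a minimal sketch follows. The number of branches, layer sizes, and branch design are hypothetical assumptions; only the twelve-route output is taken from the abstract, and this is not the authors' architecture.

```python
# A sketch of a multiple-input single-output (MISO) CNN: one convolutional branch
# per magnification level, concatenated into a shared classification head.
# Branch depth, channel counts, and the number of branches are hypothetical.
import torch
import torch.nn as nn

class MISOClassifier(nn.Module):
    def __init__(self, n_magnifications=3, n_classes=12):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            for _ in range(n_magnifications)
        ])
        self.head = nn.Linear(32 * n_magnifications, n_classes)

    def forward(self, images):
        # images: list of tensors, one (N, 1, H, W) batch per magnification level
        feats = [branch(x) for branch, x in zip(self.branches, images)]
        return self.head(torch.cat(feats, dim=1))
```

Each SEM image of a sample, acquired at one of the predetermined magnifications, is fed to its own branch, so the classifier sees all magnification levels jointly.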


2019


A. B. Hanson, R. N. Lee, C. Vachet, I. J. Schwerdt, T. Tasdizen, L. W. McDonald IV. “Quantifying Impurity Effects on the Surface Morphology of α-U3O8,” In Analytical Chemistry, 2019.
DOI: 10.1021/acs.analchem.9b02013

ABSTRACT

The morphological effect of impurities on α-U3O8 has been investigated. This study provides the first evidence that the presence of impurities can alter nuclear material morphology, and these changes can be quantified to aid in revealing processing history. Four elements: Ca, Mg, V, and Zr were implemented in the uranyl peroxide synthesis route and studied individually within the α-U3O8. Six total replicates were synthesized, and replicates 1–3 were filtered and washed with Millipore water (18.2 MΩ) to remove any residual nitrates. Replicates 4–6 were filtered but not washed to determine the amount of impurities removed during washing. Inductively coupled plasma mass spectrometry (ICP-MS) was employed at key points during the synthesis to quantify incorporation of the impurity. Each sample was characterized using powder X-ray diffraction (p-XRD), high-resolution scanning electron microscopy (HRSEM), and SEM with energy dispersive X-ray spectroscopy (SEM-EDS). p-XRD was utilized to evaluate any crystallographic changes due to the impurities; HRSEM imagery was analyzed with Morphological Analysis for MAterials (MAMA) software and machine learning classification for quantification of the morphology; and SEM-EDS was utilized to locate the impurity within the α-U3O8. All samples were found to be quantifiably distinguishable, further demonstrating the utility of quantitative morphology as a signature for the processing history of nuclear material.



S. T. Heffernan, N. Ly, B. J. Mower, C. Vachet, I. J. Schwerdt, T. Tasdizen, L. W. McDonald IV. “Identifying surface morphological characteristics to differentiate between mixtures of U3O8 synthesized from ammonium diuranate and uranyl peroxide,” In Radiochimica Acta, 2019.

ABSTRACT

In the present study, surface morphological differences of mixtures of triuranium octoxide (U3O8), synthesized from uranyl peroxide (UO4) and ammonium diuranate (ADU), were investigated. The purity of each sample was verified using powder X-ray diffractometry (p-XRD), and scanning electron microscopy (SEM) images were collected to identify unique morphological features. The U3O8 from ADU and UO4 was found to be unique. Qualitatively, both particles have similar features being primarily circular in shape. Using the morphological analysis of materials (MAMA) software, particle shape and size were quantified. UO4 was found to produce U3O8 particles three times the area of those produced from ADU. With the starting morphologies quantified, U3O8 samples from ADU and UO4 were physically mixed in known quantities. SEM images were collected of the mixed samples, and the MAMA software was used to quantify particle attributes. As U3O8 particles from ADU were unique from UO4, the composition of the mixtures could be quantified using SEM imaging coupled with particle analysis. This provides a novel means of quantifying processing histories of mixtures of uranium oxides. Machine learning was also used to help further quantify characteristics in the image database through direct classification and particle segmentation using deep learning techniques based on Convolutional Neural Networks (CNN). It demonstrates that these techniques can distinguish the mixtures with high accuracy as well as showing significant differences in morphology between the mixtures. Results from this study demonstrate the power of quantitative morphological analysis for determining the processing history of nuclear materials.



R.B. Lanfredi, J.D. Schroeder, C. Vachet, T. Tasdizen. “Adversarial regression training for visualizing the progression of chronic obstructive pulmonary disease with chest x-rays,” In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2019 (preprint on arXiv).

ABSTRACT

Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustworthiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor through creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide visualization for how the image would appear at different stages of the disease. We analyze our method in a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference of two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at https://github.com/ricbl/vrgan.
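
A minimal sketch of the two opposing updates described above is given below. The generator G, regressor R, L1 losses, and the effect-map penalty weight are illustrative assumptions; this is not the authors' implementation (which is available at the link above).

```python
# Sketch: the generator G produces a disease effect map meant to shift the
# regressor R's output toward a target severity, while R is trained to recover
# the original severity from the modified image.
import torch.nn.functional as F

def train_step(G, R, opt_G, opt_R, x, y, y_target):
    # Generator update: make R(x + effect_map) predict the target severity,
    # while keeping the effect map small (hypothetical sparsity weight 0.01).
    effect_map = G(x, y_target - y)
    x_mod = x + effect_map
    loss_G = F.l1_loss(R(x_mod), y_target) + 0.01 * effect_map.abs().mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Regressor update: predict the true severity on real images and the
    # *original* severity on modified images, as the abstract describes.
    loss_R = F.l1_loss(R(x), y) + F.l1_loss(R(x_mod.detach()), y)
    opt_R.zero_grad(); loss_R.backward(); opt_R.step()
    return loss_G.item(), loss_R.item()
```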


2018


D. Ayyagari, N. Ramesh, D. Yatsenko, T. Tasdizen, C. Atria. “Image reconstruction using priors from deep learning,” In Medical Imaging 2018: Image Processing, SPIE, March, 2018.

ABSTRACT

Tomosynthesis, i.e., reconstruction of 3D volumes using projections from a limited perspective, is a classical inverse, ill-posed, or under-constrained problem. Data insufficiency leads to reconstruction artifacts that vary in severity depending on the particular problem, the reconstruction method and also on the object being imaged. Machine learning has been used successfully in tomographic problems where data is insufficient, but the challenge with machine learning is that it introduces bias from the learning dataset. A novel framework to improve the quality of the tomosynthesis reconstruction that limits the learning dataset bias by maintaining consistency with the observed data is proposed. Convolutional Neural Networks (CNN) are embedded as regularizers in the reconstruction process to introduce the expected features and characteristics of the likely imaged object. The minimization of the objective function keeps the solution consistent with the observations and limits the bias introduced by the machine learning regularizers, improving the quality of the reconstruction. The proposed method has been developed and studied in the specific problem of Cone Beam Tomosynthesis Fluoroscopy (CBT-fluoroscopy), but it is a general framework that can be applied to any image reconstruction problem that is limited by data insufficiency.
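
A minimal sketch of the general idea follows, under the assumption of callable forward/adjoint projectors A and At and a pre-trained CNN denoiser; the objective, step size, and weighting are illustrative, not the paper's exact formulation.

```python
# Sketch of reconstruction with a learned prior: gradient steps on a data-fidelity
# term plus a penalty pulling the image toward a CNN's estimate of a plausible image.
# `A` (forward projector), `At` (adjoint/back-projector) and `denoiser` (a trained
# CNN) are assumed to be provided.
import torch

def reconstruct(b, A, At, denoiser, n_iters=50, step=1e-2, lam=0.1):
    x = At(b)                                     # back-projection as initialization
    for _ in range(n_iters):
        prior = denoiser(x).detach()              # CNN regularizer: expected image features
        grad = At(A(x) - b) + lam * (x - prior)   # data consistency + learned prior
        x = x - step * grad
    return x
```

Keeping the data-fidelity term in the objective is what limits the bias introduced by the learned regularizer, as described in the abstract.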



M. Javanmardi, T. Tasdizen. “Domain adaptation for biomedical image segmentation using adversarial training,” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, pp. 554-558. April, 2018.
DOI: 10.1109/isbi.2018.8363637

ABSTRACT

Many biomedical image analysis applications require segmentation. Convolutional neural networks (CNN) have become a promising approach to segment biomedical images; however, the accuracy of these methods is highly dependent on the training data. We focus on biomedical image segmentation in the context where there is variation between source and target datasets and ground truth for the target dataset is very limited or non-existent. We use an adversarial training approach to train CNNs to achieve good accuracy on the target domain. We use the DRIVE and STARE eye vasculature segmentation datasets and show that our approach can significantly improve results where we only use labels of one domain in training and test on the other domain. We also show improvements on membrane detection between the MICCAI 2016 CREMI challenge and ISBI 2013 EM segmentation challenge datasets.
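
The sketch below illustrates one common form of such adversarial training, with hypothetical segmentation network S and domain discriminator D; it shows the source-supervised plus target-adversarial loss structure the abstract describes, not the authors' specific architecture.

```python
# Sketch of adversarial domain adaptation for segmentation: the segmenter is
# supervised on the labeled source domain and simultaneously trained so that the
# discriminator cannot tell target-domain predictions from source-domain ones.
import torch
import torch.nn.functional as F

def train_step(S, D, opt_S, opt_D, x_src, y_src, x_tgt, lam=0.1):
    # Discriminator update: which domain did a prediction map come from?
    with torch.no_grad():
        p_src, p_tgt = S(x_src), S(x_tgt)
    d_src, d_tgt = D(p_src), D(p_tgt)
    loss_D = F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) + \
             F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Segmenter update: supervised loss on source labels, adversarial loss on target.
    p_src, p_tgt = S(x_src), S(x_tgt)
    d_tgt = D(p_tgt)
    loss_S = F.cross_entropy(p_src, y_src) + \
             lam * F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
    opt_S.zero_grad(); loss_S.backward(); opt_S.step()
    return loss_S.item(), loss_D.item()
```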



F. Mesadi, E. Erdil, M. Cetin, T. Tasdizen. “Image segmentation using disjunctive normal Bayesian shape and appearance models,” In IEEE Transactions on Medical Imaging, Vol. 37, No. 1, IEEE, pp. 293--305. Jan, 2018.
DOI: 10.1109/tmi.2017.2756929

ABSTRACT

The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. For instance, most active shape and appearance models require landmark points and assume unimodal shape and appearance distributions, and the level set representation does not support construction of local priors. In this paper, we present novel appearance and shape models for image segmentation based on a differentiable implicit parametric shape representation called a disjunctive normal shape model (DNSM). The DNSM is formed by the disjunction of polytopes, which themselves are formed by the conjunctions of half-spaces. The DNSM's parametric nature allows the use of powerful local prior statistics, and its implicit nature removes the need to use landmarks and easily handles topological changes. In a Bayesian inference framework, we model arbitrary shape and appearance distributions using nonparametric density estimations, at any local scale. The proposed local shape prior results in accurate segmentation even when very few training shapes are available, because the method generates a rich set of shape variations by locally combining training samples. We demonstrate the performance of the framework by applying it to both 2-D and 3-D data sets with emphasis on biomedical image segmentation applications.
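
In one differentiable relaxation consistent with this description (the notation is mine, not necessarily the paper's), logistic sigmoids approximate half-space indicators, their product approximates the conjunction forming a polytope, and the complement of a product of complements approximates the disjunction:

f(\mathbf{x}) \;=\; 1 - \prod_{i=1}^{N}\Bigl(1 - \prod_{j=1}^{M} \sigma\bigl(\mathbf{w}_{ij}^{\top}\tilde{\mathbf{x}}\bigr)\Bigr), \qquad \tilde{\mathbf{x}} = (\mathbf{x}, 1), \quad \sigma(t) = \frac{1}{1+e^{-t}},

so that f(\mathbf{x}) \approx 1 inside the union of polytopes and \approx 0 outside, with the shape everywhere differentiable in the parameters \mathbf{w}_{ij}.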



Q.C. Nguyen, M. Sajjadi, M. McCullough, M. Pham, T.T. Nguyen, W. Yu, H. Meng, M. Wen, F. Li, K.R. Smith, K. Brunisholz, T. Tasdizen. “Neighbourhood looking glass: 360° automated characterisation of the built environment for neighbourhood effects research,” In Journal of Epidemiology and Community Health, BMJ, Jan, 2018.
DOI: 10.1136/jech-2017-209456

ABSTRACT

Background
Neighbourhood quality has been connected with an array of health issues, but neighbourhood research has been limited by the lack of methods to characterise large geographical areas. This study uses innovative computer vision methods and a new big data source of street view images to automatically characterise neighbourhood built environments.

Methods
A total of 430 000 images were obtained using Google's Street View Image API for Salt Lake City, Chicago and Charleston. Convolutional neural networks were used to create indicators of street greenness, crosswalks and building type. We implemented log Poisson regression models to estimate associations between built environment features and individual prevalence of obesity and diabetes in Salt Lake City, controlling for individual-level and zip code-level predisposing characteristics.

Results
Computer vision models had an accuracy of 86%–93% compared with manual annotations. Charleston had the highest percentage of green streets (79%), while Chicago had the highest percentage of crosswalks (23%) and commercial buildings/apartments (59%). Built environment characteristics were categorised into tertiles, with the highest tertile serving as the referent group. Individuals living in zip codes with the most green streets, crosswalks and commercial buildings/apartments had relative obesity prevalences that were 25%–28% lower and relative diabetes prevalences that were 12%–18% lower than individuals living in zip codes with the least abundance of these neighbourhood features.

Conclusion
Neighbourhood conditions may influence chronic disease outcomes. Google Street View images represent an underused data resource for the construction of built environment features.



N. Ramesh, T. Tasdizen. “Semi-supervised learning for cell tracking in microscopy images,” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, April, 2018.

ABSTRACT

This paper discusses an algorithm for semi-supervised learning to predict cell division and motion in microscopy images. The cells for tracking are detected using extremal region selection and are depicted using a graphical representation. The supervised loss minimizes the error in predictions for the division and move classifiers. The unsupervised loss constrains the incoming links for every detection such that only one of the links is active. Similarly, for the outgoing links, we enforce at most two links to be active. The supervised and unsupervised losses are embedded in a Bayesian framework for probabilistic learning. The classifier predictions are used to model flow variables for every edge in the graph. The cell lineage problem is solved by formulating it as an energy minimization problem with constraints using integer linear programming. The unsupervised loss adds a significant improvement in the prediction of the division classifier.
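
As a concrete illustration of the lineage step, the sketch below solves a toy instance of the linking problem with binary flow variables, at most one active incoming link and at most two active outgoing links per detection. The graph, link scores, and the use of SciPy's milp solver are my own assumptions, not the paper's setup.

```python
# Toy lineage linking: binary flow variable per candidate link, constrained so each
# detection has <= 1 incoming link and <= 2 outgoing links (cell division).
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

edges = [(0, 2), (0, 3), (1, 3), (1, 4)]        # (parent detection, child detection)
score = np.array([0.9, 0.1, 0.6, 0.7])          # hypothetical classifier link scores
n_det = 5

A_in = np.zeros((n_det, len(edges)))
A_out = np.zeros((n_det, len(edges)))
for j, (u, v) in enumerate(edges):
    A_out[u, j] = 1.0                            # link j leaves detection u
    A_in[v, j] = 1.0                             # link j enters detection v

res = milp(
    c=-(score - 0.5),                            # reward confident links
    constraints=[LinearConstraint(A_in, 0, 1),   # at most 1 incoming link
                 LinearConstraint(A_out, 0, 2)], # at most 2 outgoing links
    integrality=np.ones(len(edges)),
    bounds=Bounds(0, 1),
)
print("active links:", [edges[j] for j, xj in enumerate(res.x) if xj > 0.5])
```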



I. J. Schwerdt, A. Brenkmann, S. Martinson, B. D. Albrecht, S. Heffernan, M. R. Klosterman, T. Kirkham, T. Tasdizen, L. W. McDonald IV. “Nuclear proliferomics: A new field of study to identify signatures of nuclear materials as demonstrated on α-UO3,” In Talanta, Vol. 186, Elsevier BV, pp. 433--444. Aug, 2018.
DOI: 10.1016/j.talanta.2018.04.092

ABSTRACT

The use of a limited set of signatures in nuclear forensics and nuclear safeguards may reduce the discriminating power for identifying unknown nuclear materials, or for verifying processing at existing facilities. Nuclear proliferomics is a proposed new field of study that advocates for the acquisition of large databases of nuclear material properties from a variety of analytical techniques. As demonstrated on a common uranium trioxide polymorph, α-UO3, in this paper, nuclear proliferomics increases the ability to improve confidence in identifying the processing history of nuclear materials. Specifically, α-UO3 was investigated from the calcination of unwashed uranyl peroxide at 350, 400, 450, 500, and 550 °C in air. Scanning electron microscopy (SEM) images were acquired of the surface morphology, and distinct qualitative differences are presented between unwashed and washed uranyl peroxide, as well as the calcination products from the unwashed uranyl peroxide at the investigated temperatures. Differential scanning calorimetry (DSC), UV–Vis spectrophotometry, powder X-ray diffraction (p-XRD), and thermogravimetric analysis-mass spectrometry (TGA-MS) were used to understand the source of these morphological differences as a function of calcination temperature. Additionally, the SEM images were manually segmented using Morphological Analysis for MAterials (MAMA) software to identify quantifiable differences in morphology for three different surface features present on the unwashed uranyl peroxide calcination products. No single quantifiable signature was sufficient to discern all calcination temperatures with a high degree of confidence; therefore, advanced statistical analysis was performed to allow the combination of a number of quantitative signatures, with their associated uncertainties, to allow for complete discernment by calcination history. Furthermore, machine learning was applied to the acquired SEM images to demonstrate automated discernment with at least 89% accuracy.



T. Tasdizen, M. Sajjadi, M. Javanmardi, N. Ramesh. “Improving the robustness of convolutional networks to appearance variability in biomedical images,” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, April, 2018.
DOI: 10.1109/isbi.2018.8363636

ABSTRACT

While convolutional neural networks (CNN) produce state-of-the-art results in many applications including biomedical image analysis, they are not robust to variability in the data that is not well represented by the training set. An important source of variability in biomedical images is the appearance of objects such as contrast and texture due to different imaging settings. We introduce the neighborhood similarity layer (NSL) which can be used in a CNN to improve robustness to changes in the appearance of objects that are not well represented by the training data. The proposed NSL transforms its input feature map at a given pixel by computing its similarity to the surrounding neighborhood. This transformation is spatially varying, hence not a convolution. It is differentiable; therefore, networks including the proposed layer can be trained in an end-to-end manner. We demonstrate the advantages of the NSL for the vasculature segmentation and cell detection problems.
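
The sketch below is one plausible realization of such a layer (my own approximation, not the authors' exact formulation): for every pixel it outputs the cosine similarity between that pixel's feature vector and each feature vector in its k × k neighborhood.

```python
# A neighborhood-similarity-style layer: the output at each pixel is the cosine
# similarity between that pixel's feature vector and every feature vector in its
# k x k neighborhood. This transform is spatially varying and differentiable,
# not a convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborhoodSimilarity(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.k = kernel_size

    def forward(self, x):                                   # x: (N, C, H, W)
        n, c, h, w = x.shape
        pad = self.k // 2
        # All k*k neighborhood feature vectors for every pixel: (N, C, k*k, H*W)
        patches = F.unfold(x, self.k, padding=pad).view(n, c, self.k * self.k, h * w)
        center = x.view(n, c, 1, h * w).expand_as(patches)
        sim = F.cosine_similarity(patches, center, dim=1)   # (N, k*k, H*W)
        return sim.view(n, self.k * self.k, h, w)
```

A layer like this could be dropped between convolutional blocks of a segmentation or detection network and trained end to end, as the abstract indicates.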


2017


E. Erdil, M.U. Ghani, L. Rada, A.O. Argunsah, D. Unay, T. Tasdizen, M. Cetin. “Nonparametric joint shape and feature priors for image segmentation,” In IEEE Transactions on Image Processing, Vol. 26, No. 11, IEEE, pp. 5312--5323. Nov, 2017.
DOI: 10.1109/tip.2017.2728185

ABSTRACT

In many image segmentation problems involving limited and low-quality data, employing statistical prior information about the shapes of the objects to be segmented can significantly improve the segmentation result. However, defining probability densities in the space of shapes is an open and challenging problem, especially if the object to be segmented comes from a shape density involving multiple modes (classes). Existing techniques in the literature estimate the underlying shape distribution by extending Parzen density estimator to the space of shapes. In these methods, the evolving curve may converge to a shape from a wrong mode of the posterior density when the observed intensities provide very little information about the object boundaries. In such scenarios, employing both shape- and class-dependent discriminative feature priors can aid the segmentation process. Such features may involve, e.g., intensity-based, textural, or geometric information about the objects to be segmented. In this paper, we propose a segmentation algorithm that uses nonparametric joint shape and feature priors constructed by Parzen density estimation. We incorporate the learned joint shape and feature prior distribution into a maximum a posteriori estimation framework for segmentation. The resulting optimization problem is solved using active contours. We present experimental results on a variety of synthetic and real data sets from several fields involving multimodal shape densities. Experimental results demonstrate the potential of the proposed method.
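
Written out, a joint Parzen estimate of the kind described might take the form (the notation and the Gaussian kernel choice are mine)

p(S, f) \;\approx\; \frac{1}{N}\sum_{i=1}^{N} k\bigl(d_S(S, S_i); \sigma_S\bigr)\, k\bigl(d_f(f, f_i); \sigma_f\bigr), \qquad k(d; \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\Bigl(-\frac{d^{2}}{2\sigma^{2}}\Bigr),

where d_S and d_f are distances in shape and feature space and (S_i, f_i) are the N training shape-feature pairs; the learned density then enters the maximum a posteriori segmentation energy as a prior term.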


2016


M. Barjatia, T. Tasdizen, B. Song, K.M. Golden. “Network modeling of Arctic melt ponds,” In Cold Regions Science and Technology, Vol. 124, Elsevier BV, pp. 40--53. April, 2016.
DOI: 10.1016/j.coldregions.2015.11.019

ABSTRACT

The recent precipitous losses of summer Arctic sea ice have outpaced the projections of most climate models. A number of efforts to improve these models have focused in part on a more accurate accounting of sea ice albedo or reflectance. In late spring and summer, the albedo of the ice pack is determined primarily by melt ponds that form on the sea ice surface. The transition of pond configurations from isolated structures to interconnected networks is critical in allowing the lateral flow of melt water toward drainage features such as large brine channels, fractures, and seal holes, which can alter the albedo by removing the melt water. Moreover, highly connected ponds can influence the formation of fractures and leads during ice break-up. Here we develop algorithmic techniques for mapping photographic images of melt ponds onto discrete conductance networks which represent the geometry and connectedness of pond configurations. The effective conductivity of the networks is computed to approximate the ease of lateral flow. We implement an image processing algorithm with mathematical morphology operations to produce a conductance matrix representation of the melt ponds. Basic clustering and edge elimination, using undirected graphs, are then used to map the melt pond connections and reduce the conductance matrix to include only direct connections. The results for images taken during different times of the year are visually inspected and the number of mislabels is used to evaluate performance.
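
The final step, computing an effective conductance from a conductance matrix, can be illustrated with the short sketch below; the example network is hypothetical, and the grounding-and-solve approach is a standard way to do this rather than necessarily the paper's exact procedure.

```python
# Effective conductance of a pond network: inject unit current at a source pond,
# extract it at a sink pond, solve the graph Laplacian for the potentials, and
# take current / voltage drop.
import numpy as np

def effective_conductance(G, source, sink):
    """G[i, j] = conductance of the direct connection between ponds i and j (0 if none)."""
    L = np.diag(G.sum(axis=1)) - G               # weighted graph Laplacian
    n = G.shape[0]
    current = np.zeros(n)
    current[source], current[sink] = 1.0, -1.0   # unit current in at source, out at sink
    keep = [i for i in range(n) if i != sink]    # ground the sink node
    v = np.zeros(n)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], current[keep])
    return 1.0 / (v[source] - v[sink])           # effective conductance = I / delta V

G = np.array([[0, 2, 1, 0],
              [2, 0, 0, 3],
              [1, 0, 0, 1],
              [0, 3, 1, 0]], dtype=float)        # hypothetical 4-pond network
print(effective_conductance(G, source=0, sink=3))
```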



M. Elwardy, T. Tasdizen, M. Cetin. “Disjunctive Normal Unsupervised LDA for P300-based Brain-Computer Interfaces,” In 2016 24th Signal Processing and Communication Application Conference (SIU), IEEE, May, 2016.
DOI: 10.1109/siu.2016.7496226

ABSTRACT

Can people use text-entry based brain-computer interface (BCI) systems and start a free spelling mode without any calibration session? Brain activities differ largely across people and across sessions for the same user. Thus, how can the text-entry system classify the desired character among the other characters in the P300-based BCI speller matrix? In this paper, we introduce a new unsupervised classifier for a P300-based BCI speller, which uses a disjunctive normal form representation to define an energy function involving a logistic sigmoid function for classification. Our proposed classifier updates its randomly initialized weights while performing classification of the P300 signals from the recorded data, exploiting knowledge of the sequence of row/column highlights. To verify the effectiveness of the proposed method, we performed an experimental analysis on data from 7 healthy subjects, collected in our laboratory. We compare the proposed unsupervised method to a baseline supervised linear discriminant analysis (LDA) classifier and demonstrate its effectiveness.



E. Erdil, M. Cetin, T. Tasdizen. “MCMC Shape Sampling for Image Segmentation with Nonparametric Shape Priors,” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, June, 2016.
DOI: 10.1109/cvpr.2016.51

ABSTRACT

Segmenting images of low quality or with missing data is a challenging problem. Integrating statistical prior information about the shapes to be segmented can improve the segmentation results significantly. Most shape-based segmentation algorithms optimize an energy functional and find a point estimate for the object to be segmented. This does not provide a measure of the degree of confidence in that result, neither does it provide a picture of other probable solutions based on the data and the priors. With a statistical view, addressing these issues would involve the problem of characterizing the posterior densities of the shapes of the objects to be segmented. For such characterization, we propose a Markov chain Monte Carlo (MCMC) sampling-based image segmentation algorithm that uses statistical shape priors. In addition to better characterization of the statistical structure of the problem, such an approach would also have the potential to address issues with getting stuck at local optima, suffered by existing shape-based segmentation methods. Our approach is able to characterize the posterior probability density in the space of shapes through its samples, and to return multiple solutions, potentially from different modes of a multimodal probability density, which would be encountered, e.g., in segmenting objects from multiple shape classes. We present promising results on a variety of data sets. We also provide an extension for segmenting shapes of objects with parts that can go through independent shape variations. This extension involves the use of local shape priors on object parts and provides robustness to limitations in shape training data size.
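
A generic Metropolis-Hastings skeleton of this sampling view is sketched below; the energy, proposal, and shape representation are left abstract and are not the paper's specific choices.

```python
# Metropolis-Hastings sampling of segmentations: propose a perturbed shape, accept
# with the usual ratio, and keep the visited shapes as posterior samples, so the
# result is a set of probable segmentations rather than one point estimate.
import numpy as np

def mh_shape_sampler(energy, propose, shape0, n_samples=1000, rng=None):
    """energy(shape): negative log posterior up to a constant; propose: symmetric perturbation."""
    rng = rng or np.random.default_rng(0)
    shape, e = shape0, energy(shape0)
    samples = []
    for _ in range(n_samples):
        candidate = propose(shape, rng)
        e_cand = energy(candidate)
        # Accept with probability min(1, exp(-(E_candidate - E_current))).
        if np.log(rng.uniform()) < e - e_cand:
            shape, e = candidate, e_cand
        samples.append(shape)
    return samples
```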



E. Erdil, L. Rada, A.O. Argunsah, D. Unay, T. Tasdizen, M. Cetin. “Nonparametric joint shape and feature priors for segmentation of dendritic spines,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493279

ABSTRACT

Multimodal shape density estimation is a challenging task in many biomedical image segmentation problems. Existing techniques in the literature estimate the underlying shape distribution by extending the Parzen density estimator to the space of shapes. Such density estimates are only expressed in terms of distances between shapes, which may not be sufficient for ensuring accurate segmentation when the observed intensities provide very little information about the object boundaries. In such scenarios, employing additional shape-dependent discriminative features as priors and exploiting both shape and feature priors can aid the segmentation process. In this paper, we propose a segmentation algorithm that uses nonparametric joint shape and feature priors constructed using a Parzen density estimator. The joint prior density estimate is expressed in terms of distances between shapes and distances between features. We incorporate the learned joint shape and feature prior distribution into a maximum a posteriori estimation framework for segmentation. The resulting optimization problem is solved using active contours. We present experimental results on dendritic spine segmentation in 2-photon microscopy images which involve a multimodal shape density.



M.U. Ghani, E. Erdil, S.D. Kanik, A.O. Argunsah, A. Hobbiss, I. Israely, D. Unay, T. Tasdizen, M. Cetin. “Dendritic Spine Shape Analysis: A Clustering Perspective,” In Lecture Notes in Computer Science, Springer International Publishing, pp. 256--273. 2016.
DOI: 10.1007/978-3-319-46604-0_19

ABSTRACT

Functional properties of neurons are strongly coupled with their morphology. Changes in neuronal activity alter morphological characteristics of dendritic spines. The first step towards understanding the structure-function relationship is to group spines into the main spine classes reported in the literature. Shape analysis of dendritic spines can help neuroscientists understand the underlying relationships. Due to the unavailability of reliable automated tools, this analysis is currently performed manually, which is a time-intensive and subjective task. Several studies on spine shape classification have been reported in the literature; however, there is an ongoing debate on whether distinct spine shape classes exist or whether spines should be modeled through a continuum of shape variations. Another challenge is the subjectivity and bias that is introduced due to the supervised nature of classification approaches. In this paper, we aim to address these issues by presenting a clustering perspective. In this context, clustering may serve both confirmation of known patterns and discovery of new ones. We perform cluster analysis on two-photon microscopic images of spines using morphological, shape, and appearance based features and gain insights into the spine shape analysis problem. We use histogram of oriented gradients (HOG), disjunctive normal shape models (DNSM), morphological features, and intensity profile based features for cluster analysis. We use x-means to perform cluster analysis that selects the number of clusters automatically using the Bayesian information criterion (BIC). For all features, this analysis produces 4 clusters and we observe the formation of at least one cluster consisting of spines which are difficult to assign to a known class. This observation supports the argument of intermediate shape types.
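
Since x-means is not part of scikit-learn, the sketch below approximates its BIC-driven choice of cluster count with a Gaussian mixture model; the feature matrix is a stand-in for the HOG/DNSM/morphological features described above, so this illustrates the selection principle rather than the paper's exact method.

```python
# BIC-driven selection of the number of clusters, used here as a stand-in for
# x-means: fit mixtures for a range of k and keep the one with the lowest BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_with_bic(features, k_max=10, seed=0):
    best = (None, np.inf, None)                  # (k, BIC, labels)
    for k in range(2, k_max + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(features)
        bic = gmm.bic(features)                  # lower BIC = better fit/complexity trade-off
        if bic < best[1]:
            best = (k, bic, gmm.predict(features))
    return best[0], best[2]
```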



M.U. Ghani, A.O. Argunsah, I. Israely, D. Unay, T. Tasdizen, M. Cetin. “On comparison of manifold learning techniques for dendritic spine classification,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493278

ABSTRACT

Dendritic spines are one of the key functional components of neurons. Their morphological changes are correlated with neuronal activity. Neuroscientists study spine shape variations to understand their relation with neuronal activity. Currently this analysis is performed manually; the availability of reliable automated tools would assist neuroscientists and accelerate this research. Previously, morphological feature based spine analysis has been performed and reported in the literature. In this paper, we explore the idea of using and comparing manifold learning techniques for classifying spine shapes. We start with automatically segmented data and construct our feature vector by stacking and concatenating the columns of images. Further, we apply unsupervised manifold learning algorithms and compare their performance in the context of dendritic spine classification. We achieved 85.95% accuracy on a dataset of 242 automatically segmented mushroom and stubby spines. We also observed that ISOMAP implicitly computes prominent features suitable for classification purposes.
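
A minimal sketch of this pipeline follows, with random arrays standing in for the 242 segmented spine images and their labels; the patch size, neighbor counts, and the k-NN classifier on the embedding are my assumptions, not the paper's exact setup.

```python
# Sketch: flatten segmented spine images into vectors, embed with Isomap, and
# classify in the low-dimensional space.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

patches = np.random.rand(242, 32 * 32)           # stand-in for flattened spine images
labels = np.random.randint(0, 2, 242)            # stand-in for mushroom/stubby labels

embedding = Isomap(n_neighbors=10, n_components=10).fit_transform(patches)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), embedding, labels, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```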



M.U. Ghani, F. Mesadi, S.D. Kanik, A.O. Argunsah, I. Israely, D. Unay, T. Tasdizen, M. Cetin. “Dendritic spine shape analysis using disjunctive normal shape models,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493280

ABSTRACT

Analysis of dendritic spines is an essential task to understand the functional behavior of neurons. Their shape variations are known to be closely linked with neuronal activities. Spine shape analysis, in particular, can assist neuroscientists to identify this relationship. A novel shape representation has been proposed recently, called Disjunctive Normal Shape Models (DNSM). DNSM is a parametric shape representation and has proven to be successful in several segmentation problems. In this paper, we apply this parametric shape representation as a feature extraction algorithm. Further, we propose a kernel density estimation (KDE) based classification approach for dendritic spine classification. We evaluate our proposed approach on a data set of 242 spines, and observe that it outperforms the classical morphological feature based approach for spine classification. Our probabilistic framework also provides a way to examine the separability of spine shape classes in the likelihood ratio space, which leads to further insights about the nature of the shape analysis problem in this context.



S.K. Iyer, T. Tasdizen, D. Likhite, E.V.R. DiBella. “Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI,” In Medical Physics, Vol. 43, No. 4, Wiley-Blackwell, pp. 1969--1981. March, 2016.
DOI: 10.1118/1.4943643

ABSTRACT

Purpose:
Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data.

Methods:
The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints.

Results:
Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR.

Conclusions:
The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly.