SCI Publications

2018


D. Ayyagari, N. Ramesh, D. Yatsenko, T. Tasdizen, C. Atria. “Image reconstruction using priors from deep learning,” In Medical Imaging 2018: Image Processing, SPIE, March, 2018.

ABSTRACT

Tomosynthesis, i.e., reconstruction of 3D volumes from projections acquired over a limited range of views, is a classical inverse problem that is ill-posed or under-constrained. Data insufficiency leads to reconstruction artifacts that vary in severity depending on the particular problem, the reconstruction method, and the object being imaged. Machine learning has been used successfully in tomographic problems where data are insufficient, but the challenge with machine learning is that it introduces bias from the learning dataset. A novel framework to improve the quality of the tomosynthesis reconstruction is proposed that limits the learning-dataset bias by maintaining consistency with the observed data. Convolutional neural networks (CNNs) are embedded as regularizers in the reconstruction process to introduce the expected features and characteristics of the likely imaged object. The minimization of the objective function keeps the solution consistent with the observations and limits the bias introduced by the machine learning regularizers, improving the quality of the reconstruction. The proposed method has been developed and studied in the specific problem of cone beam tomosynthesis fluoroscopy (CBT-fluoroscopy), but it is a general framework that can be applied to any image reconstruction problem that is limited by data insufficiency.
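
A minimal sketch of the general idea of embedding a learned prior in an iterative reconstruction loop (assumptions throughout: A/At stand in for forward and back projection and cnn_denoise for a pretrained CNN regularizer; this is not the authors' implementation):

```python
# Illustrative only: alternate a data-fidelity gradient step with a pull toward
# a CNN prior, so the solution stays consistent with the observed projections
# while inheriting expected image characteristics from the learned regularizer.
import numpy as np

def reconstruct(proj, A, At, cnn_denoise, n_iters=50, step=1e-3, lam=0.5):
    x = At(proj)                                   # back-projection as the initial volume
    for _ in range(n_iters):
        residual = A(x) - proj                     # mismatch with observed data
        x = x - step * At(residual)                # gradient step on data fidelity
        x = (1 - lam) * x + lam * cnn_denoise(x)   # relaxation toward the CNN prior
    return x
```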



M. Javanmardi, T. Tasdizen. “Domain adaptation for biomedical image segmentation using adversarial training,” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, pp. 554-558. April, 2018.
DOI: 10.1109/isbi.2018.8363637

ABSTRACT

Many biomedical image analysis applications require segmentation. Convolutional neural networks (CNNs) have become a promising approach to segment biomedical images; however, the accuracy of these methods is highly dependent on the training data. We focus on biomedical image segmentation in the context where there is variation between source and target datasets and ground truth for the target dataset is very limited or non-existent. We use an adversarial training approach to train CNNs to achieve good accuracy on the target domain. We use the DRIVE and STARE eye vasculature segmentation datasets and show that our approach can significantly improve results when we use only labels of one domain in training and test on the other domain. We also show improvements on membrane detection between the MICCAI 2016 CREMI challenge and ISBI 2013 EM segmentation challenge datasets.
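
A toy sketch of adversarial training for domain adaptation in segmentation (single-channel toy networks; the layer sizes, weights, and loss weighting are placeholders, not the paper's architecture):

```python
# Sketch: supervised segmentation loss on labeled source images plus an
# adversarial loss that pushes target-domain predictions to look like
# source-domain predictions to a small discriminator.
import torch
import torch.nn as nn

seg_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 1))                       # toy segmenter
disc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(16, 1))                             # domain discriminator
bce = nn.BCEWithLogitsLoss()
opt_seg = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(src_img, src_lbl, tgt_img):
    # supervised loss on the labeled source domain
    loss_seg = bce(seg_net(src_img), src_lbl)
    # adversarial loss: make target predictions look "source-like" to the discriminator
    loss_adv = bce(disc(seg_net(tgt_img)), torch.ones(tgt_img.size(0), 1))
    opt_seg.zero_grad(); (loss_seg + 0.01 * loss_adv).backward(); opt_seg.step()
    # discriminator update: source predictions -> 1, target predictions -> 0
    loss_d = bce(disc(seg_net(src_img).detach()), torch.ones(src_img.size(0), 1)) + \
             bce(disc(seg_net(tgt_img).detach()), torch.zeros(tgt_img.size(0), 1))
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()
```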



F. Mesadi, E. Erdil, M. Cetin, T. Tasdizen. “Image segmentation using disjunctive normal Bayesian shape and appearance models,” In IEEE Transactions on Medical Imaging, Vol. 37, No. 1, IEEE, pp. 293--305. Jan, 2018.
DOI: 10.1109/tmi.2017.2756929

ABSTRACT

The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. For instance, most active shape and appearance models require landmark points and assume unimodal shape and appearance distributions, and the level set representation does not support construction of local priors. In this paper, we present novel appearance and shape models for image segmentation based on a differentiable implicit parametric shape representation called a disjunctive normal shape model (DNSM). The DNSM is formed by the disjunction of polytopes, which themselves are formed by the conjunctions of half-spaces. The DNSM's parametric nature allows the use of powerful local prior statistics, and its implicit nature removes the need to use landmarks and easily handles topological changes. In a Bayesian inference framework, we model arbitrary shape and appearance distributions using nonparametric density estimations, at any local scale. The proposed local shape prior results in accurate segmentation even when very few training shapes are available, because the method generates a rich set of shape variations by locally combining training samples. We demonstrate the performance of the framework by applying it to both 2-D and 3-D data sets with emphasis on biomedical image segmentation applications.
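
The shape function itself is compact enough to sketch. The following toy implementation (placeholder weights, 2-D points) shows the disjunction-of-conjunctions form with a sigmoid relaxation that keeps the representation differentiable:

```python
# A union (disjunction) of polytopes, each an intersection (conjunction) of
# half-spaces, relaxed with sigmoids; illustrative, not the published code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnsm(x, W, b):
    """x: (d,) point; W: (polytopes, half-spaces, d); b: (polytopes, half-spaces)."""
    halfspaces = sigmoid(W @ x + b)           # soft membership in each half-space
    polytopes = np.prod(halfspaces, axis=1)   # conjunction: product over half-spaces
    return 1.0 - np.prod(1.0 - polytopes)     # disjunction via De Morgan's law

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4, 2)), rng.normal(size=(3, 4))   # 3 polytopes, 4 half-spaces, 2-D
print(dnsm(np.array([0.5, -0.2]), W, b))                     # soft "inside the shape" value in (0, 1)
```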



Q.C. Nguyen, M. Sajjadi, M. McCullough, M. Pham, T.T. Nguyen, W. Yu, H. Meng, M. Wen, F. Li, K.R. Smith, K. Brunisholz, T. Tasdizen. “Neighbourhood looking glass: 360° automated characterisation of the built environment for neighbourhood effects research,” In Journal of Epidemiology and Community Health, BMJ, Jan, 2018.
DOI: 10.1136/jech-2017-209456

ABSTRACT

Background
Neighbourhood quality has been connected with an array of health issues, but neighbourhood research has been limited by the lack of methods to characterise large geographical areas. This study uses innovative computer vision methods and a new big data source of street view images to automatically characterise neighbourhood built environments.

Methods
A total of 430 000 images were obtained using Google's Street View Image API for Salt Lake City, Chicago and Charleston. Convolutional neural networks were used to create indicators of street greenness, crosswalks and building type. We implemented log Poisson regression models to estimate associations between built environment features and individual prevalence of obesity and diabetes in Salt Lake City, controlling for individual-level and zip code-level predisposing characteristics.

Results
Computer vision models had an accuracy of 86%–93% compared with manual annotations. Charleston had the highest percentage of green streets (79%), while Chicago had the highest percentage of crosswalks (23%) and commercial buildings/apartments (59%). Built environment characteristics were categorised into tertiles, with the highest tertile serving as the referent group. Individuals living in zip codes with the most green streets, crosswalks and commercial buildings/apartments had relative obesity prevalences that were 25%–28% lower and relative diabetes prevalences that were 12%–18% lower than individuals living in zip codes with the least abundance of these neighbourhood features.

Conclusion
Neighbourhood conditions may influence chronic disease outcomes. Google Street View images represent an underused data resource for the construction of built environment features.
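
For readers unfamiliar with the modelling choice, a log-link Poisson model is a standard way to estimate prevalence ratios for a binary outcome. A minimal sketch with synthetic placeholder data and column names (not the study's variables):

```python
# Synthetic example of a log Poisson regression for prevalence ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "obese": rng.integers(0, 2, 500),              # binary outcome
    "green_tertile": rng.integers(1, 4, 500),      # built-environment tertile (placeholder)
    "crosswalk_tertile": rng.integers(1, 4, 500),
    "age": rng.normal(45, 12, 500),
})

model = smf.glm("obese ~ C(green_tertile) + C(crosswalk_tertile) + age",
                data=df, family=sm.families.Poisson()).fit()
print(np.exp(model.params))                        # exponentiated coefficients ~ prevalence ratios
```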



N. Ramesh, T. Tasdizen. “Semi-supervised learning for cell tracking in microscopy images,” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, April, 2018.

ABSTRACT

This paper presents an algorithm for semi-supervised learning to predict cell division and motion in microscopy images. Cells are detected using extremal region selection and are represented as nodes in a graph. The supervised loss minimizes the error in the predictions of the division and move classifiers. The unsupervised loss constrains the incoming links for every detection such that only one of the links is active; similarly, for the outgoing links, we enforce that at most two links are active. The supervised and unsupervised losses are embedded in a Bayesian framework for probabilistic learning. The classifier predictions are used to model flow variables for every edge in the graph. Cell lineages are obtained by formulating tracking as an energy minimization problem with constraints and solving it using integer linear programming. The unsupervised loss adds a significant improvement in the prediction of the division classifier.
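
The linking step can be illustrated with a small integer linear program. The detections, candidate links, and scores below are hypothetical; only the at-most-one-incoming / at-most-two-outgoing constraints mirror the abstract:

```python
# Toy ILP for cell linking with PuLP: binary flow variables on candidate edges,
# objective from classifier scores, degree constraints per detection.
import pulp

detections = ["a", "b", "c", "d"]
links = [("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")]            # hypothetical edges
score = {("a", "c"): 0.9, ("a", "d"): 0.6, ("b", "c"): 0.2, ("b", "d"): 0.8}

prob = pulp.LpProblem("cell_linking", pulp.LpMaximize)
x = {e: pulp.LpVariable(f"x_{e[0]}_{e[1]}", cat="Binary") for e in links}
prob += pulp.lpSum(score[e] * x[e] for e in links)                  # favor high-scoring links

for d in detections:
    prob += pulp.lpSum(x[e] for e in links if e[1] == d) <= 1       # at most one incoming link
    prob += pulp.lpSum(x[e] for e in links if e[0] == d) <= 2       # at most two outgoing (division)

prob.solve()
active = [e for e in links if x[e].value() == 1]
print(active)
```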



I. J. Schwerdt, A. Brenkmann, S. Martinson, B. D. Albrecht, S. Heffernan, M. R. Klosterman, T. Kirkham, T. Tasdizen, L. W. McDonald IV. “Nuclear proliferomics: A new field of study to identify signatures of nuclear materials as demonstrated on alpha-UO3,” In Talanta, Vol. 186, Elsevier BV, pp. 433--444. Aug, 2018.
DOI: 10.1016/j.talanta.2018.04.092

ABSTRACT

The use of a limited set of signatures in nuclear forensics and nuclear safeguards may reduce the discriminating power for identifying unknown nuclear materials, or for verifying processing at existing facilities. Nuclear proliferomics is a proposed new field of study that advocates for the acquisition of large databases of nuclear material properties from a variety of analytical techniques. As demonstrated on a common uranium trioxide polymorph, α-UO3, in this paper, nuclear proliferomics increases the ability to improve confidence in identifying the processing history of nuclear materials. Specifically, α-UO3 was investigated from the calcination of unwashed uranyl peroxide at 350, 400, 450, 500, and 550 °C in air. Scanning electron microscopy (SEM) images of the surface morphology were acquired, and distinct qualitative differences are presented between unwashed and washed uranyl peroxide, as well as the calcination products from the unwashed uranyl peroxide at the investigated temperatures. Differential scanning calorimetry (DSC), UV–Vis spectrophotometry, powder X-ray diffraction (p-XRD), and thermogravimetric analysis-mass spectrometry (TGA-MS) were used to understand the source of these morphological differences as a function of calcination temperature. Additionally, the SEM images were manually segmented using Morphological Analysis for MAterials (MAMA) software to identify quantifiable differences in morphology for three different surface features present on the unwashed uranyl peroxide calcination products. No single quantifiable signature was sufficient to discern all calcination temperatures with a high degree of confidence; therefore, advanced statistical analysis was performed to combine a number of quantitative signatures, with their associated uncertainties, and achieve complete discernment by calcination history. Furthermore, machine learning was applied to the acquired SEM images to demonstrate automated discernment with at least 89% accuracy.



T. Tasdizen, M. Sajjadi, M. Javanmardi, N. Ramesh. “Improving the robustness of convolutional networks to appearance variability in biomedical images,” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, April, 2018.
DOI: 10.1109/isbi.2018.8363636

ABSTRACT

While convolutional neural networks (CNN) produce state-of-the-art results in many applications including biomedical image analysis, they are not robust to variability in the data that is not well represented by the training set. An important source of variability in biomedical images is the appearance of objects such as contrast and texture due to different imaging settings. We introduce the neighborhood similarity layer (NSL) which can be used in a CNN to improve robustness to changes in the appearance of objects that are not well represented by the training data. The proposed NSL transforms its input feature map at a given pixel by computing its similarity to the surrounding neighborhood. This transformation is spatially varying, hence not a convolution. It is differentiable; therefore, networks including the proposed layer can be trained in an end-to-end manner. We demonstrate the advantages of the NSL for the vasculature segmentation and cell detection problems.
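
One way to read the NSL is as a spatially varying similarity transform. The sketch below (my reading of the abstract, not the released layer) replaces each feature vector with its cosine similarities to its k x k neighborhood:

```python
# Differentiable neighborhood-similarity style transform on a feature map.
import torch
import torch.nn.functional as F

def neighborhood_similarity(feat, k=3):
    # feat: (N, C, H, W) feature map; returns (N, k*k, H, W) similarity map
    n, c, h, w = feat.shape
    feat = F.normalize(feat, dim=1)                               # unit-length features
    patches = F.unfold(feat, kernel_size=k, padding=k // 2)       # (N, C*k*k, H*W)
    patches = patches.view(n, c, k * k, h, w)
    center = feat.unsqueeze(2)                                    # (N, C, 1, H, W)
    return (patches * center).sum(dim=1)                          # cosine similarity to neighbors

out = neighborhood_similarity(torch.randn(2, 8, 32, 32))          # -> (2, 9, 32, 32)
```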


2017


E. Erdil, M.U. Ghani, L. Rada, A.O. Argunsah, D. Unay, T. Tasdizen, M. Cetin. “Nonparametric joint shape and feature priors for image segmentation,” In IEEE Transactions on Image Processing, Vol. 26, No. 11, IEEE, pp. 5312--5323. Nov, 2017.
DOI: 10.1109/tip.2017.2728185

ABSTRACT

In many image segmentation problems involving limited and low-quality data, employing statistical prior information about the shapes of the objects to be segmented can significantly improve the segmentation result. However, defining probability densities in the space of shapes is an open and challenging problem, especially if the object to be segmented comes from a shape density involving multiple modes (classes). Existing techniques in the literature estimate the underlying shape distribution by extending Parzen density estimator to the space of shapes. In these methods, the evolving curve may converge to a shape from a wrong mode of the posterior density when the observed intensities provide very little information about the object boundaries. In such scenarios, employing both shape- and class-dependent discriminative feature priors can aid the segmentation process. Such features may involve, e.g., intensity-based, textural, or geometric information about the objects to be segmented. In this paper, we propose a segmentation algorithm that uses nonparametric joint shape and feature priors constructed by Parzen density estimation. We incorporate the learned joint shape and feature prior distribution into a maximum a posteriori estimation framework for segmentation. The resulting optimization problem is solved using active contours. We present experimental results on a variety of synthetic and real data sets from several fields involving multimodal shape densities. Experimental results demonstrate the potential of the proposed method.
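
The core Parzen (kernel density) prior can be sketched in a few lines; the shape embedding dimension, bandwidth, and training data below are placeholders:

```python
# Score a candidate shape under a nonparametric prior learned from training shapes;
# in a MAP segmentation energy this log-prior would be combined with a data term.
import numpy as np
from sklearn.neighbors import KernelDensity

train_shapes = np.random.rand(50, 128)            # e.g. 50 training shapes, 128-dim embedding
kde = KernelDensity(kernel="gaussian", bandwidth=0.2).fit(train_shapes)

candidate = np.random.rand(1, 128)
log_prior = kde.score_samples(candidate)[0]       # log p(shape) under the Parzen estimate
print(log_prior)
```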


2016


M. Barjatia, T. Tasdizen, B. Song, K.M. Golden. “Network modeling of Arctic melt ponds,” In Cold Regions Science and Technology, Vol. 124, Elsevier BV, pp. 40--53. April, 2016.
DOI: 10.1016/j.coldregions.2015.11.019

ABSTRACT

The recent precipitous losses of summer Arctic sea ice have outpaced the projections of most climate models. A number of efforts to improve these models have focused in part on a more accurate accounting of sea ice albedo or reflectance. In late spring and summer, the albedo of the ice pack is determined primarily by melt ponds that form on the sea ice surface. The transition of pond configurations from isolated structures to interconnected networks is critical in allowing the lateral flow of melt water toward drainage features such as large brine channels, fractures, and seal holes, which can alter the albedo by removing the melt water. Moreover, highly connected ponds can influence the formation of fractures and leads during ice break-up. Here we develop algorithmic techniques for mapping photographic images of melt ponds onto discrete conductance networks which represent the geometry and connectedness of pond configurations. The effective conductivity of the networks is computed to approximate the ease of lateral flow. We implement an image processing algorithm with mathematical morphology operations to produce a conductance matrix representation of the melt ponds. Basic clustering and edge elimination, using undirected graphs, are then used to map the melt pond connections and reduce the conductance matrix to include only direct connections. The results for images taken during different times of the year are visually inspected and the number of mislabels is used to evaluate performance.
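
The effective-conductivity computation reduces to solving a graph Laplacian system. A toy example with a hand-written conductance matrix (not one extracted from imagery):

```python
# Effective conductance between two terminals of a weighted undirected network,
# as a proxy for the ease of lateral melt-water flow.
import numpy as np

C = np.array([[0.0, 1.0, 0.5, 0.0],               # symmetric conductance matrix, zero diagonal
              [1.0, 0.0, 0.0, 2.0],
              [0.5, 0.0, 0.0, 1.0],
              [0.0, 2.0, 1.0, 0.0]])
L = np.diag(C.sum(axis=1)) - C                     # weighted graph Laplacian

def effective_conductance(L, source, sink):
    n = L.shape[0]
    keep = [i for i in range(n) if i != sink]      # ground the sink node
    b = np.zeros(n); b[source] = 1.0               # inject unit current at the source
    v = np.zeros(n)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    return 1.0 / (v[source] - v[sink])             # conductance = 1 / effective resistance

print(effective_conductance(L, source=0, sink=3))
```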



M. Elwardy, T. Tasdizen, M. Cetin. “Disjunctive Normal Unsupervised LDA for P300-based Brain-Computer Interfaces,” In 2016 24th Signal Processing and Communication Application Conference (SIU), IEEE, May, 2016.
DOI: 10.1109/siu.2016.7496226

ABSTRACT

Can people use text-entry based brain-computer interface (BCI) systems and start a free spelling mode without any calibration session? Brain activities differ largely across people and across sessions for the same user. How, then, can the text-entry system classify the desired character among the other characters in the P300-based BCI speller matrix? In this paper, we introduce a new unsupervised classifier for a P300-based BCI speller, which uses a disjunctive normal form representation to define an energy function involving a logistic sigmoid function for classification. Our proposed classifier starts from randomly initialized weights and updates them while classifying the P300 signals in the recorded data, exploiting knowledge of the sequence of row/column highlights. To verify the effectiveness of the proposed method, we performed an experimental analysis on data from 7 healthy subjects, collected in our laboratory. We compare the proposed unsupervised method to a baseline supervised linear discriminant analysis (LDA) classifier and demonstrate its effectiveness.



E. Erdil, M. Cetin, T. Tasdizen. “MCMC Shape Sampling for Image Segmentation with Nonparametric Shape Priors,” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, June, 2016.
DOI: 10.1109/cvpr.2016.51

ABSTRACT

Segmenting images of low quality or with missing data is a challenging problem. Integrating statistical prior information about the shapes to be segmented can improve the segmentation results significantly. Most shape-based segmentation algorithms optimize an energy functional and find a point estimate for the object to be segmented. This does not provide a measure of the degree of confidence in that result, nor does it provide a picture of other probable solutions based on the data and the priors. With a statistical view, addressing these issues would involve the problem of characterizing the posterior densities of the shapes of the objects to be segmented. For such characterization, we propose a Markov chain Monte Carlo (MCMC) sampling-based image segmentation algorithm that uses statistical shape priors. In addition to better characterization of the statistical structure of the problem, such an approach would also have the potential to address issues with getting stuck at local optima, suffered by existing shape-based segmentation methods. Our approach is able to characterize the posterior probability density in the space of shapes through its samples, and to return multiple solutions, potentially from different modes of a multimodal probability density, which would be encountered, e.g., in segmenting objects from multiple shape classes. We present promising results on a variety of data sets. We also provide an extension for segmenting shapes of objects with parts that can go through independent shape variations. This extension involves the use of local shape priors on object parts and provides robustness to limitations in shape training data size.
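
The sampling machinery is standard Metropolis-Hastings. A generic random-walk sketch over a shape parameter vector (toy Gaussian target, not the paper's curve representation or shape prior):

```python
# Random-walk Metropolis-Hastings: returns samples rather than a single point
# estimate, so multiple plausible solutions can be characterized.
import numpy as np

def log_posterior(theta):
    return -0.5 * np.sum(theta ** 2)               # placeholder target density

def mh_sample(n_samples=5000, dim=10, step=0.3, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(dim)       # random-walk proposal
        log_alpha = log_posterior(proposal) - log_posterior(theta)
        if np.log(rng.random()) < log_alpha:                     # accept/reject
            theta = proposal
        samples.append(theta.copy())
    return np.array(samples)

samples = mh_sample()
```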



E. Erdil, L. Rada, A.O. Argunsah, D. Unay, T. Tasdizen, M. Cetin. “Nonparametric joint shape and feature priors for segmentation of dendritic spines,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493279

ABSTRACT

Multimodal shape density estimation is a challenging task in many biomedical image segmentation problems. Existing techniques in the literature estimate the underlying shape distribution by extending the Parzen density estimator to the space of shapes. Such density estimates are expressed only in terms of distances between shapes, which may not be sufficient for ensuring accurate segmentation when the observed intensities provide very little information about the object boundaries. In such scenarios, employing additional shape-dependent discriminative features as priors and exploiting both shape and feature priors can aid the segmentation process. In this paper, we propose a segmentation algorithm that uses nonparametric joint shape and feature priors constructed with a Parzen density estimator. The joint prior density estimate is expressed in terms of distances between shapes and distances between features. We incorporate the learned joint shape and feature prior distribution into a maximum a posteriori estimation framework for segmentation. The resulting optimization problem is solved using active contours. We present experimental results on dendritic spine segmentation in 2-photon microscopy images which involve a multimodal shape density.



M.U. Ghani, E. Erdil, S.D. Kanik, A.O. Argunsah, A. Hobbiss, I. Israely, D. Unay, T. Tasdizen, M. Cetin. “Dendritic Spine Shape Analysis: A Clustering Perspective,” In Lecture Notes in Computer Science, Springer International Publishing, pp. 256--273. 2016.
DOI: 10.1007/978-3-319-46604-0_19

ABSTRACT

Functional properties of neurons are strongly coupled with their morphology. Changes in neuronal activity alter the morphological characteristics of dendritic spines. The first step towards understanding the structure-function relationship is to group spines into the main spine classes reported in the literature. Shape analysis of dendritic spines can help neuroscientists understand the underlying relationships. Due to the unavailability of reliable automated tools, this analysis is currently performed manually, which is a time-intensive and subjective task. Several studies on spine shape classification have been reported in the literature; however, there is an ongoing debate on whether distinct spine shape classes exist or whether spines should be modeled through a continuum of shape variations. Another challenge is the subjectivity and bias introduced by the supervised nature of classification approaches. In this paper, we aim to address these issues by presenting a clustering perspective. In this context, clustering may serve both confirmation of known patterns and discovery of new ones. We perform cluster analysis on two-photon microscopic images of spines using morphological, shape, and appearance based features and gain insights into the spine shape analysis problem. We use histogram of oriented gradients (HOG), disjunctive normal shape models (DNSM), morphological features, and intensity profile based features for cluster analysis. We use x-means, which selects the number of clusters automatically using the Bayesian information criterion (BIC), to perform the cluster analysis. For all features, this analysis produces 4 clusters, and we observe the formation of at least one cluster consisting of spines that are difficult to assign to a known class. This observation supports the argument of intermediate shape types.
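
x-means itself is not part of scikit-learn, so the sketch below approximates the "pick the number of clusters by BIC" step with Gaussian mixtures scored by BIC; the feature matrix is a random placeholder for the HOG/DNSM/morphological features:

```python
# Model selection by BIC over a range of cluster counts (stand-in for x-means).
import numpy as np
from sklearn.mixture import GaussianMixture

features = np.random.rand(242, 64)                        # e.g. 242 spines x 64-dim features
models = [GaussianMixture(n_components=k, random_state=0).fit(features)
          for k in range(2, 9)]
best = min(models, key=lambda m: m.bic(features))         # lower BIC is better
labels = best.predict(features)
print("selected number of clusters:", best.n_components)
```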



M.U. Ghani, A.O. Argunsah, I. Israely, D. Unay, T. Tasdizen, M. Cetin. “On comparison of manifold learning techniques for dendritic spine classification,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493278

ABSTRACT

Dendritic spines are one of the key functional components of neurons. Their morphological changes are correlated with neuronal activity. Neuroscientists study spine shape variations to understand their relation with neuronal activity. Currently, this analysis is performed manually; the availability of reliable automated tools would assist neuroscientists and accelerate this research. Previously, morphological feature based spine analysis has been performed and reported in the literature. In this paper, we explore the idea of using and comparing manifold learning techniques for classifying spine shapes. We start with automatically segmented data and construct our feature vector by stacking and concatenating the columns of images. Further, we apply unsupervised manifold learning algorithms and compare their performance in the context of dendritic spine classification. We achieved 85.95% accuracy on a dataset of 242 automatically segmented mushroom and stubby spines. We also observed that ISOMAP implicitly computes prominent features suitable for classification purposes.
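
A compact illustration of the pipeline with placeholder data: vectorize the spine images, embed them with Isomap, and classify in the low-dimensional space:

```python
# Manifold embedding followed by a simple classifier (synthetic stand-in data).
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(242, 32 * 32)          # 242 spine images flattened to vectors (assumed size)
y = np.random.randint(0, 2, 242)          # mushroom vs. stubby labels (placeholder)

embedding = Isomap(n_neighbors=10, n_components=5).fit_transform(X)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), embedding, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```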



M.U. Ghani, F. Mesadi, S.D. Kanik, A.O. Argunsah, I. Israely, D. Unay, T. Tasdizen, M. Cetin. “Dendritic spine shape analysis using disjunctive normal shape models,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493280

ABSTRACT

Analysis of dendritic spines is an essential task to understand the functional behavior of neurons. Their shape variations are known to be closely linked with neuronal activities. Spine shape analysis, in particular, can assist neuroscientists in identifying this relationship. A novel shape representation, called Disjunctive Normal Shape Models (DNSM), has been proposed recently. DNSM is a parametric shape representation and has proven successful in several segmentation problems. In this paper, we apply this parametric shape representation as a feature extraction algorithm. Further, we propose a kernel density estimation (KDE) based classification approach for dendritic spine classification. We evaluate our proposed approach on a data set of 242 spines and observe that it outperforms the classical morphological feature based approach for spine classification. Our probabilistic framework also provides a way to examine the separability of spine shape classes in the likelihood ratio space, which leads to further insights about the nature of the shape analysis problem in this context.



S.K. Iyer, T. Tasdizen, D. Likhite, E.V.R. DiBella. “Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI,” In Medical Physics, Vol. 43, No. 4, Wiley-Blackwell, pp. 1969--1981. March, 2016.
DOI: 10.1118/1.4943643

ABSTRACT

Purpose:
Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data.

Methods:
The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints.

Results:
Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR.

Conclusions:
The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly.
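
For orientation, a much simpler TV-constrained reconstruction than SMART can be written as a proximal-gradient loop; A/At below are assumed acquisition and adjoint operators, and the TV proximal step is approximated with an off-the-shelf denoiser:

```python
# Illustrative TV-regularized reconstruction (not the split Bregman formulation).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_recon(kdata, A, At, n_iters=30, step=0.5, tv_weight=0.02):
    x = At(kdata)                                        # zero-filled initial estimate
    for _ in range(n_iters):
        x = x - step * At(A(x) - kdata)                  # data-consistency gradient step
        x = denoise_tv_chambolle(x, weight=tv_weight)    # TV proximal (denoising) step
    return x
```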



S.K. Iyer, T. Tasdizen, N. Burgon, E. Kholmovski, N. Marrouche, G. Adluru, E.V.R. DiBella. “Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study,” In Magnetic Resonance Imaging, Vol. 34, No. 7, Elsevier BV, pp. 846--854. September, 2016.
DOI: 10.1016/j.mri.2016.03.002

ABSTRACT

Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5–15 min to acquire an undersampled (R = 1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor ~ R = 1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch based collaborative filtering technique tested with acceleration factor R ~ 3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R ~ 1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of percentage of scar/fibrosis is not known.

We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error, and peak signal to noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R ~ 3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to reducing the scan time by half, compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods.



T. Liu, S.M. Seyedhosseini, T. Tasdizen. “Image Segmentation Using Hierarchical Merge Tree,” In IEEE Transactions on Image Processing, Vol. 25, No. 10, IEEE, pp. 4596--4607. Oct, 2016.
DOI: 10.1109/tip.2016.2592704

ABSTRACT

This paper investigates one of the most fundamental computer vision problems: image segmentation. We propose a supervised hierarchical approach to object-independent image segmentation. Starting from oversegmented superpixels, we use a tree structure to represent the hierarchy of region merging, by which we reduce the problem of segmenting image regions to finding a set of label assignments to tree nodes. We formulate the tree structure as a constrained conditional model to associate region merging with likelihoods predicted using an ensemble boundary classifier. Final segmentations can then be inferred by finding globally optimal solutions to the model efficiently. We also present an iterative training and testing algorithm that generates various tree structures and combines them to emphasize accurate boundaries by segmentation accumulation. Experimental results and comparisons with other recent methods on six public datasets demonstrate that our approach achieves state-of-the-art region accuracy and is competitive in image segmentation without semantic priors.



T. Liu, M. Zhang, M. Javanmardi, N. Ramesh, T. Tasdizen. “SSHMT: Semi-supervised Hierarchical Merge Tree for Electron Microscopy Image Segmentation,” In Lecture Notes in Computer Science, Vol. 9905, Springer International Publishing, pp. 144--159. 2016.
DOI: 10.1007/978-3-319-46448-0_9

ABSTRACT

Region-based methods have proven necessary for improving segmentation accuracy of neuronal structures in electron microscopy (EM) images. Most region-based segmentation methods use a scoring function to determine region merging. Such functions are usually learned with supervised algorithms that demand considerable ground truth data, which are costly to collect. We propose a semi-supervised approach that reduces this demand. Based on a merge tree structure, we develop a differentiable unsupervised loss term that enforces consistent predictions from the learned function. We then propose a Bayesian model that combines the supervised and the unsupervised information for probabilistic learning. The experimental results on three EM data sets demonstrate that by using a subset of only 3% to 7% of the entire ground truth data, our approach consistently performs close to the state-of-the-art supervised method with the full labeled data set, and significantly outperforms the supervised method with the same labeled subset.



F. Mesadi, M. Cetin, T. Tasdizen. “Disjunctive normal level set: An efficient parametric implicit method,” In 2016 IEEE International Conference on Image Processing (ICIP), IEEE, September, 2016.
DOI: 10.1109/icip.2016.7533171

ABSTRACT

Level set methods are widely used for image segmentation because of their capability to handle topological changes. In this paper, we propose a novel parametric level set method called the Disjunctive Normal Level Set (DNLS) and apply it to both two-phase (single object) and multiphase (multi-object) image segmentation. The DNLS is formed by a union of polytopes, which themselves are formed by intersections of half-spaces. The proposed level set framework has the following major advantages compared to other level set methods available in the literature. First, segmentation using DNLS converges much faster. Second, the DNLS level set function remains regular throughout its evolution. Third, the proposed multiphase version of the DNLS is less sensitive to initialization, and its computational cost and memory requirements remain almost constant as the number of objects to be simultaneously segmented grows. The experimental results show the potential of the proposed method.