Motivation: Humans possess a variety of sensory systems with which we interact with the outside world. When a human observer views a scene, his or her visual system essentially segments it: the observer does not perceive a complex array of raw stimuli, but rather a collection of semantically defined objects, and does so with remarkable efficiency. This was the first insight that made me aware of the possibility of endowing computational methods with the capability to achieve such segmentation automatically. Although this research question is well established, a robust, accurate, and high-performance solution remains a great challenge today. My interest in this problem space has grown to encompass more than segmentation alone; I am keenly interested in how semantic information can be inferred from acquired signals. This was the main motivation behind my master’s research: automated visual analysis of sperm motion to perform numerous tedious computations that would be impractical for a human to carry out efficiently.


Pulmonary nodule detection and segmentation for early diagnosis of lung cancer


Joint work with: Aly Farag


The segmentation of biomedical images typically deals with partitioning an image into multiple regions representing anatomical objects of interest. A variety of medical image segmentation problems present significant technical challenges, including heterogeneous pixel intensities, noisy/ill-defined boundaries, and irregular shapes with high variability. When I joined the Computer Vision and Image Processing (CVIP) Lab at the University of Louisville (UofL) to pursue my Ph.D., I applied my background in image segmentation to the problem of pulmonary nodule detection and classification for the early diagnosis of lung cancer.

Lung cancer in the United States accounts for 30% of all cancer-related deaths, resulting in over 160,000 deaths per year, more than the annual deaths from colon, breast, prostate, ovarian, and pancreatic cancers combined. Survival from lung cancer is strongly dependent on early diagnosis.

At the CVIP Lab, we developed a Computer-Assisted Diagnosis (CAD) system that consisted of four main stages. The preprocessing stage removed any noise artifacts that may have been introduced into the CT images during scanning, with minimal information loss. The goal of the image segmentation step was to achieve accurate segmentation while maintaining the details of the lung anatomy from low-dose chest CT scans. In particular, it was essential that the pulmonary nodules inside the lungs, as well as those on the boundary regions, be maintained for subsequent analyses.



A block diagram of the major steps involved in computer-based analysis of low-dose CT (LDCT) of the chest to detect and classify suspicious lung nodules.




Average histogram of CT slices from the ELCAP database. The lung region (parenchyma) and the fat/muscle region constitute the two dominant peaks in the histogram; to separate the lung region, a threshold is chosen to maximize the separation between these two peaks.
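The peak-separation thresholding described above can be sketched with Otsu's criterion, which picks the threshold maximizing the between-class variance of the two histogram modes. This is a minimal illustrative stand-in, not the exact ELCAP pipeline; the synthetic bimodal data below is hypothetical.

```python
import numpy as np

def bimodal_threshold(image, bins=256):
    """Choose the threshold that best separates the two dominant
    histogram peaks (e.g., lung parenchyma vs. fat/muscle) by
    maximizing the between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()                      # normalized histogram
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                          # class-0 probability mass
    w1 = 1.0 - w0                              # class-1 probability mass
    cum_mean = np.cumsum(p * centers)
    m_total = cum_mean[-1]
    m0 = cum_mean / np.clip(w0, 1e-12, None)   # class-0 mean intensity
    m1 = (m_total - cum_mean) / np.clip(w1, 1e-12, None)
    between = w0 * w1 * (m0 - m1) ** 2         # between-class variance
    return centers[int(np.argmax(between[:-1]))]

# Synthetic bimodal "slice": dark lung region vs. brighter tissue
rng = np.random.default_rng(0)
slice_ = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
t = bimodal_threshold(slice_)
# t falls between the two peaks, separating the two regions
```

The returned threshold lands in the valley between the two modes; pixels below it would be labeled as lung parenchyma.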




Schematic diagram of the lung segmentation algorithm.

In the nodule detection step, candidate nodules were identified using object modeling and recognition techniques. The detection approach hinged mainly on proper modeling of the nodule templates, and much less on the computational machinery used to carry out the detection.



The distribution of the radial distance from the centroid of the nodules. The bars indicate one standard deviation from the mean values of each nodule type. Note the exponential decay of the radial distance distribution, which diminishes after a distance of 10.




The most common parametric nodule models in 2D are circular and semi-circular; in 3D, the corresponding models are spheres and hemispheres (caps). A few examples of such templates are shown here. For the 2D case, circular and semi-circular parametric templates (isotropic and non-isotropic) are used. The isotropic templates are defined by a radius (size) and a gray-level distribution modeled as a circularly symmetric Gaussian function, while the non-isotropic templates are additionally defined by an orientation.
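An isotropic 2D template of the kind described above can be sketched as a circularly symmetric Gaussian restricted to a disc, scored against an image patch by normalized cross-correlation. The sigma-to-radius ratio and the scoring function here are illustrative assumptions, not the parameters used in our system.

```python
import numpy as np

def circular_gaussian_template(radius, sigma=None):
    """Isotropic 2D nodule template: a circularly symmetric Gaussian
    gray-level profile restricted to a disc of the given radius."""
    if sigma is None:
        sigma = radius / 2.0            # assumed spread, for illustration
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x ** 2 + y ** 2
    template = np.exp(-r2 / (2.0 * sigma ** 2))
    template[r2 > radius ** 2] = 0.0    # zero outside the circular support
    return template

def ncc(patch, template):
    """Normalized cross-correlation score between a patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    return float((p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t)))

tmpl = circular_gaussian_template(radius=10)
noisy = tmpl + 0.05 * np.random.default_rng(1).normal(size=tmpl.shape)
score = ncc(noisy, tmpl)   # close to 1 for a matching nodule-like patch
```

A non-isotropic template would replace the radially symmetric exponent with an oriented quadratic form in (x, y).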


We have also developed a data-driven approach to obtain representations that depict the realistic shape and texture properties of the various nodule types found in LDCT scans. The four main types of nodules we were concerned with were: (1) Well-circumscribed, where the nodule is located centrally in the lung without being connected to vasculature; (2) Vascularized, where the nodule has significant connection(s) to the neighboring vessels while located centrally in the lung; (3) Juxta-pleural, where a significant portion of the nodule is connected to the pleural surface; and (4) Pleural-tail, where the nodule is near the pleural surface, connected by a thin structure.



Three cropped nodules from each of the four nodule types: well-circumscribed (1st row), vascularized (2nd row), juxta-pleural (3rd row), and pleural-tail (4th row).


The mean nodule templates for these four nodule types, generated using an Active Appearance Modeling (AAM) approach, are depicted below. These models were used in the detection process, which showed marked improvement over other methods found in the literature.



Generation of data-driven nodule models.

The final step, nodule classification, categorized the candidate nodules by type, location, morphology, and other attributes; ultimately, the most important distinction is between benign and malignant nodules.


Related publications:

Amal Farag, Shireen Y. Elhabian, James Graham, Aly Farag, Salwa Elshazly, Robert Falk, Hani Mahdi, Hossam Abdelmunim, Sahar Al-Ghaafary. Modeling of the Lung Nodules for Detection in LDCT Scans. Proc. of the 32nd IEEE Engineering in Medicine and Biology Society (EMBC), 2010, pp. 3618-3621.

Amal Farag, Shireen Y. Elhabian, Salwa Elshazly, Aly Farag. Quantification of nodule detection in chest CT: A clinical investigation based on the ELCAP study. In Proceeding of Second International Workshop on Pulmonary Image Processing in conjunction with MICCAI, pp. 149-160, 2009.

Shireen Y. Elhabian, Amal Farag, Salwa Elshazly, Aly Farag. Sensitivity of Template Matching for Pulmonary Nodule Detection: A Case Study. In IEEE Biomedical Engineering Conference (CIBEC). Cairo International, pp. 1-4, 2008.

Shireen Y. Elhabian, Hossam Abd EL Munim, Salwa Elshazly, Aly A Farag, Mohamed Aboelghar. Experiments on Sensitivity of Template Matching for Lung Nodule Detection in Low Dose CT Scans. IEEE International Symposium on Signal Processing and Information Technology (ISSPIT) 2007, 1040-1046.



Colon segmentation for virtual colonoscopy and early screening of colorectal cancer


Joint work with: Aly Farag


Colorectal cancer, which includes cancer of the colon, rectum, anus, and appendix, is the third most common form of cancer and the second leading cause of death among cancers in the western world. Most colorectal cancers begin as a polyp: a small, superficial growth arising from the wall of the colon. As a polyp grows, it can develop into a cancer that invades and spreads. At the Computer Vision and Image Processing (CVIP) Lab at the University of Louisville (UofL), we developed a fully automated framework for accurate 3D colon tissue segmentation in CT colonography based on a convex formulation of the active contour model, with a focus on early screening of colorectal cancer. Subsequent steps involved building the 3D colon model by searching for the iso-surface, and finally extracting the 3D centerline for 3D navigation of the colon.



A sample CT colon slice showing typical challenges confronting colon segmentation.

For the segmentation step, we generalized the global/convex continuous minimization problem of the active contour model to the 3D case, building on the Mumford-Shah functional and Chan and Vese's model of active contours without edges. Furthermore, we incorporated anatomical features with 3D region growing in a postprocessing stage to discard non-colon parts such as the spine and bowels while maintaining all colon segments. The proposed framework achieved an average accuracy of approximately 99%. We demonstrated that this approach outperformed conventional approaches, including graph cuts (discrete optimization) and adaptive level sets (non-convex continuous optimization), with regard to accuracy, sensitivity, specificity, and speed of convergence.
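The data-fidelity part of the Chan-Vese "active contours without edges" model can be sketched in a few lines: alternate between computing the mean intensity of each region and reassigning voxels to the closer mean. This toy version deliberately omits the length/curvature regularizer that the full convex formulation minimizes, so it reduces to a two-means reassignment; the synthetic image is hypothetical.

```python
import numpy as np

def chan_vese_data_term(image, n_iter=20):
    """Two-phase piecewise-constant segmentation: the data-fidelity part
    of the Chan-Vese model. Each iteration updates the inside/outside
    region means c1, c2 and reassigns pixels to the closer mean. The
    smoothness (contour-length) term is omitted for brevity."""
    inside = image > image.mean()          # initial partition
    for _ in range(n_iter):
        c1 = image[inside].mean()          # mean intensity inside
        c2 = image[~inside].mean()         # mean intensity outside
        new = (image - c1) ** 2 < (image - c2) ** 2
        if np.array_equal(new, inside):    # converged
            break
        inside = new
    return inside

rng = np.random.default_rng(2)
img = rng.normal(0.2, 0.05, (64, 64))
img[16:48, 16:48] = rng.normal(0.8, 0.05, (32, 32))   # bright tissue region
mask = chan_vese_data_term(img)
# mask recovers the bright square region
```

The full model adds a penalty on the segmentation boundary length, which is what makes the convex reformulation (and its speed/robustness advantages) possible.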



Schematic diagram and sample result of the proposed 3D colon segmentation framework.



Sample result of a poorly distended colon. A 2D slice is shown with close-up views of challenging areas such as haustral folds. Quantitative results are shown for adaptive level sets and graph cuts along with our results, reported as Ac (accuracy), Sn (sensitivity), Sp (specificity), t (time), and iter (number of iterations).



Related publications:

Marwa Ismail, Shireen Y. Elhabian, Aly Farag and Gerald W. Dryden. Fully Automated 3D Colon Segmentation for Early Detection of Colorectal Cancer Based on Convex Formulation of the Active Contour Model. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 58-63. IEEE, 2012.

Marwa Ismail, Shireen Y. Elhabian, Aly Farag, Gerald Dryden, Albert Seow. 3D Automated Colon Segmentation for Efficient Polyp Detection. In Biomedical Engineering Conference (CIBEC), 2012 Cairo International, pp. 48-51. IEEE, 2012.



Segmentation with shape priors


Joint work with: Ross Whitaker, Joshua Cates, and Nassir F. Marrouche


My current research in this area with Ross Whitaker at the University of Utah focuses on optimal segmentations within a statistical framework that combines image data with priors on anatomical structures. We began working with local shape priors, including smoothness and spatial coherency. Recently, we have been working on the inclusion of global shape priors within a Bayesian estimation framework for image segmentation, in which generative models of shape play an important role, effectively learning priors that guide the segmentation process. We demonstrated our approach on left atrial wall segmentation from late-gadolinium enhancement (LGE) MRI, which has been shown to be effective for identifying myocardial fibrosis in the diagnosis of atrial fibrillation.



Left atrium wall segmentation challenges: slices of LGE-MRI images showing the challenges in segmentation. The left two 2D slices are shown without manual delineations of the LA wall, and the right two slices show the manual delineations.


We developed a surface-based image segmentation scheme, termed ShapeCut: a shape-based generative model for extracting multiple surfaces from a given image. ShapeCut incorporates global shape information within a Bayesian framework, thus biasing the solution toward the desired shape. However, to accommodate the subclass of shapes that are not captured by the global shape priors, due to the small sample size, the method introduces local shape priors in the form of Markov random fields (MRFs). The optimization of the derived model alternates between two phases: a multi-column, graph-based multi-surface update and a closed-form global shape refinement. The results demonstrated the effectiveness of our approach in the presence of weak boundaries, contrast variation, and a low signal-to-noise ratio.



Schematic diagram of ShapeCut: the proposed graph-based object segmentation with shape priors.

ShapeCut differs from many other Bayesian methods in that the underlying search space is an approximation of a continuous parameterization of the set of possible surfaces. We discretize this continuous parameterization as a special geometric graph structure, in order to simultaneously estimate multiple, interacting surfaces in a globally optimal manner. The figure below depicts this strategy: the continuous parameterization is represented by a discrete graph whose design involves a set of nested layers and columns. Each layer maintains a topological structure similar to that of the desired surface, and each column ensures that the estimated surface passes through it. For 2D surface estimation these layers are discrete 2D contours; in 3D they become meshes.
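The layers-and-columns construction can be illustrated on a drastically simplified 2D analogue: each column must be crossed by the surface at exactly one layer, adjacent columns may differ by a bounded number of layers, and the minimum-cost surface is found exactly by dynamic programming. This toy sketch captures only the one-surface, open-contour case; the actual ShapeCut graph handles multiple coupled, closed surfaces and is solved differently.

```python
import numpy as np

def optimal_column_surface(cost, max_jump=1):
    """Find the minimum-cost surface through a (layers x columns) cost
    grid, where each column j is crossed at exactly one layer i and
    adjacent columns differ by at most `max_jump` layers. Solved exactly
    by dynamic programming with backtracking."""
    n_layers, n_cols = cost.shape
    dp = cost.copy()                              # best cost ending at (i, j)
    back = np.zeros((n_layers, n_cols), dtype=int)
    for j in range(1, n_cols):
        for i in range(n_layers):
            lo, hi = max(0, i - max_jump), min(n_layers, i + max_jump + 1)
            k = lo + int(np.argmin(dp[lo:hi, j - 1]))
            back[i, j] = k                        # best predecessor layer
            dp[i, j] = cost[i, j] + dp[k, j - 1]
    path = [int(np.argmin(dp[:, -1]))]
    for j in range(n_cols - 1, 0, -1):            # recover the surface
        path.append(int(back[path[-1], j]))
    return path[::-1]                             # layer chosen per column

# Low cost along layer 2, so the recovered surface should hug it
cost = np.ones((5, 6))
cost[2, :] = 0.1
surface = optimal_column_surface(cost)            # -> [2, 2, 2, 2, 2, 2]
```

The `max_jump` bound plays the role of the hard smoothness constraint between neighboring columns in the graph construction.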



(a) Continuous parameterization of the surface estimate \(\mathcal{S}\), (b) discrete approximation of the underlying continuous parameterization within which the surface estimation takes place and (c) overlay of discrete grid on a given image and intensity profiles, \(p_{i,j}\) (yellow) at different grid points \(x_{i,j}\).


The shape prior in ShapeCut is a generative model for surfaces. For this work we use a linear model and an associated set of latent variables, \(\beta\), to represent global shape information. This linear model is learned over training shapes and captures low-dimensional properties of the shape representation. The figure below shows a simplified geometric interpretation of the shape model. The linear subspace, which is learned from training data, is relatively low-dimensional, but it allows the segmentation algorithm to operate effectively when parts of the anatomy are not well delineated in the image data, typically because of low contrast and correlated and/or uncorrelated noise. A shape is generated from a position \(\beta\) in this low-dimensional shape representation (i.e., the shape parameters). To account for the actual complexity of the left atrium, the generative shape model allows smooth deviations, or offsets, from the low-dimensional shape distribution, formulated as a Markov random field (MRF) prior on their configuration.
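The linear shape model can be sketched as a PCA-style subspace: vectorized training shapes are centered, the leading modes of variation are extracted by SVD, and a shape is generated as the mean plus a \(\beta\)-weighted combination of modes. The toy training set and function names below are hypothetical; in ShapeCut the shapes are implicit (distance-transform) representations.

```python
import numpy as np

def fit_linear_shape_model(shapes, n_modes=2):
    """Learn a linear (PCA-style) shape subspace from vectorized
    training shapes (one row per shape). Returns the mean shape and
    the leading modes of variation."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes of variation
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]                 # modes: (n_modes, dim)

def generate_shape(mean, modes, beta):
    """Generate a shape from latent parameters beta: mean + beta @ modes."""
    return mean + beta @ modes

rng = np.random.default_rng(3)
# Toy training set: shapes vary along one hidden direction plus noise
direction = rng.normal(size=40)
shapes = rng.normal(size=(20, 1)) * direction + rng.normal(0, 0.01, (20, 40))
mean, modes = fit_linear_shape_model(shapes, n_modes=1)
beta = (shapes[0] - mean) @ modes.T           # project onto the subspace
recon = generate_shape(mean, modes, beta)     # near-perfect reconstruction
```

The MRF offsets mentioned above would then be added on top of `recon` to capture shape detail the linear subspace cannot represent.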



(a) Training shapes implicitly represented by distance transforms, (b) projection of shapes onto a linear subspace and (c) estimated surface \(\mathcal{S}\) derived from the deviation, \(\mathcal{S}_o\), of the base shape \(\mathcal{S}_\beta\).


ShapeCut is an effective method for the automatic extraction of multiple coupled surfaces such as the left atrium wall. Compared to manual segmentations, which typically take more than an hour and require a human expert, ShapeCut achieves a significant speedup (an average run-time of 2.3 minutes for left atrium wall extraction) while providing comparable results, except around vein regions, qualitatively, quantitatively, and from a clinical perspective.



Anterior-posterior/posterior-anterior views of LGE-MRIs depicting fibrosis patterns in manual (top) vs the proposed method (bottom) wall regions. Fibrosis regions are displayed in green and healthy wall regions in blue.


Recently, we formulated the shape prior as a mixture of Gaussians and learned the corresponding parameters in a high-dimensional shape space rather than pre-projecting onto a low-dimensional subspace. We used deep autoencoders to capture the complex intensity distribution while avoiding the careful selection of hand-crafted features. For segmentation, we treated the identity of the mixture component as a latent variable and marginalized it within a generalized expectation-maximization framework. We presented a conditional maximization-based scheme that alternates between a closed-form solution for component-specific shape parameters, providing a global update-based optimization strategy, and an intensity-based energy minimization that translates the global notion of a nonlinear shape prior into a set of local penalties.
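The Gaussian-mixture shape prior and the marginalization of the component identity can be sketched with a minimal EM loop: the E-step computes per-component responsibilities (the posterior over the latent component), and the M-step updates mixing weights, means, and variances. This is a generic isotropic-covariance GMM on toy 2D "shape parameters", not our actual high-dimensional formulation; initialization and data are illustrative assumptions.

```python
import numpy as np

def fit_gmm_em(x, k=2, n_iter=50):
    """Fit a k-component isotropic Gaussian mixture to samples x (n, d)
    by expectation-maximization; the component identity is a latent
    variable marginalized via the responsibilities."""
    n, d = x.shape
    mu = x[np.linspace(0, n - 1, k).astype(int)].copy()  # spread-out init
    var = np.full(k, x.var())                            # per-component variance
    pi = np.full(k, 1.0 / k)                             # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = p(component j | sample i)
        d2 = ((x[:, None, :] - mu[None]) ** 2).sum(-1)
        log_r = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        log_r -= log_r.max(axis=1, keepdims=True)        # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances from responsibilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ x) / nk[:, None]
        var = np.array([(r[:, j] * ((x - mu[j]) ** 2).sum(-1)).sum()
                        / (d * nk[j]) for j in range(k)])
    return pi, mu, var

rng = np.random.default_rng(4)
# Two clusters of toy low-dimensional shape parameters
x = np.vstack([rng.normal(-3, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
pi, mu, var = fit_gmm_em(x, k=2)
# The two recovered means land near (-3, -3) and (3, 3)
```

In the segmentation setting, the responsibilities play the role of the marginalized component identity inside the generalized EM iterations.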



(a) Low-dimensional subspace representation of high-dimensional shape data. Each sample in the training set is represented by a single point, with color indicating the different components of the GMM. An example segmentation from each component is included to illustrate the differences. (b) Intensity gradients across the segmentation surface are taken at each point location.




Example results for two different samples with point-by-point error mapped onto the segmentation surface. AE = autoencoder, GMM = Gaussian mixture modeling, Original = neither (original ASM method).



Related publications:

Gopalkrishna Veni, Shireen Y. Elhabian, and Ross T. Whitaker. ShapeCut: Bayesian Surface Estimation Using Shape-Driven Graph. Medical Image Analysis (2017).

Gopalkrishna Veni, Shireen Y. Elhabian, Ross Whitaker. A Bayesian Formulation of Graph-Cut Surface Estimation With Global Shape Priors. In IEEE 12th International Symposium on Biomedical Imaging (ISBI), 2015, pp. 368-371.

Tim Sodergren, Riddhish Bhalodia, Ross Whitaker, Joshua Cates, Nassir Marrouche, and Shireen Y. Elhabian. Mixture Modeling of Global Shape Priors and Autoencoding Local Intensity Priors for Left Atrium Segmentation. STACOM-MICCAI: Statistical Atlases and Computational Modeling of the Heart workshop, 2018.







Copyright © 2022 Shireen Y. Elhabian. All rights reserved.