Generating Probability Maps using CHM

The first step in EM image segmentation is to automatically detect cell membranes using supervised machine learning, which yields probability maps of cell boundaries. We perform this step using the Cascaded Hierarchical Model (CHM) [2], a multi-resolution framework that learns contextual information hierarchically for image segmentation. CHM, combined with a logistic disjunctive normal network (LDNN) [1], achieves state-of-the-art results on many image segmentation, edge detection, and scene labeling applications, including segmentation of various structures in EM images. Here, we briefly describe our experiments on cell membrane segmentation using two EM datasets.
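To illustrate the multi-resolution contextual idea (this is a toy sketch, not the actual CHM/LDNN implementation), the code below trains a per-pixel classifier at a coarse scale and feeds its upsampled probability map back as a context feature to a finer-scale classifier. The synthetic "membrane" image, the tiny logistic-regression classifier, and all parameter values are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, steps=500):
    """Minimal per-pixel logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                 # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def downsample(img, f=2):
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f=2):
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

# Synthetic "EM image": a bright vertical membrane on a noisy dark background.
img = np.zeros((32, 32))
img[:, 14:18] = 1.0
img += 0.1 * rng.standard_normal(img.shape)
gt = np.zeros((32, 32), dtype=int)
gt[:, 14:18] = 1

# Coarse level: classify the downsampled image on intensity alone.
img_c = downsample(img)
gt_c = downsample(gt.astype(float)) > 0.5
wc, bc = train_logreg(img_c.reshape(-1, 1), gt_c.reshape(-1).astype(float))
prob_c = predict(img_c.reshape(-1, 1), wc, bc).reshape(img_c.shape)

# Fine level: intensity plus the upsampled coarse probability as context.
ctx = upsample(prob_c)
Xf = np.stack([img.reshape(-1), ctx.reshape(-1)], axis=1)
wf, bf = train_logreg(Xf, gt.reshape(-1).astype(float))
prob_map = predict(Xf, wf, bf).reshape(img.shape)  # final probability map
```

The coarse-level output acts as a spatial prior for the fine level, which is the essence of the hierarchical contextual cascade; the real CHM uses many more features, levels, and a far stronger classifier.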


Mouse neuropil dataset

This dataset is a stack of 70 images from the mouse neuropil acquired using serial block face scanning electron microscopy (SBFSEM). It has a resolution of 10 × 10 × 50 nm/pixel and each 2D image is 700 by 700 pixels. An expert anatomist annotated membranes, i.e., cell boundaries, in these images. From those 70 images, 14 images were randomly selected and used for training and the 56 remaining images were used for testing. The task is to detect membranes in each 2D section.
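The random 14/56 split can be reproduced with something like the following (the seed and selection procedure are illustrative assumptions; the text only states that 14 of the 70 sections were chosen at random):

```python
import numpy as np

rng = np.random.default_rng(0)     # seed is arbitrary, for reproducibility only
indices = rng.permutation(70)      # shuffle the 70 section indices
train_idx = np.sort(indices[:14])  # 14 randomly chosen training sections
test_idx = np.sort(indices[14:])   # remaining 56 sections held out for testing
```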

Since the task is detecting the boundaries of cells, we compared our method with two general boundary detection methods: gPb-OWT-UCM (global probability of boundary followed by the oriented watershed transform and ultrametric contour maps) and boosted edge learning (BEL). The testing results for the different methods are given in Table 1. The CHM-LDNN outperforms the other methods by a notably large margin.

A few examples of the test images and the corresponding membrane detection results for the different methods are shown in the figure below. As the results show, CHM outperforms MSANN in removing undesired parts from the background and closing some gaps.


Test results on the mouse neuropil dataset. (a) Input image, (b) gPb-OWT-UCM, (c) BEL, (d) MSANN, (e) CHM-LDNN, (f) ground truth. The CHM is more successful in removing undesired parts and closing small gaps. Some of the improvements are marked with red rectangles. For the gPb-OWT-UCM method, the best threshold was picked and the edges were dilated to the true membrane thickness.


Table 1 - Testing performance of different methods for the mouse neuropil and drosophila VNC datasets.

Drosophila VNC dataset

This dataset contains 30 images from the Drosophila first instar larva ventral nerve cord (VNC) acquired using serial-section transmission electron microscopy (ssTEM). Each image is 512 by 512 pixels, and the resolution is 4 × 4 × 50 nm/pixel. The membranes were marked by a human expert in each image. We used 15 images for training and 15 images for testing. The testing performance of the different methods is reported in Table 1. CHM outperforms the other methods in terms of pixel error. A few test samples and membrane detection results for the different methods are shown in the figure below.

The same dataset was used as the training set for the ISBI 2012 EM challenge. The participants were asked to submit results on a different test set (the same size as the training set) to the challenge server. We trained the same model on all 30 images and submitted the results for the testing volume to the challenge server. The pixel errors (1 − F-value) of the different methods are reported in Table 2. CHM achieved a pixel error of 0.063, which is better than the human error, i.e., how much a second human labeling differed from the first one. It also outperformed the convolutional networks. Notably, CHM is significantly faster to train than deep neural networks (DNN): while DNN needs 85 hours on a GPU, CHM needs only 30 hours on a CPU.
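Pixel error here is 1 minus the F-value, the harmonic mean of membrane precision and recall computed over all pixels. A minimal sketch of the metric, applied to a toy prediction rather than the challenge data:

```python
import numpy as np

def pixel_error(pred, gt):
    """Pixel error = 1 - F-value on binary membrane maps (1 = membrane)."""
    tp = np.sum((pred == 1) & (gt == 1))   # membrane pixels correctly detected
    fp = np.sum((pred == 1) & (gt == 0))   # background labeled as membrane
    fn = np.sum((pred == 0) & (gt == 1))   # membrane pixels missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_value = 2 * precision * recall / (precision + recall)
    return 1.0 - f_value

# Toy example: 3 true positives, 1 false positive, 1 false negative.
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0]])
pred = np.array([[0, 1, 0, 0],
                 [0, 1, 1, 1]])
print(pixel_error(pred, gt))  # precision = recall = 0.75, so error = 0.25
```

Evaluating probability maps this way requires thresholding them to binary membrane labels first; the challenge server handles that step for submitted maps.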


Test results on the Drosophila VNC dataset. (a) Input image, (b) gPb-OWT-UCM, (c) BEL, (d) MSANN, (e) CHM, (f) ground truth. The CHM is more successful in removing undesired parts and closing small gaps. Some of the improvements are marked with red rectangles. For the gPb-OWT-UCM method, the best threshold was picked and the edges were dilated to the true membrane thickness.

Table 2 - Pixel error (1 − F-value) and training time (hours) of different methods on the ISBI challenge test set. Numbers are available on the challenge leaderboard.

References:


[1] Mojtaba Seyedhosseini, Mehdi Sajjadi, and Tolga Tasdizen. "Image segmentation with cascaded hierarchical models and logistic disjunctive normal networks." Proceedings of the IEEE International Conference on Computer Vision, 2013.


[2] Mojtaba Seyedhosseini and Tolga Tasdizen. "Semantic image segmentation with contextual hierarchical models." IEEE Transactions on Pattern Analysis and Machine Intelligence 38.5 (2016): 951-964.

© 2014 Scientific Computing and Imaging Institute