Abstract
Intensity inhomogeneity causes many difficulties in the segmentation and understanding of magnetic resonance (MR) images. Bias correction is an important method for addressing the intensity inhomogeneity of MR images before quantitative analysis. In this paper, a modified model is developed for segmenting images with intensity inhomogeneity and estimating the bias field simultaneously. In the modified model, a clustering criterion energy function is defined by considering the difference between the measured image and the estimated image in local regions. By using this difference, the modified method can obtain accurate segmentation results and an accurate estimation of the bias field. The energy function is incorporated into a level set formulation with a level set regularization term, and the energy minimization is conducted by a level set evolution process. The proposed model is first presented as a two-phase model and then extended to a multi-phase one. The experimental results demonstrate the advantages of our model in terms of accuracy and insensitivity to the location of the initial contours. In particular, our method has been applied to various synthetic and real images with desirable results.
Citation: Huang C, Zeng L (2015) An Active Contour Model for the Segmentation of Images with Intensity Inhomogeneities and Bias Field Estimation. PLoS ONE 10(4): e0120399. https://doi.org/10.1371/journal.pone.0120399
Academic Editor: Xuhui Huang, Hong Kong University of Science and Technology, HONG KONG
Received: August 21, 2014; Accepted: January 21, 2015; Published: April 2, 2015
Copyright: © 2015 Huang, Zeng. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The original images (Fig. 1 to Fig. 12) are available on Figshare. The DOI of the data is http://dx.doi.org/10.6084/m9.figshare.1300262. The original images in Fig. 13 can be downloaded from http://www.bic.mni.mcgill.ca/brainweb/.
Funding: This work is supported by the National Natural Science Foundation of China under grant (61271313) (http://www.nsfc.gov.cn), National Instrumentation Program of China (2013YQ030629) (http://www.most.gov.cn), and Chongqing science and technology research plan project (cstc2012gg-yyjs70016) (http://www.ctin.ac.cn/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Image segmentation still plays an important role in image understanding and computer vision. Active contour models (ACMs) have been widely applied to image segmentation since their introduction [1]. ACMs can obtain closed object contours as segmentation results, which can be conveniently used for shape analysis and recognition. The active contours can utilize various types of prior knowledge, such as image intensity distribution information, boundary shape information, and texture information [2–4], to obtain accurate results for object boundaries in image analysis.
ACMs can be categorized as edge-based models [5–8] or region-based models [9–15]. Edge-based models often use an image gradient to force the active contours to move toward the desired object’s boundaries. These models are typically sensitive to noise, and weak boundaries, which have small gradient values, may cause edge leakage. Region-based models use image statistical information to attract the active contours to the object boundaries. They outperform edge-based models in many cases, such as computed tomography (CT) and magnetic resonance (MR) images. However, traditional region-based models rely on the assumption that the intensity of images is homogeneous, which is not suitable for images with intensity inhomogeneity. For example, Chan and Vese proposed the Chan-Vese (CV) model [10], or piecewise constant (PC) model, under the assumption that an image consists of two statistically homogeneous regions and has a distinct mean pixel intensity in each region.
Intensity inhomogeneities often occur in real images, such as CT and MR images. Jungke et al. [16] illustrated that the most important problem in brain MR image segmentation is the occurrence of intensity inhomogeneities. Spatial intensity inhomogeneities are generally related to the properties of the MR imaging device and include static field inhomogeneity, bandwidth filtering of the data, eddy currents driven by field gradients, and especially radio frequency (RF) transmission and reception inhomogeneity [17]. Spatial intensity inhomogeneity causes many difficulties for MR image segmentation. Many methods have been applied to images with intensity inhomogeneity [2, 4, 18–22]. However, these methods do not consider the correction of the bias field, which is critical for particular clinical diagnoses. Bias field correction methods have been widely studied in recent years [23–26]. The most popular methods for bias field estimation are based on image segmentation [27–30]. In these methods, bias field estimation and segmentation are performed simultaneously in each iteration to obtain a final optimal solution. To solve the segmentation problem under intensity inhomogeneity and estimate the bias field, Li et al. [29] proposed a novel variational level set model that uses the weighted K-means clustering method to evaluate the bias field of image intensities in a neighborhood around each point in the image domain. By considering the retinex model [31], Li’s model can obtain accurate segmentation results and bias field estimates in the presence of intensity inhomogeneity, such as in MR and camera images. Some related methods with capabilities similar to Li’s model in handling intensity inhomogeneity have been proposed [2, 18, 30]. Zhang et al. proposed a locally statistical active contour model (LSACM) to segment images with intensity inhomogeneity and perform bias correction [32], which performs well in bias correction.
However, these models are sensitive to the location of the initial contours [33].
In this paper, we introduce the local regional difference to Li’s model. With this regional difference, the modified model can improve both the accuracy of the segmentation results for images with intensity inhomogeneity and the estimation of the bias field. We define a new local clustering criterion by collecting the local region difference in the entire image domain. Under this local clustering criterion, the bias field and segmentation results can be effectively corrected in each iteration for different initial contours. Experiments demonstrate that our model can obtain more accurate results.
The remainder of this paper is organized as follows. In the next section, we review some well-known region-based models and their limitations and then present our modified model and the numerical algorithm. The Results and Discussion section presents and discusses the experimental results. Finally, the Conclusions section presents the conclusions of this study.
Models
The piecewise constant (PC) model
Chan and Vese proposed the CV model [10] to solve the segmentation of two-phase images whose mean intensities can be distinct. The main concept of the CV model is to search for a particular partition of a given image I(x) into two regions, one representing the objects to be detected and the other representing the background. For the given image I(x), they proposed to minimize the following energy functional [10]
E(C, c1, c2) = λ1 ∫in(C) ∣I(x) − c1∣² dx + λ2 ∫out(C) ∣I(x) − c2∣² dx + ν⋅Length(C) (1)
where λ1 and λ2 are positive constants, ν ≥ 0, in(C) and out(C) represent the inner and outer regions of the contour C, respectively, and c1 and c2 are two constants representing the mean image intensities in in(C) and out(C), respectively. Thus, the CV model is also called the piecewise constant (PC) model. The first and second terms of (1) are the inner and outer data fidelity terms of the contour C, respectively. The third term of (1) is the length term, which is used to regularize the contour C. The CV model performs well in image segmentation due to its ability to detect objects whose boundaries are either smooth or not necessarily defined by a gradient and its larger convergence range; moreover, it is less sensitive to the initialization. However, when the intensities inside or outside of the curve C are not homogeneous, the constants c1 and c2 may not accurately describe the variation in the local region; thus, the CV model may fail to segment images with intensity inhomogeneity (Fig. 1(b), (d)).
(a), (c) The original image with red initial contours. (b), (d) Segmentation results of the CV model.
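Concretely, the piecewise constants c1 and c2 in (1) are simply the mean intensities inside and outside the contour. A minimal sketch (the function name is ours, not the authors' code), shown on a genuinely homogeneous two-region image where the PC assumption holds:

```python
import numpy as np

def cv_region_means(I, phi):
    # Piecewise constants of the CV model: c1 = mean intensity inside the
    # contour ({phi > 0}), c2 = mean intensity outside ({phi < 0}).
    inside = phi > 0
    c1 = I[inside].mean()
    c2 = I[~inside].mean()
    return c1, c2

# Demo: a homogeneous two-region image (bright square on a dark background).
I = np.zeros((32, 32)); I[8:24, 8:24] = 200.0
phi = -np.ones_like(I); phi[8:24, 8:24] = 1.0  # contour around the bright square
c1, c2 = cv_region_means(I, phi)
```

On an inhomogeneous image, these two global means no longer describe either region well, which is exactly the failure mode shown in Fig. 1(b), (d).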
Li’s model
Li et al. [29] considered the bias field in the local region to segment an image with intensity inhomogeneity and estimate the bias field. Based on the retinex theory [31], the considered bias field model can be written as follows:
I(x) = b(x)J(x) + n(x) (2)
where I is an image to be measured, b is the bias field, J is the true image and n is the additive noise. The assumptions for the true image J and bias field b are as follows:
- (A1) The bias field b varies slowly over the entire image domain.
- (A2) The true image intensities J are approximately constant within each class of tissue, i.e., J(x) ≈ ci for x ∈ Ωi, where {Ωi}, i = 1, …, N, is a partition of Ω.
Let Ωx = {y: ∣y−x∣ ≤ ρ} be the neighborhood of x in the image domain with a small radius ρ, and let Ωx∩Ωi represent the partition of Ωx produced by the i-th partition Ωi of the image. Based on assumption (A1), the value b(y) for all y ∈ Ωx is close to b(x); then, in the small region Ωx∩Ωi, the product of the bias field b(y) and the image intensity J(y) can be approximated by b(y)J(y) ≈ b(x)ci according to assumption (A2). Using the K-means clustering method, they considered all N partitions of the image and defined a local energy function as follows:
Ex = Σi=1,…,N ∫Ωi Kσ(x−y) ∣I(y) − b(x)ci∣² dy (3)
where Kσ(s) is a weighting function, which can be expressed as a Gaussian kernel function with standard deviation σ:
Kσ(s) = (1/a) exp(−∣s∣²/(2σ²)) for ∣s∣ ≤ ρ, and Kσ(s) = 0 otherwise, where a is a normalizing constant such that ∫Kσ(s)ds = 1 (4)
To find an optimal partition of the entire image domain Ω, an overall energy over all x is defined as E(b, c, {Ωi}) = ∫Ω Ex dx.
When considering the case in which N = 2 in Li’s model, after introducing the level set function ϕ(x), the overall energy can be written as
E(ϕ, b, c1, c2) = ∫Ω ( Σi=1,2 ∫ Kσ(x−y) ∣I(y) − b(x)ci∣² ui(ϕ(y)) dy ) dx
where u1(ϕ(x)) and u2(ϕ(x)) are the membership functions of each cluster, u1(s) = Hε(s) and u2(s) = 1−Hε(s), where Hε(s) is a smoothed version of the Heaviside function (ε > 0). For fixed ϕ, c1 and c2, the optimal bias field b can be computed by minimizing the local energy Ex in (3) as follows:
b(x) = (Kσ ∗ (I⋅J⁽¹⁾))(x) / (Kσ ∗ J⁽²⁾)(x), with J⁽¹⁾ = c1u1(ϕ) + c2u2(ϕ) and J⁽²⁾ = c1²u1(ϕ) + c2²u2(ϕ) (5)
where ‘∗’ denotes the convolution operation. Similarly, the optimal c1 and c2 can be computed by
ci = ∫(Kσ ∗ b)(y) I(y) ui(ϕ(y)) dy / ∫(Kσ ∗ b²)(y) ui(ϕ(y)) dy, i = 1, 2 (6)
In this case, the image domain Ω is divided into two regions, Ω1 = {ϕ > 0} (objects) and Ω2 = {ϕ < 0} (background). Because the local image intensity information is embedded into the energy function, Li’s method can address some types of images with intensity inhomogeneity; however, it still has inherent drawbacks. From (5) and (6), the intensity means ci (i = 1, 2) and the bias field b are coupled; thus, the estimates of c1 and c2 are critical for obtaining a better estimate of the bias field b. However, when the object intensity is close to that of the background in a local region, the estimation of the bias field may be inaccurate; thus, estimating the bias field using only the mean intensities in the local region is not sufficient. As shown in Fig. 2, Li’s method can obtain the correct segmentation (Fig. 2(b)) when the initial contour is located in the inner part of the object (Fig. 2(a)). However, when the initial contours contain both object and background (Fig. 2(d)), Li’s method fails to segment the object (Fig. 2(e)), and the false segmentation result leads to a worse estimation of the bias field (Fig. 2(f)). Similar results can also be seen in Fig. 3. In other words, Li’s method may fall into local minima [34] and is sensitive to the location of the initial contour; thus, the segmentation results and bias field estimation may be inaccurate in some cases.
(a), (d) The original image with red initial contours. (b), (e) Segmentation results of Li’s model. (c), (f) The bias field estimation of Li’s model.
(a), (d) The original image with red initial contours. (b), (e) The final segmentation results of Li’s model. (c), (f) The bias field estimation of Li’s model.
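Assuming the standard form of the alternating updates (5) and (6), one iteration of Li's scheme can be sketched as follows; this is not the authors' implementation, and scipy's `gaussian_filter` stands in for convolution with Kσ (the ratio in (5) is insensitive to the kernel's normalization, since the same kernel appears in numerator and denominator):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def update_bias_and_means(I, u1, u2, c1, c2, sigma=3.0):
    # (5): optimal bias field for fixed memberships u_i and means c_i.
    J1 = c1 * u1 + c2 * u2            # sum_i c_i * u_i
    J2 = c1**2 * u1 + c2**2 * u2      # sum_i c_i^2 * u_i
    b = gaussian_filter(I * J1, sigma) / (gaussian_filter(J2, sigma) + 1e-10)
    # (6): optimal means for the fixed bias field.
    bK = gaussian_filter(b, sigma)      # K_sigma * b
    b2K = gaussian_filter(b**2, sigma)  # K_sigma * b^2
    c1 = (bK * I * u1).sum() / ((b2K * u1).sum() + 1e-10)
    c2 = (bK * I * u2).sum() / ((b2K * u2).sum() + 1e-10)
    return b, c1, c2

# Demo: a homogeneous two-region image with no actual bias (b should be ~1).
u1 = np.zeros((20, 20)); u1[:, :10] = 1.0
u2 = 1.0 - u1
I = 100.0 * u1 + 50.0 * u2
b, c1, c2 = update_bias_and_means(I, u1, u2, 100.0, 50.0)
```

When the memberships are wrong, as with the initial contour in Fig. 2(d), these means are pulled toward the wrong values and the bias field absorbs the error, which is the sensitivity discussed above.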
Our model
The segmentation result may affect the bias field correction, and using only the local region intensity means and the bias field, as in Li’s model, is not sufficient to approximate the measured image well. Thus, motivated by the contributions and methods of [27–30], we present a new ACM that segments images with intensity inhomogeneity and estimates the bias field; this model incorporates the local difference information between the measured image and Li’s estimate. In Li’s model, the segmentation result is essential for the estimation of the true image J and the bias field: an accurate segmentation result yields an accurate bias field estimate, whereas a poor segmentation result does not. In our model, we introduce the difference in the local regions of the image domain to improve the accuracy of the segmentation result and the bias field estimation. For an input image I, the model can be described as follows:
(7)
where d is the difference between the measured image I and the approximation b⋅J in the local region. According to this model, a K-means clustering-based local energy function of our model is defined as follows:
(8)
where Kσ(s) is the Gaussian kernel function with standard deviation σ. By introducing the level set function ϕ(x) and considering all pixels in the image domain, the overall energy can be written as
where ui is the membership function and c = (c1, c2, …, cN). For fixed ϕ, c and d, we find the optimal bias field b that minimizes Ex in (8):
(9)
where ‘∗’ is the convolution operation and ui(ϕ) is the membership function of the partition Ωi. Similarly, the optimal c and d can be obtained from (8):
(10)
where and .
During curve evolution (of the zero level set), the level set function should be kept as an approximate signed distance function, especially in the neighborhood of the zero level set [35]. In general level set methods, the level set function is initialized as a signed distance function and must be re-initialized after some evolution steps, but the re-initialization step is time-consuming and not always effective. Li et al. proposed a regularization term [36] as a penalty term to eliminate the re-initialization step. The regularization term can be written as follows:
P(ϕ) = ∫Ω (1/2)(∣∇ϕ(x)∣ − 1)² dx
This regularization term forces the level set function to remain close to a signed distance function during curve evolution. We also adopt the length term of the CV model for more accurate computation,
L(ϕ) = ∫Ω ∣∇Hε(ϕ(x))∣ dx
so the full regularization consists of the weighted terms P(ϕ) and L(ϕ). The final energy functional can be written as follows:
F(ϕ, b, c, d) = E(ϕ, b, c, d) + α⋅P(ϕ) + β⋅L(ϕ) (11)
where E is the clustering criterion energy, P(ϕ) is the signed distance penalty term, L(ϕ) is the length term, and α and β are positive weights.
The level set variation formulation of our model
In level set methods, the evolving contour (the object contour) is represented by the zero level set ϕ = 0 in the level set formulation. The cases N = 2 and N > 2 in the energy correspond to the two-phase and multi-phase formulations, respectively. We consider each case in the following subsections.
Two-phase level set variation formulation.
For two-phase (N = 2) images, the whole image domain Ω consists of foreground and background. The energy functional can be written as follows:
(12)
where u1 and u2 are the membership functions of Ω1 and Ω2, and b, c1, c2 and d are given as follows:
(13)
Note 1: The difference d is a matrix of dimension M × N ([M, N] = size(I)). If the local difference matrix d = 0, our method reduces to a form similar to Li’s method in [29]; however, even an accurate approximation of the measured image leaves residual differences. Thus, for d ≠ 0, the local difference between the image I(x) and the approximation b(y)ci in each Ωx∩Ωi can be corrected in our model. If d is large in the local region of x, the measured image I(x) is not well approximated by the local bias field and local piecewise constant, and the active contours will move toward the regions that reduce d. Therefore, the proposed method obtains more accurate results in each iteration because the local difference in each cluster of the image is considered.
Note 2: Wherever the initial contours are located, the difference d can always be collected in each local region and driven, through the estimates b and ci, toward smaller values between the measured image and the approximation bci, which makes our model insensitive to the initialization of the active contour.
Note 3: The size of the Gaussian window is also important for accurate segmentation and bias field correction. In our experiment, we choose a truncated Gaussian window of size (4k+1)×(4k+1), where k is the greatest integer smaller than σ. Thus, the choice of σ is related to the size of the Gaussian window.
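The window construction in Note 3 can be sketched as follows (a sketch; the normalization choice is ours, and it does not affect the ratios in the model's updates):

```python
import numpy as np

def truncated_gaussian_window(sigma):
    # k is the greatest integer strictly smaller than sigma, giving a
    # (4k+1) x (4k+1) truncated Gaussian window, normalized to sum to 1.
    k = int(np.ceil(sigma)) - 1
    r = 2 * k
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    K = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return K / K.sum()

K = truncated_gaussian_window(3.0)  # sigma = 3, the default in the experiments
```

For σ = 3 this gives a 9×9 window; larger σ enlarges the neighborhood over which the bias field is assumed slowly varying, at a higher computational cost.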
In order to facilitate the numerical simulation, we use the membership functions u1(s) = Hε(s) and u2(s) = 1−Hε(s), where Hε(s) = (1/2)[1 + (2/π)arctan(s/ε)] is the smoothed version of the Heaviside function and δε(s) = Hε′(s) = ε/[π(ε² + s²)] is the corresponding smoothed Dirac delta function. By using the gradient flow method [37], the variational equation for the level set function of (12) can be written as
(14)
where:
and b, ci and d are given by (13). The corresponding initial condition and boundary condition are as follows:
ϕ(x, 0) = ϕ0(x) in Ω, ∂ϕ/∂n = 0 on ∂Ω (15)
where n denotes the exterior normal to the image boundary ∂Ω.
We use the explicit finite difference scheme to discretize the level set Equation (14) as follows:
ϕk+1 = ϕk + Δt⋅L(ϕk), where L(ϕk) denotes the right-hand side of (14) evaluated at ϕk (16)
where ϕk represents the values of the level set function in the k-th iteration and Δt is the time step of the evolving contours.
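A commonly used smoothed Heaviside/delta pair (the arctan form is standard in this family of models) and the explicit update (16) can be sketched as:

```python
import numpy as np

def H_eps(s, eps=1.0):
    # Smoothed Heaviside: H_eps(s) = 0.5 * (1 + (2/pi) * arctan(s/eps)).
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(s / eps))

def delta_eps(s, eps=1.0):
    # Its derivative, the smoothed Dirac delta.
    return eps / (np.pi * (eps**2 + s**2))

def explicit_step(phi, L_phi, dt=0.1):
    # One explicit step of (16): phi^{k+1} = phi^k + dt * L(phi^k),
    # where L_phi is the evaluated right-hand side of (14).
    return phi + dt * L_phi
```

Being explicit, the scheme needs a small time step for stability; the experiments below use Δt = 0.1 and ε = 1.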
Multi-phase level set variation formulation.
For the case N > 2, we can obtain a multi-phase level set formulation similar to the case N = 2. The difference is that we need n level set functions ϕ1, ϕ2, …, ϕn to represent N = 2^n regions Ω1, Ω2, …, ΩN. The corresponding membership functions ui can be written as
(17)
We denote Φ = (ϕ1(x), ϕ2(x), …, ϕn(x)) for simplicity, and the membership function ui(ϕ1(x), ϕ2(x), …, ϕn(x)) can be written as ui(Φ). We focus on the case N = 3 in this paper, in which two level set functions ϕ1 and ϕ2 define the partition of the image domain through the membership functions u1(Φ) = Hε(ϕ1)Hε(ϕ2), u2(Φ) = Hε(ϕ1)(1−Hε(ϕ2)) and u3(Φ) = 1−Hε(ϕ1). Similar to the two-phase case, the final energy for the multi-phase image can be written as
(18)
where b, c, and d can be calculated from (13). Minimizing the energy functional in Equation (18) with respect to Φ = (ϕ1, ϕ2), we obtain the following gradient descent flow equations:
(19)
(20)
The numerical algorithm consists of the following 6 steps (taking the three-phase case as an example):
- Set k = 1 and initialize the level set functions ϕ1 and ϕ2 to be binary functions: ϕi(x) = c0 for x ∈ Ri and ϕi(x) = −c0 otherwise (i = 1, 2), where c0 is a positive constant and R1 and R2 are arbitrarily given regions in the image domain.
- Initialize the bias field b and local difference matrix d.
- Compute b, c and d from (13).
- Solve the level set function ϕ1 from (19).
- Solve the level set function ϕ2 from (20).
- Check whether the evolution is converged. If not, set k = k+1 and return to step 3.
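The six steps above can be sketched as a driver loop; `update_bcd` and `evolve` are caller-supplied stand-ins for the paper's update formulas (13) and the gradient flows (19)-(20), and all names are ours:

```python
import numpy as np

def segment_three_phase(I, R1, R2, update_bcd, evolve, c0=2.0, max_iter=200, tol=1e-3):
    # Step 1: set k = 1 and initialize phi1, phi2 as binary functions
    # (+c0 inside R1/R2, -c0 outside).
    phi1 = np.where(R1, c0, -c0).astype(float)
    phi2 = np.where(R2, c0, -c0).astype(float)
    # Step 2: initialize the bias field b and local difference matrix d.
    b = np.ones_like(I, dtype=float)
    d = np.zeros_like(I, dtype=float)
    for k in range(max_iter):
        # Step 3: compute b, c and d (the paper's (13)).
        b, c, d = update_bcd(I, phi1, phi2, b, d)
        # Steps 4-5: evolve phi1 and phi2 (the paper's (19) and (20)).
        new1, new2 = evolve(I, phi1, phi2, b, c, d)
        # Step 6: check convergence; if not converged, continue with k+1.
        change = max(np.abs(new1 - phi1).max(), np.abs(new2 - phi2).max())
        phi1, phi2 = new1, new2
        if change < tol:
            break
    return phi1, phi2, b, d

# Demo with trivial stand-in updates that converge immediately.
I = np.zeros((8, 8))
R1 = np.zeros((8, 8), dtype=bool); R1[2:6, 2:6] = True
R2 = np.zeros((8, 8), dtype=bool)
upd = lambda I, p1, p2, b, d: (b, (0.0, 0.0, 0.0), d)
evo = lambda I, p1, p2, b, c, d: (p1, p2)
phi1, phi2, b, d = segment_three_phase(I, R1, R2, upd, evo)
```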
Performance evaluations
In this paper, we use the Jaccard similarity (JS) [17, 38], the Dice similarity coefficient (DSC) [17, 39], the false positive ratio (RFP), and the false negative ratio (RFN) to quantitatively compare the segmentation performance of the models. These metrics are defined as:
JS = N(O)/N(Sg∪Sm), DSC = 2N(O)/(N(Sg)+N(Sm)), RFP = N(Sm−O)/N(Sg), RFN = N(Sg−O)/N(Sg) (21)
where N(⋅) represents the number of pixels in a region, Sg indicates the foreground of the ground truth image, Sm stands for the foreground obtained by the models, and O is the common part of Sg and Sm. The closer the JS and DSC values are to 1, and the RFP and RFN values to 0, the better the segmentation results.
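With O = Sg∩Sm, the four metrics can be computed directly from binary masks (a sketch; normalizing RFP and RFN by N(Sg) is our reading of (21)):

```python
import numpy as np

def segmentation_metrics(Sg, Sm):
    # Sg: ground-truth foreground mask, Sm: model foreground mask,
    # O = Sg & Sm is their common part.
    Sg, Sm = Sg.astype(bool), Sm.astype(bool)
    O = Sg & Sm
    js = O.sum() / (Sg | Sm).sum()               # Jaccard similarity
    dsc = 2.0 * O.sum() / (Sg.sum() + Sm.sum())  # Dice similarity coefficient
    rfp = (Sm & ~Sg).sum() / Sg.sum()            # false positive ratio
    rfn = (Sg & ~Sm).sum() / Sg.sum()            # false negative ratio
    return js, dsc, rfp, rfn

# Demo: the model recovers only the top half of an 8-pixel ground truth.
Sg = np.zeros((4, 4), dtype=bool); Sg[:2, :] = True
Sm = np.zeros((4, 4), dtype=bool); Sm[0, :] = True
js, dsc, rfp, rfn = segmentation_metrics(Sg, Sm)
```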
Statistical Analysis
Statistical analysis is performed using the statistical software MedCalc [40]. To assess the performance evaluations of segmentation quality (JS, DSC, RFP, RFN) presented in (21), tests of statistical significance are performed using 120 simulated MR images. First, we perform the F-test [41]. If the associated (two-sided) P-value is less than the conventional 0.05, the null hypothesis is rejected and the conclusion is that the two variances do indeed differ significantly. In that case (P<0.05), the variances of the two samples cannot be assumed to be equal, and the t-test with a correction for unequal variances (Welch’s t-test [42]) should be used. The variables are expressed as Mean ± SD (standard deviation). For Welch’s t-test, when the P-value is less than the conventional 0.05, the null hypothesis is rejected and the conclusion is that the two means do indeed differ significantly.
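The same procedure, an F-test on the variances followed by the appropriate t-test, can be reproduced with scipy instead of MedCalc (a sketch of the workflow, not the software used in the paper):

```python
import numpy as np
from scipy import stats

def compare_samples(a, b, alpha=0.05):
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Two-sided F-test on the ratio of sample variances.
    F = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    p_f = 2.0 * min(stats.f.cdf(F, dfa, dfb), stats.f.sf(F, dfa, dfb))
    # If the variances differ (P < alpha), use Welch's t-test
    # (equal_var=False); otherwise the ordinary two-sample t-test.
    equal_var = p_f >= alpha
    t, p_t = stats.ttest_ind(a, b, equal_var=equal_var)
    return p_f, p_t, equal_var

# Demo: two samples with identical spread and slightly shifted means.
a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [1.1, 2.1, 3.1, 4.1, 5.1]
p_f, p_t, equal_var = compare_samples(a, b)
```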
Results and Discussion
In this section, the two-phase level set formulation of the proposed method is tested with synthetic and real images. All of the experiments were conducted in the MATLAB 7.14 (64bit) programming environment on a personal computer with an Intel Core 2 Duo 2.80 GHz CPU, 4 GB RAM, and Windows 7 (64bit) operating system. In our experiments, we use the following default settings of the parameters for our method unless otherwise specified: σ = 3 (3 ≤ σ ≤ 10), ε = 1, time step Δt = 0.1, α = 0.1/Δt, and β = 0.003 × 255 × 255. Most of the original images in the experiments can be found at the website http://www.engr.uconn.edu/~cmli/.
The next experiment considers the segmentation of the same image as in Fig. 1 (shown in Fig. 4). The T-shaped image is a real image with intensity inhomogeneity whose size is 127×96. The initial active contours are set inside the object domain and contain part of the background. Our method outperforms Li’s model (the code was downloaded from [44]) in some cases. Because the local regional difference is considered, incorrect estimations of the true image J in the local region can be corrected in each iteration, which makes the method insensitive to the initial contours in our experiments. Fig. 4 indicates that even when the initial contours located inside the objects contain background, the segmentation results and bias field estimates are nearly the same. We use the absolute value of the local difference d, shown as gray images (Fig. 4(c) and Fig. 4(g)), to describe the discrepancy between the measured image I and the estimate bJ. In Figs. 4(c) and 4(g), ∣d∣ is often large in highlighted regions or regions with similar intensities; in these regions, the difference between I and bJ must be corrected to obtain better estimates of b and J. A quantitative comparison using the above metrics on the second rows of Fig. 2 and Fig. 4 shows that the JS and DSC values of our method are higher than those of Li’s method; the RFP values are equal, indicating that the false positive regions of the two methods are the same, whereas the RFN value of our model (0.0068) is smaller than that of Li’s method (0.3186), indicating that our method achieves more accurate segmentation results (see Table 1).
(a), (e) The original image with red initial contours. (b), (f) The final segmentation results of our model. (c), (g) The gray images of ∣d∣. (d), (h) The estimation of the bias field in our model.
Fig. 5 shows the segmentation results and bias field correction obtained by our model for the synthetic image with intensity inhomogeneity shown in Fig. 3. For the distinct initial contours in Figs. 5(a) and 5(e), the corrected images are shown in Figs. 5(d) and 5(h), and the histograms of the bias-corrected images for the different initial contours are shown in Figs. 5(j) and 5(k); the corrected image is of higher quality than the original image (Fig. 5(g)). The two histograms of the bias-corrected image with different initial contours are nearly identical.
(a), (e) The original image with red initial contours. (b), (f) The final segmentation results of our model. (c), (g) The estimation of the bias field. (d), (h) The corrected image. (i) The histogram of the original image. (j) The histogram of the corrected image with initial contour (a). (k) The histogram of the corrected image with initial contour (e).
Fig. 6 shows the segmentation results for a synthetic image (the image size is 79×75) with stronger intensity inhomogeneity obtained with Li’s method, LSACM (the code can be downloaded from [45]) and our model. In this experiment, we chose β = 0.007×255×255. The image contains three objects with a highlight on the left, and the light also causes the boundary to be fuzzy in the lower region of the star-shaped object. For Li’s method, the estimation of the true image J (piecewise constants) may not be accurate in the fuzzy boundary region, and thus, the estimation of the bias field may also be inaccurate. The segmentation results obtained with Li’s method, LSACM and the proposed method for different initial contours are shown in columns 2, 3 and 4, respectively. Li’s method fails to segment the object boundaries or estimate the bias field in the fuzzy, highlighted region even when the initial contour crosses the three objects. LSACM can obtain the correct segmentation result for the initial contour shown in Fig. 6(a), but for the initial contour in Fig. 6(e), LSACM gives undesired results even after more than 2000 iterations. The experimental results of our method are more accurate than those obtained with Li’s method and more robust than those of LSACM. In Figs. 6(d) and 6(h), the final contours of our method converge precisely to the correct boundaries. The bias field estimations and corrected images of Li’s model, LSACM and our model are shown in Fig. 7. In the highlighted region, the image is not well corrected by Li’s method; LSACM and our method produce similar bias field estimations based on the correct segmentations.
(a), (e) The original image with red initial contours. (b), (f) The results of Li’s model. (c), (g) The results of LSACM. (d), (h) The results of our model.
(a), (b), (c) The estimated bias fields of Li’s model, LSACM and our model, respectively. (d), (e), (f) The corrected images of Li’s model, LSACM and our model, respectively.
The quality of MR images is highly dependent on the coil used to receive the RF signal emitted from the patient [43]. Fig. 8 shows a comparison between Li’s method and our method of the segmentation results for different initial contours on an MR brain image (the size is 109×119) with a tumor, obtained from the Internet. A small black spot is located at the center of the tumor. The first row shows the segmentation results for the initial contour in Fig. 8(a), and the second row shows the results for the initial contour in Fig. 8(d). Columns 1 to 3 are the original image with red initial contours, the segmentation results of Li’s method and the results of our method, respectively. Li’s method fails to capture the tumor and spot boundaries with the initial contour shown in Fig. 8(a). The tumor boundaries obtained by Li’s model are also inaccurate when the initial contour lies inside the tumor (Fig. 8(d)). Our method can obtain the boundaries of the tumor and the small spot accurately because it collects the local regional difference over the entire image domain.
(a), (d) The original image with initial contours. (b), (e) The results of Li’s model. (c), (f) The results of our model.
To test the impact of different σ in the Gaussian kernel function on the segmentation results and corrected images, we compare the experimental results for an MR image obtained with Li’s method and our method for different σ. From assumption (A1), the local region difference d is collected more accurately when the size of the Gaussian window is small. Thus, in this case, more details can be considered, and the segmentation may contain more details. In contrast, choosing a larger Gaussian window size may lead to a less accurate segmentation result and increased computational cost. To record the computational cost for each method, we set the iteration number n = 200. Fig. 9 shows the comparison between Li’s method and our method with different scales σ = 3, 10. The original MR image with the initial contour is shown in the first row of Fig. 9; the size of the image is 180×107, and columns 1, 2 and 3 provide the final segmentation results, the estimated bias fields and the corrected images, respectively. Rows 2 and 3 are the final results of Li’s method, and rows 4 and 5 are the results of our method; rows 2 and 4 use the scale σ = 3, whereas rows 3 and 5 use σ = 10. Fig. 9 illustrates that Li’s method fails to segment the small gray object at the center of the MR image for both σ = 3 and σ = 10. Our method can accurately segment the small object for σ = 3 and captures it in further detail when σ = 10. Furthermore, the corrected images obtained with our method are superior to those obtained with Li’s method. Fig. 9 also shows that the corrected image obtained with our method for σ = 3 is better than that for σ = 10; thus, the results may be more accurate for smaller σ than for larger σ. Table 2 compares the CPU time of Li’s model and our model for different scales σ. The CPU time for σ = 10 is approximately twice that for σ = 3.
Our model is more computationally burdensome than Li’s model because of the local regional difference estimation in each iteration.
Row 1 is the original image with red initial contours. The first column shows the final segmentation results, the second column the estimated bias fields, and the third column the corrected images. Rows 2 and 3 are the corresponding results of Li’s model for σ = 3 and σ = 10, and rows 4 and 5 are the final results of our model for σ = 3 and σ = 10, respectively.
For observed images synthesized from the retinex model [31], we compare the performance of Li’s model, LSACM and our model on two synthesized images (whose sizes are 50×50 and 64×61) multiplied by the bias fields shown in Fig. 10. Column 1 in Fig. 10 shows the synthesized images (rows 1 and 4), the corresponding multiplicative bias fields (rows 2 and 5) and the initial contours on the multiplied images (rows 3 and 6). Columns 2 to 4 show the segmentation results, the estimated bias fields and the corrected images. Rows 1 and 4 are the results of Li’s model, rows 2 and 5 show the results of LSACM, and rows 3 and 6 are the results of our model. As shown in Fig. 10, the three models obtain similar bias fields when the segmentations are correct (rows 1, 2 and 3). However, for the initial contour shown in row 6 (column 1), the estimated bias field of our model is more similar to the given bias field than those of the other two models (rows 4, 5 and 6). That is to say, our model may perform better than Li’s model and LSACM in this case.
Column 1 shows the original images (rows 1, 4), the given bias fields (rows 2, 5) and the initial contours on the images multiplied by the bias fields (rows 3, 6). Columns 2 to 4 show the segmentation results, estimated bias fields and corrected images, respectively. Rows 1 and 4 are the results of Li’s model, rows 2 and 5 are the results of LSACM, and rows 3 and 6 are the results of our model.
In the following experiments, we compare our model with Li’s model [29] and LSACM [30] (the code was downloaded from [45]) on multi-phase MR images. Fig. 11 shows the segmentation and bias-correction results for a 3T MR image (from [45]; the image size is 174×238), which contains white matter (WM), gray matter (GM), cerebrospinal fluid (CSF) and background (CSF is usually treated as background in our method). We use a red contour to represent ϕ1 = 0 and blue to represent ϕ2 = 0. The first column shows the original image and initial contours, the second column shows the segmentation results, and the third and fourth columns show the corrected images and bias fields, respectively. The first, second and third rows show the results of Li’s method, LSACM and our method, respectively. As seen in Fig. 11, our method and LSACM capture more CSF than Li’s method, and our method obtains more accurate GM (WM) boundaries in the center of the image than the other two methods. The corrected images of LSACM and our method appear similar and are better than those of Li’s method. For the brain MR image in Fig. 12, the original image (the image size is 141×202) and initial contours are shown in column 1; rows 1, 2 and 3 are the results of Li’s method, LSACM and our method, respectively. LSACM can obtain accurate boundaries of the WM (blue contours), but its segmentation of the CSF is unexpected (red contours), so the WM and GM cannot be well separated. Li’s method cannot segment the GM in the center of the image. Fig. 11 and Fig. 12 show that our method is more capable of WM and GM segmentation. The corrected images of Li’s method and our method appear similar and more vivid than those of LSACM.
Column 1 shows the original image with the red and blue initial contours. Columns 2 to 4 show the final segmentation results, the corrected images and the estimated bias field images, respectively. Rows 1 to 3 are the results of Li’s model, LSACM and our model, respectively.
Column 1 shows the original image with the red and blue initial contours. Columns 2 to 4 show the final segmentation results, the corrected images and the estimated bias field images, respectively. Rows 1 to 3 are the results of Li’s model, LSACM and our model, respectively.
To further support these conclusions, we compare the performance of WM and GM segmentation on simulated MR image volumes of a normal brain (the data can be downloaded from [46]; the size of each slice is 217×181). We use 120 slices that contain both WM and GM for the comparison. As shown in Fig. 13, three original images are shown in column 1, column 2 shows the ground truth of the WM and GM, and columns 3 to 5 show the results of Li’s method, LSACM and our method, respectively. Rows 1, 3 and 5 are WM, and rows 2, 4 and 6 are GM; all WM and GM are displayed in white. As mentioned above, we first perform the F-test. Since the P-values are lower than 0.05, the variances of these samples cannot be assumed to be equal, so we perform Welch’s t-test. The statistical analyses of segmentation accuracy for WM and GM under the different methods are summarized in Table 3 and Table 4, respectively. The variables are expressed as Mean±SD. From Table 3, there are obvious statistical differences in the JS values between Li’s method and LSACM (P<0.0001) and between Li’s method and our method (P = 0.0022). The JS values of LSACM (0.7146±0.0091) and our method (0.6994±0.0175) are higher than that of Li’s method (0.6560±0.0060). There is no obvious statistical difference in the JS value between LSACM and our method (P = 0.3095>0.05). There are obvious statistical differences in the DSC values between Li’s method and our method (P = 0.0001) and between LSACM and our method (P = 0.0003). The DSC value of our method (0.8620±0.0019) is higher than those of Li’s method (0.8140±0.0141) and LSACM (0.8286±0.0081). There is no obvious statistical difference in the DSC value between LSACM and Li’s method (P = 0.2826>0.05). There is no obvious statistical difference in the RFP value between Li’s method and LSACM (P = 0.0948>0.05) or between LSACM and our method (P = 0.6486>0.05). There is an obvious statistical difference in the RFP value between Li’s method and our method (P = 0.0454<0.05).
The RFP value of our method (0.1598±0.0012) is lower than that of Li’s method (0.1714±0.0028). There is no obvious statistical difference in the RFN value between Li’s method and LSACM (P = 0.3407>0.05). There are obvious statistical differences in the RFN values between Li’s method and our method (P<0.0001) and between LSACM and our method (P<0.0001). The RFN value of our method (0.1130±0.0041) is lower than those of Li’s method (0.1876±0.0238) and LSACM (0.1710±0.0127). From Table 4, there are obvious statistical differences in the JS and DSC values between any two of Li’s method, LSACM and our method (largest P = 0.0137), and the JS and DSC values of our method (JS: 0.6842±0.0014, DSC: 0.8608±0.0004) are significantly higher than those of Li’s method (JS: 0.6442±0.0048, DSC: 0.7814±0.0028) and LSACM (JS: 0.6137±0.0132, DSC: 0.7534±0.0107). There is no obvious statistical difference in the RFP value between Li’s method and LSACM (P = 0.9731>0.05) or between LSACM and our method (P = 0.0511>0.05), while there is an obvious statistical difference in the RFP value between Li’s method and our method (P = 0.0337). The RFP value of our method (0.1293±0.0007) is significantly lower than that of Li’s method (0.1360±0.0005). There are obvious statistical differences in the RFN values between any two of Li’s method, LSACM and our method (largest P = 0.0080), and the RFN value of our method (0.1480±0.0008) is significantly lower than those of Li’s method (0.2826±0.0065) and LSACM (0.3200±0.0170).
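The evaluation above rests on four overlap metrics between a segmentation and the ground truth, followed by Welch’s t-test once the F-test rejects equal variances. The following sketch shows one common way to compute them; the paper does not spell out its exact normalization for RFP and RFN, so here both are assumed to be normalized by the ground-truth area:

```python
import numpy as np

def overlap_metrics(seg, gt):
    """JS (Jaccard), DSC (Dice), RFP and RFN between two binary masks.
    RFP/RFN are false positives/negatives relative to the ground-truth
    area (an assumption; the paper may normalize differently)."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    js = inter / union
    dsc = 2 * inter / (seg.sum() + gt.sum())
    rfp = np.logical_and(seg, ~gt).sum() / gt.sum()  # spurious pixels
    rfn = np.logical_and(~seg, gt).sum() / gt.sum()  # missed pixels
    return js, dsc, rfp, rfn

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples with
    unequal variances (used when the F-test rejects variance equality)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df
```

In the experiments, each of the 120 slices yields one value per metric and per method; the per-method samples are then compared pairwise with Welch’s t-test to obtain the P-values quoted above.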
Column 1 shows the original images. Column 2 shows the ground truth of the original images. Columns 3 to 5 show the final segmentation results of Li’s method, LSACM and our method, respectively. Rows 1, 3 and 5 are WM, and rows 2, 4 and 6 are GM. All WM and GM are displayed in white.
Conclusions
In this paper, we developed a new model for segmenting images with intensity inhomogeneity and estimating the bias field. We first defined a local intensity clustering criterion function by considering the local difference between the measured image and the estimated image. The energy is then minimized by a level set evolution process. A regularization term is used in the level set formulation to keep the active contours smooth and to eliminate the re-initialization of the level set function during contour evolution. We further extended our model to a multi-phase formulation and applied it to segment the WM and GM in simulated normal brain MR image volumes. Over 120 MR image slices, our method outperforms Li’s method in terms of JS, DSC, RFP and RFN for both WM and GM, and outperforms LSACM in terms of DSC and RFN for WM and in terms of JS, DSC and RFN for GM. Our method can obtain accurate segmentation results and accurate estimations of the bias field. Experimental results on synthetic and real images demonstrate that our model is efficient.
Author Contributions
Conceived and designed the experiments: LZ CCH. Performed the experiments: CCH. Analyzed the data: CCH. Contributed reagents/materials/analysis tools: LZ CCH. Wrote the paper: CCH.
References
- 1. Kass M, Witkin A, Terzopoulos D (1988) Snakes: active contour models. International Journal of Computer Vision 1: 321–331.
- 2. Li CM, Kao CY, Gore JC, Ding ZH (2007) Implicit active contours driven by local binary fitting energy. IEEE Conference on Computer Vision and Pattern Recognition: 1–7.
- 3. Xu CY, Prince JL (1998) Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing 7: 359–369. pmid:18276256
- 4. Wang X, Huang D, Xu H (2010) An efficient local Chan-Vese model for image segmentation. Pattern Recognition 43: 603–618.
- 5. Caselles V, Catte F, Coll T, Dibos F (1993) A geometric model for active contours in image processing. Numerische Mathematik 66: 1–31.
- 6. Malladi R, Sethian JA, Vemuri BC (1995) Shape modeling with front propagation: a level set approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 17: 158–175.
- 7. Kichenassamy S, Kumar A, Olver P, Tannenbaum A, Yezzi A (1995) Gradient flows and geometric active contour models. Fifth International Conference on Computer Vision: 810–815.
- 8. Vasilevskiy A, Siddiqi K (2002) Flux maximizing geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence 24: 1565–1578.
- 9. Ronfard R (1994) Region-based strategies for active contour models. International Journal of Computer Vision 13: 229–251.
- 10. Chan T, Vese L (2001) Active contours without edges. IEEE Transaction on Image Processing 10: 266–277.
- 11. Paragios N, Deriche R (2002) Geodesic active regions and level set methods for supervised texture segmentation. International Journal of Computer Vision 46: 223–247.
- 12. Vese L, Chan T (2002) A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision 50: 271–293.
- 13. Li W, Hu XP (2013) Robust Tract Skeleton Extraction of Cingulum Based on Active Contour Model from Diffusion Tensor MR Imaging. PLoS ONE 8.
- 14. Liao XY, Tuan ZY, Zheng Q, Yin Q, Zhang D, Zhao JH (2014) Multi-Scale and Shape Constrained Localized Region Based Active Contour Segmentation of Uterine Fibroid Ultrasound Images in HIFU Therapy. PLoS ONE 9.
- 15. Liu LH, Zeng L, Luan X (2013) 3D robust Chan-Vese model for industrial computed tomography volume data segmentation. Optics and Lasers in Engineering 51: 1235–1244.
- 16. Jungke M, Seelen WV, Bielke G, Meindl S, Grigat M, Pfannenstiel P (1988) A system for the diagnostic use of tissue characterizing parameters in NMR-tomography. Information Processing in Medical Imaging 39: 471–481.
- 17. Uros V, Franjo P, Bostjan L (2007) A review of methods for correction of intensity inhomogeneity in MRI. IEEE Transactions on Medical Imaging 26: 405–421.
- 18. Li CM, Kao CY, Gore JC, Ding ZH (2008) Minimization of Region-Scalable Fitting Energy for Image Segmentation. IEEE Transactions on Image Processing 17: 1940–1949. pmid:18784040
- 19. Zhang KH, Zhang L, Song HH, Zhou WG (2010) Active contours with selective local or global segmentation: a new formulation and level set method. Image and Vision Computing 28: 668–676.
- 20. Roy K, Bhattacharya P, Suen CY (2011) Iris segmentation using variational level set method. Optics and Lasers in Engineering 49: 578–588.
- 21. Liu SG, Peng YL (2012) A local region-based Chan-Vese model for image segmentation. Pattern Recognition 45: 2769–2779.
- 22. Ge Q, Xiao L, Wei ZH (2013) Active contour model for simultaneous MR image segmentation and denoising. Digital Signal Processing 23: 1186–1196.
- 23. Cohen MS, Dubois RM, Zeineh MM (2000) Rapid and effective correction of RF inhomogeneity for high field magnetic resonance imaging. Human Brain Mapping 10: 204–211. pmid:10949058
- 24. Han C, Hatsukami TS, Yuan C (2001) A multi-scale method for automatic correction of intensity nonuniformity in MR images. Journal of Magnetic Resonance Imaging 13: 428–436. pmid:11241818
- 25. Styner M, Brechbühler C, Székely G, Gerig G (2000) Parametric estimate of intensity inhomogeneities applied to MRI. IEEE Transactions on Medical Imaging 19: 153–165. pmid:10875700
- 26. Salvado O, Hillenbrand C, Zhang SX (2004) MR signal inhomogeneity correction for visual and computerized atherosclerosis lesion assessment. 2004 IEEE International Symposium on Biomedical Imaging: 1143–1146.
- 27. Leemput KV, Maes F, Vandermeulen D, Suetens P (1999) Automated model-based bias field correction of MR images of the brain. IEEE Transactions on Medical Imaging 18: 885–896. pmid:10628948
- 28. Chen YJ, Zhang JW, Macione J (2009) An improved level set method for brain MR images segmentation and bias correction. Computerized Medical Imaging and Graphics 33: 510–519. pmid:19481420
- 29. Li CM, Huang R, Ding ZH, Gatenby JC, Metaxas DN, Gore JC (2011) A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Transactions on Image Processing 20: 2007–2016. pmid:21518662
- 30. Zhan TM, Zhang J, Xiao L, Chen YJ, Wei ZH (2013) An improved variational level set method for MR image segmentation and bias field correction. Magnetic Resonance Imaging 31: 439–447. pmid:23219273
- 31. Land EH, Mccann JJ (1971) Lightness and retinex theory. Journal of the Optical Society of America 61: 1–11. pmid:5541571
- 32. Zhang KH, Zhang L, Lam KM, Zhang D (2013) A Local Active Contour Model for Image Segmentation with Intensity Inhomogeneity. arXiv:1305.7053.
- 33. Liu LX, Zhang Q, Wu M, Li W, Shang F (2013) Adaptive segmentation of magnetic resonance images with intensity inhomogeneity using level set method. Magnetic Resonance Imaging 31: 567–574. pmid:23290480
- 34. Morse BS, Schwartzwald D (2001) Image magnification using level-set reconstruction. IEEE Conference on Computer Vision and Pattern Recognition: 1–8.
- 35. Tsai YHR, Osher S (2005) Total variation and level set based methods in image science. Acta Numerica 14: 509–573.
- 36. Li CM, Xu CY, Gui CF, Fox MD (2005) Level set evolution without re-initialization: a new variational formulation. IEEE Conference on Computer Vision and Pattern Recognition: 430–436.
- 37. Evans LC (1998) Partial Differential Equations. Providence: American Mathematical Society. 662 p.
- 38. Zheng Q, Lu Z, Yang W, Zhang M, Feng Q, Chen W (2013) A robust medical image segmentation method using KL distance and local neighborhood information. Computers in Biology and Medicine 43: 459–470. pmid:23566392
- 39. Wang L, Li CM, Sun QS, Xia DS, Kao CY (2009) Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation. Computerized Medical Imaging and Graphics 33: 520–531. pmid:19482457
- 40. Statistical software MedCalc website. Available: http://www.medcalc.org/download.php. Accessed 14 February 2015.
- 41. Markowski C, Markowski E (1990) Conditions for the effectiveness of a preliminary test of variance. American Statistician 44: 322–326.
- 42. Armitage P, Berry G, Matthews JNS (2002) Statistical Methods in Medical Research. 4th ed. Oxford, England: Blackwell Science.
- 43. Mcveigh ER, Bronskill MJ, Henkelman RM (1986) Phase and sensitivity of receiver coils in magnetic resonance imaging. Medical Physics 13: 806–814. pmid:3796476
- 44. Code website. Available: http://www.engr.uconn.edu/~cmli/. Accessed 14 February 2015.
- 45. Code website. Available: http://www.comp.polyu.edu.hk/~cslzhang/. Accessed 16 April 2014.
- 46. MR dataset website. Available: http://www.bic.mni.mcgill.ca/brainweb/. Accessed 18 November 2014.