Abstract
In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform several other regularization methods.
Citation: Xu H, Sun Q, Luo N, Cao G, Xia D (2013) Iterative Nonlocal Total Variation Regularization Method for Image Restoration. PLoS ONE 8(6): e65865. https://doi.org/10.1371/journal.pone.0065865
Editor: Xi-Nian Zuo, Institute of Psychology, Chinese Academy of Sciences, China
Received: November 30, 2012; Accepted: April 29, 2013; Published: June 11, 2013
Copyright: © 2013 Xu et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by grants from Programs of the National Natural Science Foundation of China (61273251, 61003108), (http://www.nsfc.gov.cn) Doctoral Fund of Ministry of Education of China (200802880017) (http://www.cutech.edu.cn), and the Fundamental Research Funds for the Central Universities No. NUST2011ZDJH26. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Image restoration is a classical inverse problem which has been extensively studied in various areas such as medical imaging, remote sensing and video or image coding. In this paper, we focus on the common image acquisition model: an ideal image u is observed in the presence of a spatially invariant blur kernel K and additive white Gaussian noise n with zero mean and standard deviation σ. The observed image f is then obtained by

f = Ku + n.   (1)
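As a concrete illustration of this acquisition model, the following sketch simulates the degradation with a Gaussian blur and additive Gaussian noise; the kernel width and noise level are illustrative values only and are not the settings used in the experiments of this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(u, blur_sigma=2.0, noise_sigma=2.0, seed=0):
    """Simulate f = K u + n: convolve the ideal image u with a Gaussian blur
    kernel and add white Gaussian noise with zero mean."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(u.astype(float), sigma=blur_sigma)      # K u
    return blurred + rng.normal(0.0, noise_sigma, size=u.shape)      # + n
```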
Restoring the ideal image u from the observed image f is ill-posed since the blur kernel matrix K is usually singular or severely ill-conditioned. A common way to solve this problem is to use regularization methods, in which a regularization term is introduced to constrain the solution. Regularization methods generally take the form

min_u (1/2)||Ku − f||^2 + λ J(u),   (2)

where ||·|| denotes the Euclidean norm, λ is a positive regularization parameter balancing the fitting term and the regularization term, and J(u) is the regularization term. The total variation regularization proposed by Rudin, Osher and Fatemi [1] (also called the ROF model) is a well-known regularization method in this field. The total variation norm has a piecewise-smooth regularization property, so total variation regularization can preserve edges and discontinuities in the image. The unconstrained ROF model has the form

min_u (1/2)||Ku − f||^2 + λ ||u||_TV,   (3)

where the term ||u||_TV stands for the total variation of the image. The continuous form of the total variation is defined as

||u||_TV = ∫_Ω |∇u| dx.   (4)

Many numerical methods have been proposed to solve (3). When K is the identity matrix, the ROF model (3) reduces to a TV denoising problem, which can be solved by methods such as Chambolle's projection method [2], semismooth Newton methods [3], the multilevel optimization method [4] and the split Bregman method [5]. When K is a blur kernel matrix, (3) becomes a TV deblurring problem, for which there are primal-dual optimization algorithms [6]–[9], the forward-backward operator splitting method [10], the interior point method [11], a majorization-minimization approach to image deblurring [12], Bayesian frameworks for TV regularization and parameter estimation [13]–[15], methods using local information and Uzawa's algorithm [16], [17], regularized locally adaptive kernel regression [18], augmented Lagrangian methods [19], [20], and so on. However, the problem is far from completely solved: edge and detail preservation [21], [22], reduction of ringing effects [23]–[25] and restoration under varied blur kernels and noise types [26], [27] still call for better solutions.
The purpose of this paper is to propose an effective total variation minimization algorithm for image restoration. The algorithm is based on Bregman iteration, which can give significant improvement over standard models [28]. We then solve the resulting problem by alternately solving a deblurring problem and a denoising problem [29], [30]. In addition, we propose a locally adaptive nonlocal regularization approach to improve the restoration results.
The structure of the paper is as follows. In the next section, an iterative algorithm for total variation based image restoration is proposed; in addition, we present a nonlocal regularization within the proposed framework that uses a locally adaptive filter parameter to improve the restoration results. Section Experiments shows the experimental results, and Section Conclusions concludes this paper.
Methods
Iterative Approach for TV-based Image Restoration
Bregman iteration for image restoration.
We first consider a general minimization problem as follows:

min_u λ J(u) + H(u, f),   (5)

where λ is the regularization parameter, J(u) is a convex nonnegative regularization functional, and the fitting functional H(u, f) is convex and nonnegative with respect to u for fixed f. This problem is difficult to solve numerically when J is non-differentiable, and Bregman iteration is an efficient method to solve this kind of minimization problem.
Bregman iteration is based on the concept of the "Bregman distance". The Bregman distance of a convex functional J between points u and v is defined as:

D_J^p(u, v) = J(u) − J(v) − <p, u − v>,   (6)

where p is a sub-gradient of J at the point v. The Bregman distance generally is not symmetric, so it is not a distance in the usual sense, but it does measure the closeness of two points: D_J^p(u, v) ≥ 0 for any u and v, and D_J^p(u, v) ≥ D_J^p(w, v) for all points w on the line segment connecting u and v. Using the Bregman distance (6), the original minimization problem (5) can be solved by an iterative procedure:

u^{k+1} = argmin_u λ D_J^{p^k}(u, u^k) + H(u, f),   p^{k+1} = p^k − (1/λ) ∇_u H(u^{k+1}, f),   (7)

where p^k denotes a sub-gradient of J at u^k. When we choose J(u) = ||u||_TV and H(u, f) = (1/2)||Ku − f||^2, (7) turns into the total variation minimization problem, and (7) can be converted into the following two-step Bregman iterative scheme [28]:

u^{k+1} = argmin_u λ ||u||_TV + (1/2)||Ku − f^k||^2,   (8)
f^{k+1} = f^k + (f − Ku^{k+1}),   (9)

with f^0 = f.
In [28], the authors showed that the sequence of iterates weakly converges to a solution of problem (5), and that the residual of the fitting term converges to zero monotonically. We can see from (8),(9) that the Bregman iteration simply turns the original problem (5) into an iterative procedure and adds the noise residual back into the degraded image at the end of every iteration. Bregman iteration converges fast and obtains better results than standard methods, and it has been widely used in various areas of image processing [5], [31]–[33].
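To make the add-the-residual-back behaviour of (8),(9) concrete, here is a minimal sketch specialised to the denoising case (K equal to the identity), using the off-the-shelf Chambolle TV denoiser from scikit-image as the ROF subproblem solver; the weight and iteration count are illustrative choices, not the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def bregman_tv_denoise(f, weight=0.1, n_outer=5):
    """Two-step Bregman scheme (8),(9) with K = I: solve an ROF subproblem,
    then add the removed residual back to the data before the next sweep."""
    f_k = f.astype(float)
    u = f_k.copy()
    for _ in range(n_outer):
        u = denoise_tv_chambolle(f_k, weight=weight)  # step (8): ROF subproblem
        f_k = f_k + (f - u)                           # step (9): add residual back
    return u
```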
General framework of the iterative algorithm.
As introduced above, we use the Bregman iteration (8),(9) to build the main iterative framework. Rather than considering (3) directly, we consider the following problem:

min_{u,g} (1/2)||Ku − f||^2 + λ ||g||_TV   subject to   u = g.   (10)
We separate the variable u in (3) into two independent variables u and g, so that we can split the original problem (3) into sub-problems that are easy to solve. This problem is obviously equivalent to (3). We can relax (10) into the unconstrained form:

min_{u,g} (1/2)||Ku − f||^2 + (μ/2)||u − g||^2 + λ ||g||_TV.   (11)
Here μ and λ are regularization parameters balancing the three terms. If the regularization parameter μ is large enough, problem (11) is close to problem (10), and the solutions of the two problems are similar. If we let the regularization functional be J(u, g) = (μ/2)||u − g||^2 + λ ||g||_TV and the fitting functional be H(u, g, f) = (1/2)||Ku − f||^2, we can see that J and H are both convex, and then (11) is a simple application of (7). Thus, the above problem can be solved by using Bregman iteration:

(u^{k+1}, g^{k+1}) = argmin_{u,g} D_J^{p^k}((u, g), (u^k, g^k)) + (1/2)||Ku − f||^2,   (12)

where p^k ∈ ∂J(u^k, g^k) is updated as in (7).
Similar to [28], we can reform the above procedure into a simple two-step iteration algorithm:

(u^{k+1}, g^{k+1}) = argmin_{u,g} (1/2)||Ku − f^k||^2 + (μ/2)||u − g||^2 + λ ||g||_TV,   (13)
f^{k+1} = f^k + (f − Ku^{k+1}),   (14)

with f^0 = f.
As we can see in (13), when μ tends to infinity the above algorithm is equal to the original Bregman iterative algorithm in [28]. We use an alternating minimization algorithm [29], [30] to solve (13), splitting it into a deblurring sub-problem and a denoising sub-problem. Thus, (13) can be solved by the following two-step iterative formulation:

u^{k+1} = argmin_u (1/2)||Ku − f^k||^2 + (μ/2)||u − g^k||^2,   (15)
g^{k+1} = argmin_g (μ/2)||u^{k+1} − g||^2 + λ ||g||_TV.   (16)
We can see that (15) is a differentiable l2-norm problem, and its minimizer satisfies

(K^T K + μ I) u^{k+1} = K^T f^k + μ g^k,   (17)

where I is the identity matrix and the matrix K^T K + μ I is invertible. Then (15) can be solved by optimization techniques such as Gauss-Seidel, conjugate gradient or the Fourier transform. As for (16), it is an exact total variation denoising problem, and we can use Chambolle's projection algorithm [2], the semismooth Newton method [3] or the split Bregman algorithm [5] to solve it.
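As an illustration of the Fourier-transform option, under periodic boundary conditions the normal equations (17) diagonalize in the Fourier domain, so the deblurring step can be carried out with a few FFTs. The sketch below assumes the point spread function has already been zero-padded to the image size with its center at pixel (0, 0); the experiments in this paper use the conjugate gradient method instead.

```python
import numpy as np

def solve_deblur_subproblem(f_k, g, psf, mu):
    """Minimize (1/2)||K u - f_k||^2 + (mu/2)||u - g||^2, i.e. solve
    (K^T K + mu I) u = K^T f_k + mu g, assuming K is a periodic convolution
    with `psf` (same size as the image, centered at pixel (0, 0))."""
    K_hat = np.fft.fft2(psf)
    rhs = np.conj(K_hat) * np.fft.fft2(f_k) + mu * np.fft.fft2(g)
    u_hat = rhs / (np.abs(K_hat) ** 2 + mu)
    return np.real(np.fft.ifft2(u_hat))
```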
Thus, the proposed alternating Bregman iterative method for image restoration can be formed as follows:
Algorithm 1: Alternating Bregman iterative method for image restoration.
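Since the algorithm listing itself is given as a figure, the following sketch shows how the pieces fit together under the formulation above: an outer Bregman loop (13),(14) wrapped around an inner alternation between the deblurring step (15) and the denoising step (16). It reuses solve_deblur_subproblem from the sketch above and a generic TV denoiser as a stand-in for Chambolle's projection; all parameter values, and the mapping of λ and μ to the denoiser weight, are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def blur(u, psf):
    """Periodic convolution K u (psf padded to the image size, centered at (0, 0))."""
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(psf)))

def alternating_bregman_restore(f, psf, lam=0.05, mu=1.0,
                                outer_iters=20, inner_iters=2, tol=1e-4):
    f_k = f.astype(float)
    u = f_k.copy()
    g = f_k.copy()
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            u = solve_deblur_subproblem(f_k, g, psf, mu)   # deblurring step (15)
            g = denoise_tv_chambolle(u, weight=lam / mu)   # denoising step (16)
        residual = f - blur(u, psf)                        # f - K u^{k+1}
        f_k = f_k + residual                               # Bregman update (14)
        if np.linalg.norm(residual) / np.linalg.norm(f) < tol:
            break
    return g
```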
Analysis of the proposed algorithm.
First, we show some important monotonicity properties of the Bregman iteration proposed in [28].
Theorem 1 The sequence H(u^k, f) obtained from the Bregman iteration is monotonically nonincreasing. Assume that there exists a minimizer ũ of H(·, f) such that J(ũ) < ∞. Then

H(u^k, f) ≤ H(ũ, f) + λ J(ũ)/k,   (18)

and, in particular, u^k is a minimizing sequence.
Moreover, u^k has a weak-* convergent subsequence in BV(Ω), and the limit of each weak-* convergent subsequence is a solution of the minimization problem. If ũ is the unique solution, then u^k converges to ũ in the weak-* topology of BV(Ω).
Then, we show that the alternating minimization given by (15) and (16) also converges to the solution of the sub-problem (13) [30]. Let D be the difference matrix and let the null space of the corresponding matrix be denoted as usual; we obtain the following theorem.
Theorem 2 For any initial guess g^0, suppose the sequence (u^k, g^k) is generated by (15) and (16); then it converges to a stationary point of (10). Moreover, when the corresponding matrix has full column rank, (u^k, g^k) converges to a minimizer of (10).
Then, we can get the following convergence theorem of the proposed alternating Bregman iterative method.
Theorem 3 Let K be a linear operator and consider Algorithm 1. Suppose (u^k, g^k) is a sequence generated by Algorithm 1; then it converges to a solution of the original problem (3).
Proof. Let (u^k, g^k) be the sequence obtained from (13), where each (u^k, g^k) is a solution of (13) and the data term f^k is updated with the iterations of Algorithm 1. Suppose that in one iteration there are u^k and g^k satisfying the constraint of (10), and let the true solutions of problem (10) be u* and g*; then (19) holds.
Since u^k and g^k satisfy (11) and attain the minimum value of the convex functional (11), (20) follows.
Since u* and g* are the true solutions of problem (10), this inequality implies that u^k and g^k are also solutions of problem (10), and thus solutions of the original problem (3).
Connection with other methods.
We notice that equation (17) can be rewritten as in (22); thus, the proposed Algorithm 1 can be interpreted as in (23). The preconditioned Bregmanized nonlocal regularization (PBOS) algorithm [33] can be written as in (24), where the left and right pseudo-inverse approximations are equal, as in (25).
Comparing these two methods, we can see that the only difference between them is the way the noise is estimated and added back into the iteration. The PBOS method estimates the noise before deconvolution and adds it back only in the update of g, while the proposed method estimates the deconvolved noise and adds it back in the updates of both g and u. We believe this is why the proposed algorithm has a faster convergence speed and better restoration results, as shown by the experiments in the Experiments section.
Adaptive Nonlocal Regularization
Nonlocal regularization.
Recently, nonlocal methods have been extensively studied; the nonlocal means filter was first proposed by Buades et al. [34]. The main idea of the nonlocal means denoising model is to denoise every pixel by averaging the other pixels with similar structures (patches) to the current one. Based on the nonlocal means filter, Kindermann et al. [35] investigated the use of regularization functionals with nonlocal correlation terms for general inverse problems. Inspired by the graph Laplacian and the nonlocal means filter, Gilboa and Osher defined a variational framework based on nonlocal operators [36]. In the following, we use the definitions of the nonlocal regularization functionals introduced in [36].
Let Ω ⊂ R^2 and x, y ∈ Ω, let u: Ω → R be a real function and let w: Ω × Ω → R be a nonnegative symmetric weight function. Then the nonlocal gradient is defined as the vector of all partial differences at x:

∇_w u(x, y) = (u(y) − u(x)) sqrt(w(x, y)),

and the graph divergence of a vector v: Ω × Ω → R can be defined as:

(div_w v)(x) = ∫_Ω (v(x, y) − v(y, x)) sqrt(w(x, y)) dy.
The weight function is defined as the nonlocal means weight function:

w(x, y) = exp( − (G_a * |u(x + ·) − u(y + ·)|^2)(0) / h^2 ),   (26)

where G_a is the Gaussian kernel with standard deviation a, h is the filtering parameter related to the standard deviation of the noise, and u(x + ·) stands for a square patch centered at the point x. When the reference image is known, the nonlocal means filter is a linear operator. The definition of the weight function (26) shows that the value of the weight is significant only when the patch around y has a similar structure to the corresponding patch around x.
The nonlocal TV norm can be defined as the isotropic norm of the weighted graph gradient:

||u||_NLTV = ∫_Ω |∇_w u(x)| dx = ∫_Ω sqrt( ∫_Ω (u(y) − u(x))^2 w(x, y) dy ) dx.   (27)
The main idea of the nonlocal regularization is to generalize the local gradient and divergence concepts to the nonlocal form. Then the nonlocal means filter is generalized to the variational framework.
The nonlocal means filter and the nonlocal regularization functionals can reduce noise efficiently and preserve textures and contrast of the image. Generally, it is good to choose a reference image as close as possible to the original ideal image to calculate the weights. However, the original image structures are broken in the degraded image and we cannot obtain precise weights between the pixels; thus the weights should be calculated from a preprocessed image [37]. In our alternating minimization framework, we obtain the deblurred image in the first step and then denoise the deblurred image in the second step. As the nonlocal regularization functionals are robust to noise, and the structures of the deblurred image are close to those of the original ideal image, we can calculate the weights by using the deblurred image as the reference image, and then apply a nonlocal denoising step to obtain the restored image.
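As a sketch of how the weights (26) can be computed from such a reference image, the following computes, for one pixel, the nonlocal-means weights against a set of candidate pixels inside a search window; the patch radius, the patch Gaussian and the filter parameter h are illustrative choices, and border handling is omitted for brevity.

```python
import numpy as np

def nl_weights(ref, x, candidates, patch_radius=3, h=10.0):
    """Nonlocal-means weights between pixel x and candidate pixels, computed on
    a reference image `ref` (e.g. the deblurred image).  Assumes all pixels lie
    at least `patch_radius` away from the image border."""
    p = patch_radius
    def patch(c):
        i, j = c
        return ref[i - p:i + p + 1, j - p:j + p + 1].astype(float)
    # Gaussian kernel weighting the squared patch difference, as in (26)
    ax = np.arange(-p, p + 1)
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * max(p / 2.0, 1.0) ** 2))
    g /= g.sum()
    px = patch(x)
    w = np.array([np.exp(-np.sum(g * (px - patch(y)) ** 2) / h ** 2) for y in candidates])
    return w / w.sum()   # normalized, so the weights define an averaging filter
```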
Adaptive nonlocal parameter selection.
Within the alternating Bregman iterative method, we can use the deblurred image as the reference image to calculate the weights of the nonlocal regularization functionals, and then use the weights to denoise the deblurred image at every iteration. Note that the nonlocal filter parameter h is related to the standard deviation of the noise; however, we do not know the exact noise level of the deblurred image. Moreover, when we use a single filter parameter for the whole image, some regions of the restored image will be oversmoothed or undersmoothed, because a single filter parameter is not optimal for all the patches in the image. For the nonlocal TV norm defined in (27), we therefore calculate the filter parameter adaptively using local information and obtain a local h(x) for every pixel in the image.
Inspired by the local regularization in [21], we define the local power as:

P(x) = ∫_{Ω(x)} G_w(x − y) ( u(y) − E(y) )^2 dy,   (29)

where G_w is a normalized smoothing window (here we use a Gaussian window), E is the expected image, and Ω(x) is the region, centered at x, over which the local power is calculated.
Then we use the local power to calculate the local filter parameter h(x) as follows: (30)
The advantage of localizing the filter parameter is that it can control the denoising process over image regions according to their content: in smooth regions the pixels are averaged with their neighbors, while in texture and edge regions the weights are significant only when the patches are similar. Besides, we do not have to know or estimate the noise level. In this paper, we use a preprocessed oversmoothed image as the expected image, instead of the mean of the patch, to obtain more accurate results. The oversmoothed image is obtained by a standard TV model using a large regularization parameter.
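A possible implementation of this adaptive choice is sketched below: the local power (29) is approximated by a Gaussian-windowed average of the squared difference between the deblurred image and the oversmoothed expected image, and the pointwise h is taken proportional to its square root. The exact mapping (30) used in the paper is not reproduced here, so the square-root form and the scaling factor are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_filter_parameter(deblurred, oversmoothed, window_sigma=2.0,
                           factor=1.0, h_min=1e-3):
    """Local power (29) via a normalized Gaussian window, mapped to a per-pixel
    nonlocal filter parameter h(x) (assumed form of (30))."""
    power = gaussian_filter((deblurred - oversmoothed) ** 2, sigma=window_sigma)
    return np.maximum(factor * np.sqrt(power), h_min)   # one h per pixel
```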
By applying the above adaptive nonlocal regularization, Algorithm 1 can be reformulated as the following algorithm, where w(·, ·) is the function that calculates the weights between points and E is the preprocessed oversmoothed image:
Algorithm 2: Adaptive nonlocal alternating Bregman iterative method for image restoration.
Experiments
In this section, we present some experimental results of the proposed alternating Bregman method and the adaptive nonlocal alternating Bregman method, and compare them with the operator splitting TV regularization [10], the NLTV based BOS algorithm [33], the FTVd algorithm [38], FAST-TV [30] and the ForWaRD algorithm [39]. ForWaRD is a hybrid Fourier-wavelet regularized deconvolution algorithm that performs noise regularization via scalar shrinkage in both the Fourier and wavelet domains.
We use the conjugate gradient method to solve the first subproblem in Algorithm 1 and Algorithm 2, Chambolle's projection algorithm to solve the second subproblem in Algorithm 1, and the nonlocal version of Chambolle's projection algorithm in Algorithm 2. In Algorithm 1 and Algorithm 2, the regularization parameters μ and λ are set according to experimental results, the number of inner iterations is kept small, and the stopping condition is based on the relative difference between successive iterates. There is a lot of work on determining the regularization parameters [40], [41], but this is beyond the scope of this paper and we will investigate it in future work. In Algorithm 2, the patch size, the searching window, the Gaussian variance parameter used to calculate the local variance and the nonlocal parameter factor are fixed across all experiments. For the operator splitting method, the NLTV based BOS method, the FTVd algorithm and the FAST-TV method, the regularization parameters, searching windows and patch sizes are chosen according to the degradation of the image; for the ForWaRD algorithm, the threshold is set proportional to the standard deviation of the noise and the regularization parameter is fixed.
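For completeness, a compact sketch of Chambolle's projection algorithm [2], the TV denoising solver named above for the second subproblem, is given below; the step size tau = 1/8 follows the convergence condition of [2], and the iteration count is an illustrative value rather than the setting used in the experiments.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of `grad`."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv_denoise(f, lam, n_iter=100, tau=0.125):
    """Solve min_u ||u - f||^2 / (2 lam) + TV(u) by Chambolle's projection."""
    f = np.asarray(f, dtype=float)
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)
```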
First, we compare the convergence speed of the proposed Algorithm 1 with the preconditioned BOS algorithm, the FTVd algorithm and the FAST-TV algorithm in Figure 1. We find that the proposed Algorithm 1 converges faster than the other three methods at first, remains much faster than the preconditioned BOS algorithm and the FTVd algorithm later on, and is close to, or slightly slower than, FAST-TV at the end of the iterations. Usually, the stopping condition on the relative difference is set to a small threshold, so the proposed Algorithm 1 can reach the stopping condition with fewer iterations than the other algorithms. In terms of computation time, the FTVd algorithm is the fastest owing to its solution strategy and code optimization, and the proposed Algorithm 1 is faster than the operator splitting algorithm and the FAST-TV algorithm. As for the nonlocal methods, convergence cannot be guaranteed after some iterations, so we compare the computation times of these methods instead. Because of the computation of the nonlocal weights, the nonlocal algorithms cost more computing time than the local ones. The NLTV based BOS algorithm stops after 25 steps in 180 seconds and the preconditioned NLTV based BOS algorithm stops after 8 steps in 75 seconds, whereas the proposed Algorithm 2 stops after only 5 steps in 47 seconds.
Figure 1. A. Gaussian blur kernel with Gaussian noise. B. Average blur kernel with Gaussian noise. The figure shows the convergence speed of the four methods on the Cameraman image with two different blur kernels. The X axis stands for the number of iterations, and the Y axis stands for the relative difference between the restored images of two consecutive iterations, that is ||u^{k+1} − u^k|| / ||u^k||.
Next, we show some image restoration results of these methods to illustrate the effectiveness of the proposed algorithms. We use the classical Cameraman image, so as to be comparable with other image restoration works. The Cameraman image can be found at http://www.imageprocessingplace.com/root_files_V3/image_databases.htm. Figure 2 and Figure 3 show the restoration results on the Cameraman image with two kinds of blur kernels. We can see from the results that the ForWaRD method obtains a good restoration result when the image is only slightly blurred, but performs poorly in the heavily blurred case; moreover, the ForWaRD method cannot restore edges clearly. The restoration results of the operator splitting TV method contain stripe artifacts which affect the visual appearance of the restored images. The FTVd method and FAST-TV can effectively remove noise from the degraded images, and achieve higher PSNRs than the ForWaRD method and the operator splitting TV method; however, a lot of details are also smoothed out. The results of the proposed Algorithm 1 have a good visual appearance, clear edges and preserved image contrast. The NLTV based BOS method (the preconditioned BOS has almost the same result) and the proposed Algorithm 2 give better restoration results than the purely local methods, and the proposed Algorithm 2 restores more details and achieves a higher PSNR.
Figure 2. A. Original Image B. Degraded Image C. Operator Splitting TV D. ForWaRD E. FTVd F. FAST-TV G. NLTV+BOS H. Algorithm 1 I. Algorithm 2.
Figure 3. A. Original Image B. Degraded Image C. Operator Splitting TV D. ForWaRD E. FTVd F. FAST-TV G. NLTV+BOS H. Algorithm 1 I. Algorithm 2.
Table 1 and Table 2 show the restoration results on different images and under different degradation conditions. We can see that the PSNR and SSIM values of the proposed algorithms are generally higher than those of the compared methods.
Conclusions
In this paper, we propose a Bregman iteration based total variation image restoration algorithm. We split the restoration problem into a three-step iterative process in which each step is easy to solve. In addition, we propose a nonlocal regularization under the framework of the proposed algorithm using a point-wise local filter parameter, together with a method to determine the filter parameter adaptively. Experiments show that the algorithm converges fast and that the adaptive nonlocal regularization method obtains better restoration results. In the future, we will study the weight-updating problem theoretically and apply the proposed algorithms to other regularization problems such as compressed sensing.
Author Contributions
Conceived and designed the experiments: HX QS. Performed the experiments: HX GC. Analyzed the data: HX QS DX. Wrote the paper: HX NL GC.
References
- 1. Rudin L, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60: 259–268.
- 2. Chambolle A (2004) An Algorithm for Total Variation Minimization and Applications. Journal of Mathematical Imaging and Vision 20: 89–97.
- 3. Ng MK, Qi L, Yang YF, Huang YM (2007) On Semismooth Newton's Methods for Total Variation Minimization. Journal of Mathematical Imaging and Vision 27: 265–276.
- 4. Chan T, Chen K (2006) An optimization-based multilevel algorithm for total variation image denoising. Multiscale Model Simul 5: 615–645.
- 5. Goldstein T, Osher S (2009) The split Bregman method for L1 regularized problems. SIAM Journal on Imaging Sciences 2: 323–343.
- 6. Hintermüller M, Stadler G (2006) An infeasible primal-dual algorithm for TV-based inf-convolution-type image restoration. SIAM Journal on Scientific Computing 28: 1–23.
- 7. Esser E, Zhang X (2009) A general framework for a class of first order primal-dual algorithms for TV minimization. UCLA CAM Report: 1–30.
- 8. Carter J (2001) Dual methods for total variation-based image restoration. University of California, Los Angeles.
- 9. Esser J (2010) Primal Dual Algorithms for Convex Models and Applications to Image Restoration, Registration and Nonlocal Inpainting. University of California, Los Angeles.
- 10. Combettes PL, Wajs VR (2005) Signal recovery by proximal forward-backward splitting. Multiscale Modeling Simulation 4: 1168–1200.
- 11. Nikolova M (2006) Analysis of half-quadratic minimization methods for signal and image recovery. SIAM Journal on Scientific computing 27: 937–966.
- 12. Oliveira JP, Bioucas-Dias JM, Figueiredo MAT (2009) Adaptive total variation image deblurring: A majorization-minimization approach. Signal Processing 89: 1683–1693.
- 13. Chantas G, Galatsanos N, Likas A, Saunders M (2008) Variational Bayesian image restoration based on a product of t-distributions image prior. IEEE Transactions on Image Processing 17: 1795–1805.
- 14. Babacan SD, Molina R, Katsaggelos AK (2008) Parameter estimation in TV image restoration using variational distribution approximation. IEEE Transactions on Image Processing 17: 326–339.
- 15. Chantas G, Galatsanos NP, Molina R, Katsaggelos AK (2010) Variational Bayesian image restoration with a product of spatially weighted total variation image priors. IEEE Transactions on Image Processing 19: 351–362.
- 16. Almansa A, Ballester C, Caselles V (2008) A TV based restoration model with local constraints. Journal of Scientific Computing 34: 612–626.
- 17. Bertalmio M, Caselles V, Rougé B (2003) TV based image restoration with local constraints. Journal of Scientific Computing 19: 95–122.
- 18. Takeda H, Farsiu S, Milanfar P (2008) Deblurring using regularized locally adaptive kernel regression. IEEE Transactions on Image Processing 17: 550–563.
- 19. Wu C, Tai XC (2010) Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM Journal on Imaging Sciences 3: 300–339.
- 20. Pang ZF, Yang YF (2011) A projected gradient algorithm based on the augmented Lagrangian strategy for image restoration and texture extraction. Image and Vision Computing 29: 117–126.
- 21. Gilboa G, Sochen N (2003) Texture preserving variational denoising using an adaptive fidelity term. Proc. VLSM.
- 22. Li F, Shen C, Shen C, Zhang G (2009) Variational denoising of partly textured images. Journal of Visual Communication and Image Representation 20: 293–300.
- 23. Prasath VS, Singh A (2009) Ringing Artifact Reduction in Blind Image Deblurring and Denoising Problems by Regularization Methods. 2009 Seventh International Conference on Advances in Pattern Recognition: 333–336.
- 24. Liu H, Klomp N, Heynderickx I (2010) A perceptually relevant approach to ringing region detection. IEEE Transactions on Image Processing 19: 1414–1426.
- 25. Nasonov A (2010) Scale-space method of image ringing estimation. IEEE International Conference on Image Processing (ICIP): 2793–2796.
- 26. Chen DQ, Zhang H, Cheng LZ (2010) Nonlocal variational model and filter algorithm to remove multiplicative noise. Optical Engineering 49: 077002.
- 27. Nikolova M (2004) A variational approach to remove outliers and impulse noise. Journal of Mathematical Imaging and Vision 20: 99–120.
- 28. Osher S, Burger M, Goldfarb D, Xu J, Yin W (2006) An Iterative Regularization Method for Total Variation-Based Image Restoration. Multiscale Modeling & Simulation 4: 460.
- 29. Wang Y, Yang J, Yin W, Zhang Y (2008) A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM Journal on Imaging Sciences 1: 248.
- 30. Huang Y, Ng MK, Wen YW (2008) A Fast Total Variation Minimization Method for Image Restoration. Multiscale Modeling & Simulation 7: 774.
- 31. Yin W, Osher S, Goldfarb D (2008) Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM Journal on Imaging Sciences 1: 143–168.
- 32. Cai JF, Osher S, Shen Z (2009) Linearized Bregman Iterations for Frame-Based Image Deblurring. SIAM Journal on Imaging Sciences 2: 226–252.
- 33. Zhang X, Burger M, Bresson X, Osher S (2010) Bregmanized Nonlocal Regularization for Deconvolution and Sparse Reconstruction. SIAM Journal on Imaging Sciences 3: 253–276.
- 34. Buades A, Coll B, Morel JM (2005) On image denoising methods. SIAM Multiscale Modeling and Simulation 4: 490–530.
- 35. Kindermann S, Osher S, Jones PW (2005) Deblurring and Denoising of Images by Nonlocal Functionals. Multiscale Modeling & Simulation 4: 1091–1115.
- 36. Gilboa G, Osher S (2008) Nonlocal operators with applications to image processing. Multiscale Model Simul 7: 1005–1028.
- 37. Lou Y, Zhang X, Osher S, Bertozzi A (2009) Image Recovery via Nonlocal Operators. Journal of Scientific Computing 42: 185–197.
- 38. Wang Y, Yin W (2007) A fast algorithm for image deblurring with total variation regularization. CAAM Technical Report TR07–10.
- 39. Neelamani R, Choi H, Baraniuk R (2004) ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems. IEEE Transactions on Signal Processing 52: 418–433.
- 40. Wen Y (2009) Adaptive Parameter Selection for Total Variation Image Deconvolution. Numerical Mathematics: Theory, Methods and Applications 2: 427–438.
- 41. Liao H, Li F, Ng MK (2009) Selection of regularization parameter in total variation image restoration. Journal of the Optical Society of America A 26: 2311–2320.