
A New Variational Approach for Multiplicative Noise and Blur Removal

  • Asmat Ullah ,

    asmatullah@hhu.edu.cn (AU); chenwen@hhu.edu.cn (WC)

    Affiliation State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Center for Numerical Simulation Software in Engineering and Sciences, College of Mechanics and Materials Hohai University, Nanjing, Jiangsu 210098, P. R. China

  • Wen Chen ,

    asmatullah@hhu.edu.cn (AU); chenwen@hhu.edu.cn (WC)

    Affiliation State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Center for Numerical Simulation Software in Engineering and Sciences, College of Mechanics and Materials Hohai University, Nanjing, Jiangsu 210098, P. R. China

  • Mushtaq Ahmad Khan,

    Affiliation State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Center for Numerical Simulation Software in Engineering and Sciences, College of Mechanics and Materials Hohai University, Nanjing, Jiangsu 210098, P. R. China

  • HongGuang Sun

    Affiliation State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Center for Numerical Simulation Software in Engineering and Sciences, College of Mechanics and Materials Hohai University, Nanjing, Jiangsu 210098, P. R. China

Abstract

This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved to reduce blocky effects by accounting for higher-order smoothness) and the shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers, since it is able to minimize staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model is also discussed. The resulting energy functional is then solved by the alternating direction method of multipliers. Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in handling blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh). A comparison with other recent methods in this field is provided as well. The proposed model can also be applied to restoring both single- and multi-channel images contaminated with multiplicative noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model.

1 Introduction

Image de-noising and de-blurring are two fundamental tasks in image and signal processing. In many real-world settings, contamination is inevitable during image acquisition and transmission. For example, images captured by astronomical telescopes are usually influenced by atmospheric turbulence, which results in blurred images. In order to further improve image processing tasks, image de-noising and de-blurring continue to attract the attention of many researchers in the modern imaging sciences community [1, 2]. In the literature, various kinds of noise have been considered, such as additive noise, impulse noise, Cauchy noise, and Poisson noise. The interested reader is referred to [3–15] and references therein for more details on those noise removal models and restoration methods. Based on the imaging system, let u be an original m1 × m2 image, H be a convolution (or blurring) operator, η1 be additive Gaussian noise, and f be a degraded image which obeys the following formulation: f = Hu + η1. (1)

Given H (the blur operator), our objective is to restore u from f, a process called de-convolution or de-blurring. When H is the unit (identity) operator, recovering u from f is referred to as Gaussian (additive) noise removal.
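For illustration, the degradation in Eq (1) can be simulated with an FFT-based periodic blur. This is a minimal sketch under assumed periodic boundary conditions, with a hypothetical helper name `degrade_additive`; it is not the paper's implementation.

```python
import numpy as np

def degrade_additive(u, kernel, sigma, seed=0):
    """Simulate f = H u + eta1: periodic (FFT-based) blur plus Gaussian noise.

    The kernel is zero-padded to the image size and centred; periodic
    boundary handling is an assumption made for this sketch.
    """
    m1, m2 = u.shape
    K = np.zeros((m1, m2))
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    # centre the kernel so the blur does not shift the image
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    Hu = np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(K)))
    rng = np.random.default_rng(seed)
    return Hu + sigma * rng.standard_normal(u.shape)

# usage: blurring a flat image with a 3x3 mean filter (no noise) leaves it flat
u = np.ones((8, 8))
f = degrade_additive(u, np.full((3, 3), 1.0 / 9), sigma=0.0)
```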

In practice, there are other types of noise as well, such as multiplicative noise, which can also degrade an image. In recent years, many scientists have studied the problem of restoring images corrupted by multiplicative noise. For a computational formulation of such degradations, assume that an image u is defined on Ω ⊆ R2 with compact Lipschitz boundary, that is, u: Ω → R. The contamination process producing the image f can then be formulated as f = (Hu)η2, (2) where H is a linear and continuous blurring operator and η2 is a multiplicative noise which follows a Gamma distribution with mean one. Here, u is first degraded by the blurring operator H and then contaminated by the multiplicative noise η2, yielding the captured image f > 0. When H is the identity operator, the degradation process reduces to the multiplicative noise removal problem. There are numerous applications for multiplicative noise reduction: for instance, magnetic field inhomogeneity in magnetic resonance imaging (MRI), and speckle noise in synthetic aperture radar (SAR) and ultrasound images. In these applications, η2 is assumed to follow some probability distribution; for example, the Rayleigh distribution is considered in ultrasound imaging, while the Gamma distribution is studied for SAR images. On the other hand, contamination by both multiplicative noise and blur occurs in many optical coherent imaging systems. Such non-linear image restoration problems can be cast into the model given in Eq (2), where both multiplicative noise and blur appear.
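The mean-one Gamma noise η2 of Eq (2) can be sampled directly. A minimal sketch, assuming L-look speckle with shape L and scale 1/L (so the mean is one and the variance is 1/L):

```python
import numpy as np

def gamma_speckle(shape, L, seed=0):
    """Draw eta2 ~ Gamma(L, 1/L): mean one, variance 1/L.

    L is the (assumed) number of looks; larger L means weaker speckle.
    """
    rng = np.random.default_rng(seed)
    return rng.gamma(shape=L, scale=1.0 / L, size=shape)

eta2 = gamma_speckle((256, 256), L=10)
```

A degraded observation in the sense of Eq (2) is then `f = Hu * eta2` for a positive blurred image `Hu`.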

1.1 Multiplicative Noise Removal

Images propagated by coherent imaging systems, for example ultrasound, laser, and SAR imaging, inevitably come with multiplicative noise due to the coherent nature of the scattering phenomena. Its presence restricts us from extracting valuable information from images, such as sharp edges, point targets and textures; hence multiplicative noise removal is a necessary pre-processing step for the successful use of image processing tools for detection, classification and image segmentation. Multiplicative noise seriously interferes with fundamental tasks such as object recognition or target detection, image segmentation and classification. Because of the coherent nature of the image acquisition systems, in multiplicative noise models the noise field multiplying the observed image is described by a non-Gaussian probability density function, with Gamma and Rayleigh being common models [16, 17].

Multiplicative noise is non-Gaussian and spatially signal-dependent, i.e. its variance is a function of the signal amplitude: the variance of the noise is higher where the amplitude of the signal is higher. In other words, noise in bright regions shows larger variations and could be misinterpreted as features in the given image. Hence multiplicative noise, one of the most complex forms of image noise, poses an important challenge: smoothing the noise without degrading true image features. Popular despeckling methods include spatial, wavelet-based, non-local filtering and variational approaches. In this work, we focus on developing a new variational method for joint multiplicative noise and blur removal. For more details see [18–30].

To the best of our knowledge, several variational approaches have been devoted to the problem of multiplicative noise reduction. The first total variation based approach for multiplicative noise removal was introduced by Rudin et al. [23], who used a constrained optimization approach with two Lagrange multipliers. By applying a MAP approximation, Aubert and Aujol [31] developed an energy functional whose minimization yields the restored image. Shi and Osher [32] kept the fidelity term of the AA-model but used the TV(log u) filter instead of TV(u); letting w = log u, they developed a total variation based strictly convex model (SO-model). Similarly to the SO-model, Bioucas-Dias and Figueiredo converted the multiplicative noise removal model into an additive one by taking logarithms and introduced a Bayesian-type variational model [33]. Steidl and Teuber [34] proposed a variational model which consists of the I-divergence as data-fidelity term and the TV semi-norm as regularization term. For multiplicative Gamma noise reduction, a variational model involving Curvelet coefficients was introduced in [35]. More recent works include a joint additive and multiplicative noise removal model by Chumchob et al. [36], a higher-order MRF based variational model for removing multiplicative noise presented by Chen et al. [37], speckle removal via a higher-order TV based approach introduced by Feng et al. [16], and speckle noise removal via a non-convex high total variation approach by Wu et al. [17].

1.2 Blur Reduction

There are some works dealing with joint multiplicative noise and blur removal; see [1, 2, 23, 38–42]. In this regard, both the RLO [23] and AA [31] models can be extended to handle the deblurring problem. Dong et al. [1] developed a convex variational model for recovering blurred images with multiplicative noise which consists of a quadratic term, a MAP estimator based on Gamma noise, and the total variation filter. The quadratic term is based on the statistical properties of Gamma-distributed noise. Uniqueness for this model is assured under a mild condition when the multiplicative noise follows a Gamma distribution. Recently, Zhao et al. [2] proposed a new convex minimization model for restoring blurred images with multiplicative noise, which consists of the total variation penalty, the inverse of the noise variance, and the l1-norm of the data-fidelity term among the noise, observed image and image variables. Numerical results demonstrate that this model handles blur and multiplicative noise reduction quite well. In [38], Wang et al. proposed a constrained image restoration model in the log-image domain which is based on the total variation filter and approximated a set of non-convex constraints by a set of convex constraints; the alternating direction method of multipliers was employed to handle the resulting minimization problem. Huang et al. [42] introduced a new variational approach that adds a Gaussian noise term to the functional to handle the multiplicative denoising and deblurring problem, and used an alternating minimization method to find the minimizer of the energy functional efficiently.

1.3 Total Generalized Variation (TGV) Functional

Solving optimization problems with the total variation filter is very effective for preserving sharp edges, corners and other fine details of an image. However, it has some disadvantages as well, most notably the so-called staircase effect, i.e. the unwanted appearance of artificial edges. This arises from the assumption that the image is modeled as a function in the space of bounded variation, which favors piecewise constant solutions. Under such an assumption the total variation based model produces optimal results; in practice, however, the staircase effect normally occurs because most natural images are not piecewise constant. To overcome this weakness of the total variation functional, modifications of the TV model that generalize the differential order in the regularization term have attracted increasing attention from researchers.

Recently, Bredies et al. [43] developed the total generalized variation (TGV) regularizer, which generalizes the total variation filter. Total generalized variation contains and balances higher-order derivatives of u. Defining TGV according to Eq (3), one can prove that it constitutes a proper, convex, and weakly lower semi-continuous regularizer on each Lp(Ω) space, 1 ≤ p ≤ ∞. Being rotation and translation invariant, TGV can be utilized as a penalty functional for variational imaging problems. Denoising and image restoration with the TGV filter often lead to piecewise polynomial intensities and sharp edges, which efficiently suppresses the blocky effect and produces images of better visual quality than traditional TV [44]. Therefore, the total generalized variation penalty has been extensively used in variational models for deblurring, the reduction of Gaussian noise and multiplicative noise, and other applications.

In this subsection, we give a brief introduction to the total generalized variation (TGV) functional and summarize the results relevant for the present paper; a new variational model for joint multiplicative noise and blur removal based on it is then proposed. First, we recall the definition of TGV. For this purpose, let Ω ⊂ ℜd be a non-empty, connected, open set, where d ∈ Z, d ≥ 1 is a fixed space dimension; here we choose d = 2 for the observed images. The TGV of order g with positive weights γ = (γ0, γ1, γ2, …, γg−1) can be formulated as TGVγg(u) = sup { ∫Ω u divg υ dx : υ ∈ Ccg(Ω, Symg(ℜd)), ‖divl υ‖∞ ≤ γl, l = 0, …, g − 1 }, (3) where Symg(ℜd) is the space of symmetric tensors of order g with arguments in ℜd, Ccg(Ω, Symg(ℜd)) is the vector space of compactly supported symmetric tensor fields, and sup(⋆) denotes the least upper bound (l.u.b.) of a set ⋆. When g = 2, Sym2(ℜd) corresponds to the space of all symmetric d × d matrices, and so on. Note that TGVγg is a semi-norm that vanishes on all polynomials of degree at most g − 1. Hence reconstruction with total generalized variation regularization leads to images with piecewise polynomial intensities and is able to preserve sharp edges well.

In this study, our focus is the total generalized variation of second order, TGVγ2. In this case, Eq (3) can be written as TGVγ2(u) = sup { ∫Ω u div2 υ dx : υ ∈ Cc2(Ω, Sym2(ℜd)), ‖υ‖∞ ≤ γ0, ‖div υ‖∞ ≤ γ1 }. (4)

We need the discrete version of TGVγ2 to analyze its properties. In this regard, the basic corresponding operators should be reviewed first [16]. First, the space ℜm1×m2 is denoted by U and the space U × U by V; that is, an element υ ∈ V has components υ1 ∈ U and υ2 ∈ U. The space of discrete symmetric tensor fields is denoted by W; the elements w ∈ W have components w11 ∈ U, w12 ∈ U and w22 ∈ U. Secondly, the first-order forward and backward difference operators ∂x±, ∂y± for u ∈ U are defined in the usual finite-difference way. Thirdly, the gradient, the symmetrized gradient and the divergence operators are given by ∇u = (∂x+ u, ∂y+ u), E(υ) = [ ∂x− υ1, (∂y− υ1 + ∂x− υ2)/2 ; (∂y− υ1 + ∂x− υ2)/2, ∂y− υ2 ], (5) div1 υ = ∂x− υ1 + ∂y− υ2. (6)

Recall that the above operators are adjoint to each other, that is, (div1)* = −∇ and (div2)* = −E. According to [45], the discretized form of TGVγ2 can be represented as TGVγ2(u) = min { γ1 ‖∇u − υ‖1 + γ0 ‖E(υ)‖1 : υ ∈ V }. (7)

Here, the definition in Eq (7) provides the balance between the first- and second-order derivatives of u. According to [17], multiplicative noise removal with TGV regularization can both yield piecewise polynomial smoothness and preserve image edges. See [16, 17, 43–45] for more details on TGV.
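The discrete penalty in Eq (7) can be evaluated numerically for a trial field υ. The following is an illustrative sketch that fixes υ rather than minimizing over it, and it assumes forward differences with zero boundary throughout (the exact stencils of [45] may differ):

```python
import numpy as np

def dxp(u):
    """Forward difference along the first axis, zero at the boundary."""
    d = np.zeros_like(u)
    d[:-1, :] = u[1:, :] - u[:-1, :]
    return d

def dyp(u):
    """Forward difference along the second axis, zero at the boundary."""
    d = np.zeros_like(u)
    d[:, :-1] = u[:, 1:] - u[:, :-1]
    return d

def tgv2_penalty(u, v1, v2, g0, g1):
    """gamma1*||grad(u) - v||_1 + gamma0*||E(v)||_1 for a trial v = (v1, v2),
    with isotropic (pointwise l2) norms, as in Eq (7)."""
    r1, r2 = dxp(u) - v1, dyp(u) - v2
    e11, e22 = dxp(v1), dyp(v2)
    e12 = 0.5 * (dyp(v1) + dxp(v2))
    term1 = np.sum(np.sqrt(r1**2 + r2**2))
    term2 = np.sum(np.sqrt(e11**2 + e22**2 + 2 * e12**2))
    return g1 * term1 + g0 * term2

# with v = 0 the penalty reduces to gamma1 times the isotropic TV of u
x, y = np.meshgrid(np.arange(6.0), np.arange(6.0), indexing="ij")
u = 2 * x + 3 * y
val = tgv2_penalty(u, np.zeros_like(u), np.zeros_like(u), 1.0, 1.0)
```

Minimizing over υ (rather than fixing it) is what lets TGV² favor piecewise affine, instead of piecewise constant, images.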

Remark: The space of bounded generalized variation can be defined as BGVg(Ω) = { u ∈ L1(Ω) : TGVγg(u) < ∞ }, where BGVg(Ω), which is independent of the weight vector γ, is a Banach space. On this space TGVγg is a semi-norm that vanishes on all polynomials of degree up to g − 1. Thus recovering images with TGVγg leads to piecewise polynomial intensities, and its convexity makes it numerically tractable.

1.4 Shearlet Transform

Labate et al. [46] proposed the shearlet transform, a wavelet-based construction that yields nearly optimal approximation properties. Based on isotropic dilations, the well-known wavelet transform is able to identify singular points of signals, but it has limited ability to describe the geometry of multi-dimensional functions, for example edge orientation. In contrast, relying on anisotropic dilations, the shearlet transform is a directional representation system that provides richer geometrical image information [47]. Shearlets have been proven mathematically to represent distributed discontinuities such as sharp edges better than wavelets and are a useful tool for edge characterization. In other words, the shearlet transform is a very effective tool for handling piecewise smooth images containing corners, edges, spikes, etc. The shearlet transform can accurately approximate the singular structures of piecewise smooth images. This property of shearlets is particularly suitable for image processing tasks, since irregular structures and singularities carry important details in an observed image; for example, discontinuities in the intensity of an image indicate the presence of edges. We remark that shearlets play a role similar to the gradient and can likewise be employed to indicate the presence of an edge.

Shearlet transforms are designed to encode anisotropic features such as singularities concentrated on lower-dimensional embedded manifolds. To obtain optimal sparsity, shearlets are scaled according to a parabolic scaling law encoded in the parabolic scaling matrix Aa, a > 0, and exhibit directionality by parameterizing slope through the shear matrix Ss, s ∈ R, defined by Aa = (a, 0; 0, √a) and Ss = (1, s; 0, 1), where Aa is an anisotropic dilation operator and Ss is a shear operator. Hence the shearlet system {ψast : a ∈ R+, s ∈ R, t ∈ R2} is based on three parameters: a > 0 is the scale parameter measuring the resolution level, s ∈ R is the shear parameter describing the directionality, and t ∈ R2 is the translation parameter measuring the position. The system is produced by applying dilation, shear transformation and translation to ψ ∈ L2(R2): ψa,s,t(x) = a−3/4 ψ(Ma,s−1(x − t)), where Ma,s = Ss Aa. Hence two distinct actions are associated with each matrix Ma,s: the anisotropic dilation produced by the matrix Aa and the shearing produced by the non-expansive matrix Ss. The corresponding continuous shearlet transform of a function f ∈ L2(R2) can be written as SHψ f(a, s, t) = ⟨f, ψa,s,t⟩, where a > 0, s ∈ R, t ∈ R2 and the analyzing elements ψa,s,t, called shearlets, are defined above. The shearlet transform is invertible if ψ satisfies the admissibility condition ∫R2 |ψ̂(w1, w2)|2 / w12 dw1 dw2 < ∞, where w = (w1, w2) ∈ R2 and ψ̂ denotes the Fourier transform of ψ. Other directional representation systems, such as ridgelets [48], contourlets [49] and curvelets [50], are related to shearlets. For example, shearlets and curvelets both represent images with edges well, though with different spatial-frequency tilings; both are relatively close to contourlets, which are developed in a purely discrete framework. In this paper, we adopt shearlets because of their theoretical relation to multi-resolution analysis, their directional sensitivity and the availability of efficient implementations; from the perspective of computational complexity, the discrete shearlet transform of an m1 × m1 image can be computed at FFT-like cost. We refer to [51, 52] for more details regarding the shearlet transform.
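The parabolic scaling and shear matrices above can be written down directly. A small numpy sketch with illustrative parameter values only:

```python
import numpy as np

def parabolic_dilation(a):
    """A_a = [[a, 0], [0, sqrt(a)]]: anisotropic (parabolic) scaling."""
    return np.array([[a, 0.0], [0.0, np.sqrt(a)]])

def shear(s):
    """S_s = [[1, s], [0, 1]]: non-expansive shear matrix (det = 1)."""
    return np.array([[1.0, s], [0.0, 1.0]])

# M_{a,s} = S_s A_a combines both actions; shearing preserves area
# (det S_s = 1) while A_a scales area by a^{3/2}
M = shear(0.5) @ parabolic_dilation(4.0)
```

Applying `M` to spatial coordinates (as in ψ(Ma,s−1(x − t))) stretches a shearlet along one axis and tilts its orientation, which is exactly the directional sensitivity the transform exploits.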

The aim of this work is to develop a new variational model for removing multiplicative noise (MN) and blur. We incorporate the total generalized variation functional and the shearlet transform into the existing data-fidelity term, which leads to good recovery results. The TGV and shearlet transform based variational model outperforms traditional total variation methods by minimizing blocky effects, since total generalized variation involves and balances higher-order derivatives of the image while the shearlet filter represents anisotropic features such as curves and edges well. The proposed energy functional is efficiently solved by the alternating direction method of multipliers (ADMM). Experimental results illustrate the advantages of our proposed model, both visually and quantitatively, in simultaneous multiplicative noise and blur removal compared with other recent methods.

The outline of this paper is as follows. In section 2, we give a brief review of the image restoration models M1, M2 and M3 for recovering blurred images with multiplicative noise, and then propose a new variational model for joint multiplicative denoising and deblurring. Afterwards, in section 3, a numerical method is employed to solve the proposed variational problem efficiently. Subsequently, section 4 presents the choice of the regularization parameters and the experimental results. Section 5 concludes the paper.

2 Problem Formulation

In this section, we commence with a brief introduction of three current total variation based models for multiplicative noise and blur removal, and then propose a new variational model for the same problem.

2.1 Review of Total Variation Based Image Restoration Models

Total variation based regularization models have proven to be a very valuable technique for image restoration and are applied in many practical applications. In this subsection, the following three state-of-the-art TV-based models for removing multiplicative noise and blur are briefly reviewed: (a) M1: a fast minimization method for blur and multiplicative noise removal [38] (2012); (b) M2: a convex variational model for restoring blurred images with multiplicative noise [1] (2013); and (c) M3: a new convex optimization model for multiplicative noise and blur removal [2] (2014).

2.1.1 Wang-Model (M1).

In [38], Wang et al. developed an efficient algorithm for solving a total variation based optimization model that restores images degraded by multiplicative noise and blur. They used a logarithmic transform to convert the multiplicative noise and blurring problem into an additive one, and then applied the l1-norm in the data-fidelity term and the total variation filter in the regularization term. The alternating direction method of multipliers (ADMM) is employed to solve the associated optimization problem, which can be formulated as (8)

Herein, n denotes the number of pixels; α ∈ R+ measures the trade-off between the fit to f and the amount of regularization; ∇: Rn → Rn is a discrete form of the gradient; and ‖•‖1 denotes the l1-norm. H is the blur operator and η is the noise.
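The log-transform idea behind M1 can be illustrated on synthetic data. This toy sketch (constant image, assumed 10-look Gamma noise) only demonstrates that the multiplicative noise becomes additive in the log domain:

```python
import numpy as np

# Toy illustration (not the paper's data): a constant "blurred" image Hu
# corrupted by mean-one Gamma speckle, then mapped to the log domain.
rng = np.random.default_rng(0)
Hu = np.full((64, 64), 100.0)           # positive blurred image H*u
L = 10
eta2 = rng.gamma(L, 1.0 / L, Hu.shape)  # multiplicative Gamma noise, mean one
f = Hu * eta2                           # observed image, as in Eq (2)

# log f = log(Hu) + log(eta2): the noise is now additive
g = np.log(f)
log_noise = g - np.log(Hu)
```

Note that log η2 has a small negative mean (ψ(L) − log L for Gamma noise), a bias that log-domain methods must account for when mapping back.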

2.1.2 Dong-Model (M2).

A new variational model for the restoration of blurred images with multiplicative noise was introduced by Dong et al. in [1], together with a primal-dual algorithm to solve the minimization model in Eq (9). A strictly convex model is obtained under a particular condition, which assures the uniqueness of the solution and the stability of the algorithm; for this purpose, a quadratic penalty function is added to the energy functional. Using the Bayesian framework, the following energy functional is proposed for the above problem (9) where H is a linear (assumed to be non-negative) and continuous blurring operator, and α > 0, λ > 0 are the penalty and regularization parameters. In addition, a closed and convex constraint set is defined.

The detailed proof of existence and uniqueness for M2-model can be found in [1].

2.1.3 Zhao-Model (M3).

Zhao et al. [2] introduced a new convex total variation based model for restoring images contaminated with multiplicative noise and blur. The main idea is to reformulate the blur and multiplicative noise equation so that the image variable and the noise variable are decoupled. As a result, the resulting energy functional involves the total variation filter, a term for the variance of the inverse of the noise, and the l1-norm of the data-fidelity term among the observed image, noise and image variables. The convex optimization model is given by (10) where α1, α2 are two regularization parameters, μ denotes the average value of w, and e is a vector of all ones. The first term in Eq (10) measures the variance of w; the second term is the fidelity term between the observed image f and w. If H = I, then Eq (10) reduces to an unconditionally convex multiplicative noise removal model [2].

2.1.4 The Proposed Model (M4).

In real-world problems, the underlying image f is often degraded not only by noise but also by blur. To handle such problems, variational approaches with total variation regularization have attracted the attention of many researchers, since they possess many desirable properties, most importantly the preservation of edges. However, the TV filter also has some disadvantages, most notably the staircase effect: this undesired appearance of artifacts arises from the model assumption that an image consists of piecewise constant areas, even in regions with smooth transitions of pixel values. To minimize the staircase effect while preserving sharp edges and other fine details of an image, we propose a new variational model for joint deblurring and multiplicative noise removal which uses the total generalized variation filter and the shearlet transform as regularizers. This model exploits the advantages of both TGVγ2 and the shearlet transform, which leads to good restoration results, because TGVγ2 performs well in reducing the staircase effect and preserving sharp edges and textures, while the shearlet filter represents anisotropic image features such as sharp edges and curves well. Hence, a reasonable improvement in PSNR values is obtained with this model. The following functional is proposed for minimization (11) where the first two terms are the data-fidelity terms; H is a linear and continuous blurring operator; α and λ are the regularization parameters; u is the image restored from the noisy data f; and SHj(u) is the jth sub-band of the shearlet transform of u. For implementation purposes, the fast finite shearlet transform (FFST) is adopted [52], in which the construction is based on Meyer scaling and wavelet functions. We use the second-order TGVγ2 as a regularizer. Moreover, all the band-wise discrete shearlet transforms can be computed quickly using the fast Fourier transform (FFT) and the discrete inverse Fourier transform (IFT).
The discrete shearlets constitute a Parseval frame of the finite Euclidean space, so the inversion of the shearlet transform can simply be performed by applying the adjoint transform. Let the FFT of the discrete 2D scaling function be denoted by H1 and let Hj (j ≥ 2) be those of the discrete shearlets, and consider the matricizing and vectorizing operations between image matrices and vectors. The band-wise transform can then be written with * and .* denoting convolution and componentwise multiplication, respectively; in vector form it uses a diagonal operator diag, defined so that δhj = 0 if h ≠ j and δhh = 1.

With the new formulation of TGVγ2 as defined in Eq (7), the proposed model (11) can be reformulated as (12)

For the numerical realization, the discrete version of model (12) reads as follows (13) where the vector inner product is used and ‖.‖2 denotes the l2 vector norm.

2.1.5 Existence and Uniqueness of the Solution of the Proposed Model (M4).

Based on the properties of TGV (which is a convex, proper and weak lower semi-continuous regularizer), shearlet transform, fidelity term and the space of bounded generalized variation, we can prove the existence and uniqueness of the M4-model given in Eq (11).

Coercivity assumption [16, 53]: for any sequence {un} in Lq(Ω), with d = 2, the coercivity condition (14) holds. Then there exists at least one minimizer of Eq (11).

According to Proposition 1 in [16] and the coercivity assumption Eq (14), together with G(u) being proper, convex and lower semi-continuous, there exists a solution to Eq (11); this can be proved along lines similar to Refs. [1, 16]. The solution is unique since each term in EH(u) is convex [1, 16, 43, 52, 53].

In the next section, we discuss how to employ the alternating direction method of multipliers, to solve the proposed model (M4).

3 Numerical Implementation

Let us briefly review the ADMM method for linearly constrained problems, which solves models of the form (15)

Then the augmented Lagrangian function is formed, where t is the scaled Lagrange multiplier and β > 0 is a penalty parameter. The ADMM algorithm starts from s(0) = 0, t(0) = 0 and iterates (16)

For the ADMM scheme, convergence proofs and variants can be found in [38, 54–56].
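As a concrete instance of the scheme in Eq (16), the following sketch applies ADMM to 1D total variation denoising, a much simpler problem than Eq (11), chosen only to show the quadratic u-update / shrinkage / multiplier-update pattern (`admm_tv1d` is a hypothetical name):

```python
import numpy as np

def admm_tv1d(f, alpha, beta=1.0, iters=300):
    """ADMM sketch for min_u 0.5*||u - f||^2 + alpha*||D u||_1,
    with the splitting s = D u and scaled multiplier t."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)           # forward-difference matrix
    A = np.eye(n) + beta * D.T @ D           # normal matrix of the u-update
    s = np.zeros(n - 1)
    t = np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, f + beta * D.T @ (s - t))   # u-update (quadratic)
        Du = D @ u
        # s-update: soft-thresholding (shrinkage) at alpha / beta
        s = np.sign(Du + t) * np.maximum(np.abs(Du + t) - alpha / beta, 0.0)
        t = t + Du - s                                     # multiplier update
    return u

# a noisy step signal is flattened toward a piecewise-constant profile
rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
u = admm_tv1d(f, alpha=0.5)
```

The x-, y- and z-subproblems of the paper's Eq (18) follow the same pattern, with the shrinkage step replaced by the vector and matrix shrinkage operators of Eqs (19)–(21).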

3.1 ADMM Implementation

For each l1-term, we assign one quadratic penalty term and one auxiliary variable when implementing the ADMM algorithm to solve the optimization problem Eq (11). In particular, we introduce auxiliary variables xj, j = 1, 2, …, N, such that Eq (13) can be reformulated equivalently as (17) where ‖xj‖1 denotes the sum of the absolute values of all components of xj, while ‖y‖1 and ‖z‖1 denote the sums of the l2-norms of all 2 × 1 vectors and 2 × 2 matrices, respectively.

Applying ADMM, we obtain the following iterative algorithm (18)

The first three sub-problems can be solved explicitly by shrinkage. The x-subproblem and the y-subproblem are solved by (19) and (20), respectively, where y(n+1)(l) ∈ R2 denotes the component of y(n+1) located at l ∈ Ω, and the shrinkage operator shrink2 is defined as follows

Similarly, the solution of the z-subproblem is given by (21), where z(n+1)(l) ∈ S2×2 denotes the component of z(n+1) corresponding to the pixel l ∈ Ω, 0 is the square null matrix, and ‖⋆‖F denotes the Frobenius norm of a matrix.
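The shrinkage operators used above can be sketched as follows: `shrink1` is ordinary componentwise soft-thresholding, and `shrink2` shrinks a 2-vector toward zero along its own direction, as used for the y-subproblem (the z-subproblem applies the same formula with the Frobenius norm of a 2 × 2 matrix):

```python
import numpy as np

def shrink1(x, tau):
    """Componentwise soft-thresholding: prox of tau*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def shrink2(p, tau):
    """Vector shrinkage: shorten the vector p by tau along its direction,
    returning zero when ||p||_2 <= tau (the shrink2 operator in the text)."""
    norm = np.sqrt(np.sum(p**2))
    if norm <= tau:
        return np.zeros_like(p)
    return (1.0 - tau / norm) * p
```

For example, `shrink2` applied to the vector (3, 4) with threshold 2.5 halves it, since its length is 5.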

To summarize, the numerical algorithm proceeds in the following steps

Algorithm 1 TGVγ2 and Shearlet Transform Based Image Restoration by ADMM

1: procedure Input (γ0, γ1, β, λ, μj, μ, j = 1,2,3.)

2: Initialize the auxiliary variables and multipliers for j = 1, 2, 3.

3: Computing x(n+1), y(n+1) and z(n+1) by Eqs (19), (20) and (21) respectively.

4: Calculate from (22)

5: Calculate for n = 0, 1, 2, ….. by (23)

6: Output u(n+1) if the stopping criterion is satisfied.

7: end procedure

4 Experimental Results

In this section, numerical tests are presented to show the performance of our proposed model (M4) for restoring images contaminated with multiplicative noise and blur simultaneously. The results are compared with three models, namely (a) the M1-model [38], (b) the M2-model [1], and (c) the M3-model [2]. The proposed ADMM algorithm is tested on different images of size 256 × 256. In this work, two quality measures are used to assess the restoration results: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), given by PSNR = 10 log10 ( N2 / MSE(u, û) ), (24) SSIM = (2 μu μû + a1)(2 σuû + a2) / ((μu2 + μû2 + a1)(σu2 + σû2 + a2)), (25) where N denotes the maximal gray level of the image, MSE the mean squared error, u the clean image, û the restored image, and μu, μû, σu2, σû2, σuû, a1, a2 are the average values of u and û, the variances of u and û, the covariance of u and û, and two small positive constants, respectively. The SSIM value lies in [0, 1], with 1 for optimal quality. The relative error (RelErr) of the image is given by RelErr = ‖û − u‖2 / ‖u‖2. (26)
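As a sketch of the quality measures, the PSNR and RelErr of Eqs (24) and (26) can be computed as follows (SSIM omitted for brevity; the default `peak=255.0` is an assumption for 8-bit images):

```python
import numpy as np

def psnr(u, u_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB, Eq (24); `peak` is the maximal
    gray level N of the image."""
    mse = np.mean((np.asarray(u, float) - np.asarray(u_hat, float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def rel_err(u, u_hat):
    """Relative error ||u_hat - u||_2 / ||u||_2, Eq (26)."""
    u = np.asarray(u, float)
    u_hat = np.asarray(u_hat, float)
    return np.linalg.norm(u_hat - u) / np.linalg.norm(u)
```

For instance, a constant error of 10 gray levels on a 255-level image gives a PSNR of about 28.13 dB.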

All simulations reported here are implemented in MATLAB R2013a, and all tests were carried out on an Intel(R) Core(TM) i3-4160 CPU @ 3.60 GHz with 8.00 GB of RAM and a 64-bit operating system.

4.1 Choosing the Regularization Parameters

Like the total variation based models, the M4-model needs a proper balance between the data-fitting term and the regularization term. We set the values of the regularization parameter λ and the penalty parameter α empirically; better restoration results may be obtained by optimal tuning of the parameters. The restoration results are already promising under the following settings: λ ∈ [1.5, 10], with the number of iterations in the range [100, 700], according to the noise variances 0.01 and 0.03 and L = 6, 10. Within these ranges, good restoration results are achieved with improved PSNR, SSIM and RelErr values. Results are presented in Tables 1–7 and Figs 1–14.

Table 1. Restoration models (M1 and M4) evaluated using PSNR and RelErr.

Recall that the PSNR and RelErr of M1-model are reported in [38].

https://doi.org/10.1371/journal.pone.0161787.t001

Table 2. The number of iterations and time (in seconds) for distinct blurs and gamma noise levels showed in [38] using M1-model for ADM scheme and the number of ADMM iterations and time (in seconds) for different blurs and gamma noise levels reported using our model-M4.

https://doi.org/10.1371/journal.pone.0161787.t002

Table 3. Performance of M4-model using SSIM on Lena, Cameraman and Rice images.

https://doi.org/10.1371/journal.pone.0161787.t003

Table 4. Restoration models (M2 and M4) evaluated using PSNR, number of iterations and time.

Recall that the PSNR, number of iterations and time of M2-model are reported in [1].

https://doi.org/10.1371/journal.pone.0161787.t004

Table 5. Restoration models (M2 and M4) for deblurring with denoising evaluated using PSNR, number of iterations and CPU-time in seconds.

Recall that the PSNR, number of iterations and CPU-time in seconds of M2-model are reported in [1].

https://doi.org/10.1371/journal.pone.0161787.t005

Table 6. Restoration results of the M3-model and M4-model, evaluated using PSNR, the number of iterations and CPU-time in seconds.

Recall that the PSNR, number of iterations and CPU-time in seconds of the M3-model are displayed in [2].

https://doi.org/10.1371/journal.pone.0161787.t006

Table 7. Restoration results of M3 and M4 for multiplicative deblurring, evaluated using PSNR, number of iterations and CPU-time in seconds.

Recall that the PSNR, no. of iterations and CPU-time in seconds of M3-model are shown in [2].

https://doi.org/10.1371/journal.pone.0161787.t007

Fig 1. Performance of M4-model on Lena image.

(a) original image; (b) blurred image with fspecial(‘disk’,5); (c) noisy image with gamma noise of variance 0.03; (d) the clean image contaminated by disk blur and gamma noise (Gn) with variance 0.03; (e) the image restored by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g001

Fig 2. Performance of M4-model on Cameraman image.

(a) original image; (b) blurred image with fspecial(‘gaussian’,7,5); (c) noisy image with gamma noise of variance 0.03; (d) the given image degraded by Gaussian blur and gamma noise (Gn) with variance 0.03; (e) the image restored by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g002

Fig 3. Performance of M4-model on Rice image.

(a) original image; (b) blurred image with fspecial(‘motion’,7); (c) noisy image with gamma noise of variance 0.03; (d) the given image degraded by gamma noise (Gn) with variance 0.03 and motion blur; (e) the output image of the M4-model.

https://doi.org/10.1371/journal.pone.0161787.g003

Fig 4. Performance of M4-model on Phantom image.

(a) clean image; (b) degraded image with L = 10; (c) the restored image by our proposed model; (d) the observed image corrupted with L = 6 by multiplicative noise; (e) the recovered image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g004

Fig 5. Performance of M4-model on Cameraman image.

(a) original image; (b) degraded image with L = 10; (c) the restored image by our proposed model; (d) the observed image corrupted with L = 6; (e) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g005

Fig 6. Performance of M4-model on Parrot image.

(a) original image; (b) noisy image with L = 10; (c) the restored image by our proposed model; (d) the observed image corrupted by multiplicative noise with L = 6; (e) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g006

Fig 7. Performance of M4-model on Parrot image.

(a) original image; (b) blurred image with fspecial(‘motion’,5,30); (c) corrupted image with L = 10; (d) the original image contaminated by motion blur and multiplicative noise with L = 10; (e) the image recovered by the M4-model.

https://doi.org/10.1371/journal.pone.0161787.g007

Fig 8. Performance of M4-model on Cameraman image.

(a) original image; (b) blurred image with fspecial(‘motion’,5,30); (c) degraded image with L = 10; (d) the clean image degraded by motion blur and multiplicative noise with L = 10; (e) the restored image by the M4-model.

https://doi.org/10.1371/journal.pone.0161787.g008

Fig 9. Performance of M4-model on Parrot image.

(a) original image; (b) blurred image with fspecial(‘gaussian’,[7,7],2); (c) contaminated image with L = 10; (d) the given image contaminated by Gaussian blur and multiplicative noise with L = 10; (e) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g009

Fig 10. Performance of M4-model on Cameraman image.

(a) original image; (b) blurred image with fspecial(‘gaussian’,[7,7],2); (c) degraded image with L = 10; (d) the original image degraded by multiplicative noise with L = 10 and Gaussian blur; (e) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g010

Fig 11. Performance of M4-model on Cameraman image.

(a) clean image; (b) the observed image contaminated by Gaussian noise; (c) the observed image contaminated by Gamma noise; (d) the observed image contaminated by Rayleigh noise; (e) original image; (f) the recovered image by our proposed model (M4); (g) restoration by our proposed model (M4); (h) the restored image by the M4-model.

https://doi.org/10.1371/journal.pone.0161787.g011

Fig 12. Performance of M4-model on Parrot image.

(a) original image; (b) the observed image contaminated by Gaussian noise; (c) the observed image contaminated by Gamma noise; (d) the observed image contaminated by Rayleigh noise; (e) original image; (f) the recovered image by the M4-model; (g) restoration by the M4-model; (h) the recovered image by the M4-model.

https://doi.org/10.1371/journal.pone.0161787.g012

Fig 13. Performance of M4-model on Cameraman image.

(a),(f),(k) original image; (b),(g),(l) blurred images with fspecial(‘motion’,5,30), fspecial(‘gaussian’,[7,7],2) and psfMoffat([7,7],1,5); (c),(h),(m) degraded images with L = 10; (d),(i),(n) the clean image contaminated by motion blur, Gaussian blur and Moffat blur, respectively, together with gamma noise (L = 10); (e),(j),(o) the results restored by the M4-model.

https://doi.org/10.1371/journal.pone.0161787.g013

Fig 14. Performance of M4-model on Parrot image.

(a),(f),(k) original image; (b),(g),(l) blurred images with fspecial(‘motion’,5,30), fspecial(‘gaussian’,[7,7],2) and psfMoffat([7,7],1,5); (c),(h),(m) noisy images with L = 10; (d),(i),(n) the observed image degraded by motion blur, Gaussian blur and Moffat blur, respectively, together with gamma noise (L = 10); (e),(j),(o) the images restored by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g014

In the following subsections, we discuss the simulation results to demonstrate the effectiveness and feasibility of the M4-model. The experimental results of the M4-model are compared with those of the M1-, M2- and M3-models.

4.2 Comparison of M4-Model with M1-Model

In this subsection, we report some numerical tests analyzing the performance of our M4-model for restoring images degraded by blur and multiplicative noise simultaneously. The proposed M4-model is compared with the M1-model discussed in [38]. For this purpose, we consider the restoration of the 256 × 256 gray-scale images Lena, Rice and Cameraman, as illustrated in [38]. In the experiments, the clean image u is corrupted by a motion blur (MATLAB function fspecial(‘motion’,7)), a Gaussian blur (fspecial(‘gaussian’,7,5)) or a disk blur (fspecial(‘disk’,5)), together with multiplicative gamma noise of variance 0.01 or 0.03. The PSNR, RelErr and SSIM of the restored images, the required computational time and the number of iterations are displayed in Tables 1 and 2. Comparing the restoration results of the two methods in Figs 1–3 and Tables 1–3, we conclude that the M4-model recovers images of better quality than the M1-model in terms of visual quality, PSNR and RelErr. Moreover, we observe clearly that the M4-model removes noise and blur successfully and preserves more fine image details than the TV-based M1-model.
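The degradation pipeline described above (blur the clean image, then multiply by unit-mean gamma noise) can be sketched in NumPy. This is an assumption-laden sketch, not the authors' code: the horizontal averaging kernel is a simplified stand-in for MATLAB's fspecial(‘motion’,7) PSF, and the gamma parametrization by variance (shape 1/σ², scale σ²) is a standard choice for mean-one multiplicative noise.

```python
import numpy as np

def motion_blur_h(u, length=7):
    """Horizontal length-`length` motion blur (a simplified stand-in for
    MATLAB's fspecial('motion',7)), with edge replication at the border."""
    pad = length // 2
    up = np.pad(u, ((0, 0), (pad, pad)), mode='edge')
    return np.stack([up[:, j:j + u.shape[1]]
                     for j in range(length)]).mean(axis=0)

def degrade(u, length=7, var=0.03, rng=None):
    """Blur, then multiply by gamma noise with mean 1 and variance `var`."""
    rng = rng or np.random.default_rng(0)
    eta = rng.gamma(shape=1.0 / var, scale=var, size=u.shape)
    return motion_blur_h(u, length) * eta
```

Replacing `motion_blur_h` with any other PSF (disk, Gaussian, ...) gives the corresponding degraded test images.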

4.3 Comparison of M4-Model with M2-Model

In this section, we present numerical tests comparing the total generalized variation and shearlet transform based model (M4) with the M2-model [1] for recovering blurred images contaminated with multiplicative noise. For this purpose, the restoration results for the 256 × 256 gray-scale images Phantom, Cameraman and Parrot are shown. We examine the blurred and noisy images given in [1]. The visual quality of the restored images is compared with that of the M2-model quantitatively by means of PSNR, a widely used tool for image quality assessment; note that a high PSNR indicates a good-quality restoration.

4.3.1 Image Denoising.

First, we show that the proposed model also provides good restoration results for pure multiplicative noise removal, although it is designed for joint multiplicative denoising and deblurring. To this end, the test images are contaminated by multiplicative noise with L = 10 and L = 6 respectively. The M4-model produces good restoration results, both visually and quantitatively: compared with the images denoised by the M2-model in [1], it suppresses noise and avoids the staircasing effect more successfully, and preserves more fine features and details of the test images. With respect to the visual quality and PSNR results shown in Figs 4–6 and Table 4, we conclude that the TGV and shearlet-transform-based model provides better restoration results than the M2-model.

4.3.2 Image Deblurring and Denoising.

In this subsection, we examine the recovery of blurred images contaminated with multiplicative noise. For this purpose, two blurring functions are tested: a motion blur with length 5 and angle 30, and a Gaussian blur with a 7 × 7 window and standard deviation 2. The observed images are degraded by multiplicative noise with L = 10 after blurring. In this case, we set 2 × 10−5 as the stopping criterion for our model. Figs 7–10 show the degraded images and the recovered results for the proposed model, while Table 5 displays the PSNR values, the number of iterations, and the CPU times. Comparing the outputs of the M2-model reported in [1] with those of the M4-model, we found that the M4-model performs well both quantitatively and visually, and is able to preserve more image details and features. In conclusion, our model restores images successfully from multiplicative noise and blur and hence outperforms the M2-model.
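The 2 × 10⁻⁵ stopping criterion can be sketched generically. The excerpt does not spell out which quantity is monitored, so the sketch below assumes the common choice of the relative change between successive iterates; `step` is a placeholder for one ADMM update.

```python
import numpy as np

def iterate_until_stable(u0, step, tol=2e-5, max_iter=700):
    """Run a generic iteration with the relative-change stopping rule
    ||u_{k+1} - u_k|| / ||u_k|| < tol (an assumed form of the criterion)."""
    u = u0
    for k in range(1, max_iter + 1):
        u_next = step(u)
        # guard against division by zero on an all-zero iterate
        if np.linalg.norm(u_next - u) / max(np.linalg.norm(u), 1e-12) < tol:
            return u_next, k
        u = u_next
    return u, max_iter
```

The cap of 700 iterations matches the upper end of the iteration range quoted in Section 4.1.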

4.4 Comparison of M4-Model with M3-Model

In this subsection, some experimental results are presented to show the performance of the M4-model in comparison with the M3-model [2]. We analyze the simultaneously blurred and noisy images reported in [2]. The test images are Cameraman and Parrot, and the quality of the recovered images is measured by PSNR.

4.4.1 Image Denoising.

In the first example, the performance of the M4-model is demonstrated on three types of multiplicative noise, namely Gaussian, Gamma, and Rayleigh, as reported in [2]. Table 6 displays the PSNR values, the number of iterations and the CPU-time in seconds for the images recovered by the M3-model and the M4-model, respectively. We note that the M4-model performs better in terms of PSNR. In Figs 11 and 12, we further show the images restored by our proposed model. It is clear from Table 6 and Figs 11 and 12 that the denoised results of the M4-model are visually better than those of the M3-model. Recall that the recovered results of the M3-model are displayed in [2].
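The three multiplicative noise models can be simulated as unit-mean random fields, sketched below. The exact parametrizations used in [2] are not reproduced in this excerpt, so the Gaussian standard deviation and the rescaling of the Rayleigh field to unit mean are assumptions made for illustration.

```python
import numpy as np

def mult_noise(shape, kind, L=10, sigma=0.1, rng=None):
    """Unit-mean multiplicative noise fields for the three models tested:
    'gaussian' (mean 1, std `sigma`), 'gamma' (L-look speckle, variance 1/L),
    and 'rayleigh' (rescaled so that its mean is 1)."""
    rng = rng or np.random.default_rng(0)
    if kind == 'gaussian':
        return 1.0 + sigma * rng.standard_normal(shape)
    if kind == 'gamma':
        # mean L * (1/L) = 1, variance L * (1/L)^2 = 1/L
        return rng.gamma(shape=L, scale=1.0 / L, size=shape)
    if kind == 'rayleigh':
        # Rayleigh mean is scale * sqrt(pi/2); choose scale for unit mean
        return rng.rayleigh(scale=np.sqrt(2.0 / np.pi), size=shape)
    raise ValueError(kind)
```

A degraded observation is then f = (Ku) · η with η drawn from one of these fields.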

4.4.2 Image Deblurring and Denoising.

In this subsection, we test the performance of our proposed model for image restoration when the observed image is corrupted simultaneously by Gaussian, motion or Moffat blur and gamma noise with L = 10. The MATLAB commands for the three kinds of blur are fspecial(‘gaussian’,[7,7],2), fspecial(‘motion’,5,30), and psfMoffat([7,7],1,5), as reported in [2]. In Table 7, we display the PSNR values, the number of iterations, and the computational time of the images recovered by the M3-model and the M4-model. In Figs 13 and 14, we show the images recovered by the M4-model; the corresponding M3-model results are reported in [2]. It is clear from Figs 13 and 14 and Table 7 that the results of the proposed M4-model are quantitatively and visually better than those of the M3-model.

4.5 Extension to Multichannel Image Reconstruction

In this section, we extend the application of the proposed model (M4) to restoring color images corrupted with multiplicative noise and blur. We designed a set of tests covering different image sizes, resolutions, blurs and noise levels. We tested several images, including Lena (256 × 256, bit depth 24), parrot (216 × 216, bit depth 24, resolution 72 dpi), eye (2788 × 1864, bit depth 24, resolution 300 dpi) and butterfly (450 × 450, bit depth 32), all of which contain a nice mixture of details, shaded areas, textures and flat regions. The quality of the recovered images is measured by the signal-to-noise ratio (SNR), expressed in decibels (dB) and defined by

SNR = 10 log₁₀( ‖u − ū‖² / ‖u − û‖² ), (27)

where u is the original image, ū is the mean intensity of u and û is the restored image.

Suppose u = (u(1), …, u(m)) is an m-channel image, where u(j) represents the jth channel, j = 1, 2, …, m, and its noisy and blurry observation is given by Eq (2), in which case the blurring operator may act across channels. For instance, in the standard RGB (red, green and blue) color system, u = (u(1), u(2), u(3)). We restore u by solving the multichannel analogue of the M4 minimization problem (Eq (28)), where m represents the number of layers of the vectorial image; the derivation of the algorithm is similar to that of Algorithm 1.

To demonstrate the performance of the proposed model (M4), we ran four tests with within-channel and cross-channel blurs. Let the cross-channel kernel K be the 3 × 3 array of kernels (Kij), where the Kij with i = j are within-channel kernels and the Kij with i ≠ j are cross-channel kernels. For brevity, let M(length,theta) denote the motion blur with the given motion length and angle theta in the anticlockwise direction, G(hsize,sigma) the Gaussian blur with square support size ‘hsize’ and standard deviation ‘sigma’, and A(hsize) the average blur with square support size ‘hsize’. In our tests, we generated the cross-channel blurring kernels in the same way as in [57], as follows:

  • Generate 9 kernels: [M(11, 45), M(21, 90), M(41, 135), G(7, 5), G(9, 5), G(11, 5), A(13), A(15), A(17)];
  • Randomly assign these 9 kernels to K11, K12, K13, K21, K22, K23, K31, K32, K33;
  • Multiply each Kij by ωij, where ωij is the (i, j)th element of a 3 × 3 weight matrix W.
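The cross-channel blurring step described above can be sketched as follows; a naive zero-padded convolution stands in for the FFT-based operators typically used in practice, and the kernel bank and weight matrix are passed in explicitly.

```python
import numpy as np

def conv2_same(u, k):
    """'Same'-size 2-D correlation with zero padding (small kernels only)."""
    kh, kw = k.shape
    up = np.pad(u, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(u, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * up[i:i + u.shape[0], j:j + u.shape[1]]
    return out

def cross_channel_blur(u_rgb, kernels, W):
    """Blur an m-channel image: output channel i is
    sum_j W[i, j] * (kernels[i][j] applied to channel j)."""
    m = u_rgb.shape[2]
    out = np.zeros_like(u_rgb, dtype=float)
    for i in range(m):
        for j in range(m):
            out[:, :, i] += W[i, j] * conv2_same(u_rgb[:, :, j],
                                                kernels[i][j])
    return out
```

Setting W to the 3 × 3 identity matrix reduces this to pure within-channel blurring.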

Note that in the tests, within-channel blurring corresponds to setting W to the 3 × 3 identity matrix, while cross-channel blurring uses a weight matrix W with nonzero off-diagonal entries.

All parameters were set identically to those described in Section 4.1. In the first experiment, we used within-channel blurring (W equal to the 3 × 3 identity matrix) to generate a blurry image, and then corrupted its pixels by multiplicative noise (also known as speckle) with σ² = 0.1. The restored image in Fig 15 and the SNR value in Table 8 clearly demonstrate the good restoration performance of the proposed model (M4).

Fig 15. Restoration of the 256 × 256 RGB Lena image by the proposed model (M4).

(a) original Lena image; (b) image blurred by the within-channel kernel; (c) the observed image degraded by the blurring operator and multiplicative noise with σ² = 0.1; (d) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g015

Table 8. Restoration results of the proposed model (M4) for multiplicative deblurring, evaluated using SNR, number of iterations and CPU-time in seconds.

https://doi.org/10.1371/journal.pone.0161787.t008

In addition to the grayscale parrot image, we also tested an RGB parrot image (216 × 216). We first generated a blurry image using the cross-channel kernel and then contaminated it by multiplicative noise with σ² = 0.2. From Table 8 and Fig 16, it is evident that the proposed model (M4) achieves the desired outcome in terms of SNR and qualitative results, which validates its effectiveness for color images.

Fig 16. Performance of the M4-model on the 216 × 216 color parrot image.

(a) original image; (b) image blurred by the cross-channel kernel; (c) the observed image degraded by blur and multiplicative noise with σ² = 0.2; (d) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g016

In the third and fourth experiments, the eye and butterfly color images are corrupted by the cross-channel kernel and multiplicative noise with σ² = 0.1 and σ² = 0.2, respectively. The restoration results shown in Figs 17 and 18 and the SNR values in Table 8 clearly demonstrate the desirable performance of the proposed model (M4): object edges are well preserved and the staircase effect is minimized, leading to a more faithful restoration of real images.

Fig 17. Performance of M4-model on color eye image 2788 × 1864.

(a) original image; (b) image blurred by the cross-channel kernel; (c) the observed image degraded by the blurring kernel and multiplicative noise with σ² = 0.1; (d) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g017

Fig 18. Performance of the M4-model on the 450 × 450 color butterfly image.

(a) original image; (b) image blurred by the cross-channel kernel; (c) the observed image degraded by blur and multiplicative noise with σ² = 0.2; (d) the restored image by our proposed model (M4).

https://doi.org/10.1371/journal.pone.0161787.g018

5 Conclusion

This paper describes a new variational model (M4) for removing blur and multiplicative noise simultaneously. The proposed model combines a total generalized variation filter with the shearlet transform. We developed an efficient alternating direction method of multipliers (ADMM) for the optimization problem arising from the minimization of the proposed energy functional. We analyzed the blurred and noisy images reported in [1, 2, 38], consisting of smooth regions and edges. From the numerical results, we found that the proposed M4-model preserves sharp edges as well as the other models do, while reducing the blocky effects in smooth regions better than recent variational models. In short, the combined method exploits the benefits of both the total generalized variation penalty and the shearlet transform in image restoration tasks. Extensive experiments on real vectorial images validate the very good performance of the proposed model. The ideas in this work can be extended to many other mathematical problems in image processing. Developing fast algorithms for solving the differential equations arising from the model minimization, and adopting an automated, spatially dependent regularization parameter selection framework for image restoration, may be considered in future work.

Author Contributions

  1. Conceptualization: AU WC HS MAK.
  2. Data curation: AU MAK.
  3. Formal analysis: AU WC HS MAK.
  4. Funding acquisition: WC AU.
  5. Investigation: AU MAK.
  6. Methodology: AU MAK.
  7. Project administration: AU.
  8. Resources: AU WC MAK HS.
  9. Software: AU MAK.
  10. Supervision: AU WC.
  11. Validation: AU MAK.
  12. visualization: AU WC MAK.
  13. Writing – original draft: AU WC.
  14. Writing – review & editing: WC AU.

References

  1. Dong Y and Zeng T. A convex variational model for restoring blurred images with multiplicative noise. SIAM Journal on Imaging Sciences. 2013; 06:1598–1625.
  2. Zhao X, Wang F and Ng M. A new convex optimization model for multiplicative noise and blur removal. SIAM Journal on Imaging Sciences. 2014; 07(01):456–475.
  3. Chartrand T and Asaki T. A variational approach to reconstructing images corrupted by Poisson noise. Journal of Mathematical Imaging and Vision. 2007; 27:257–263.
  4. Rudin L, Osher S, and Fatemi E. Non-linear total variation based noise removal algorithms. Physica D. 1992; 60:259–268.
  5. Sciacchitano F, Dong Y, and Zeng T. Variational approach for restoring blurred images with Cauchy noise. SIAM Journal on Imaging Sciences. 2015; 08:1896–1922.
  6. Xiao Y, Zeng T, Yu J, and Ng M. Restoration of images corrupted by mixed Gaussian-impulse noise via l1-l0 minimization. Pattern Recognition. 2011; 44:1708–1728.
  7. Aubert G and Kornprobst P. Mathematical problems in image processing. Applied Mathematical Sciences. Springer, Berlin, Germany. 2000, 147.
  8. Chambolle A, Caselles V, Novaga M, Cremers D, Pock T. An introduction to total variation for image analysis. https://hal.archives-ouvertes.fr/hal-00437581. 2008; 04:2278–4209.
  9. Chan T, Marquina A, and Mulet P. High-order total variation based image restoration. SIAM Journal on Scientific Computing. 2000; 22:503–516.
  10. Chen K and Cheng X. A non-linear multigrid method for total variation minimization from image restoration. Journal of Scientific Computing. 2007; 41(02):115–138.
  11. Glowinski R, Lin P and Pan XB. An operator-splitting method for a liquid crystal model. Computer Physics Communications. 2003; 152:242–252.
  12. Hirakawa K and Parks T. Image denoising using total least squares. IEEE Transactions on Image Processing. 2006; 15:2730–2742. pmid:16948317
  13. Lysaker M, Lundervold A, and Tai X. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Transactions on Image Processing. 2003; 12:1057–7149.
  14. Lysaker M and Tai X. Iterative image restoration combining total variation minimization and a second-order functional. International Journal of Computer Vision. 2006; 66:5–18.
  15. You Y and Kaveh M. Fourth-order partial differential equations for noise removal. IEEE Transactions on Image Processing. 2000; 09:1723–1730.
  16. Feng W, Lei H, and Gao Y. Speckle reduction via higher order total variation approach. IEEE Transactions on Image Processing. 2014; 23:1831–1843.
  17. Wu Y and Feng X. Speckle noise reduction via non-convex high total variation approach. Mathematical Problems in Engineering. 2015; 10:1–11.
  18. Huang L, Xiao L, and Wei Z. Multiplicative noise removal via a novel variational model. EURASIP Journal on Image and Video Processing. 2010; 10:768–782.
  19. Huang YM, Ng M, and Wen Y. A new total variation method for multiplicative noise removal. SIAM Journal on Imaging Sciences. 2009; 02:22–40.
  20. Jin Z and Yang X. Analysis of a new variational model for multiplicative noise removal. Journal of Mathematical Analysis and Applications. 2010; 362:415–426.
  21. Liu G, Zeng X, Tian F, Li Z and Chaibou K. Speckle reduction by adaptive window anisotropic diffusion. Signal Processing. 2009; 89:233–243.
  22. Longa Z and Younana N. Denoising of images with multiplicative noise. Starkville, MS 39759, USA.
  23. Rudin L, Lions P, and Osher S. Multiplicative denoising and deblurring: Theory and algorithms. In Geometric Level Set Methods in Imaging, Vision, and Graphics. Springer, Berlin, Germany. 2003; 04:103–120.
  24. Shen J. On the foundation of vision modeling: I. Weber's law and Weberized total variation restoration. Physica D. 2003; 175:241–251.
  25. Sheng C, Xin Y, Liping Y, and Kun S. Total variation based speckle reduction using multigrid algorithm for ultrasound images. Springer-Verlag Berlin Heidelberg. 2005; 36:245–252.
  26. Burkhardt C. Speckle in ultrasound B-mode scans. IEEE Transactions on Sonics and Ultrasonics. 1978; 25:1–6.
  27. Goodman J. Some fundamental properties of speckle. Journal of the Optical Society of America. 1976; 66:1145–1150.
  28. Shi B, Huang L, Feng Z. Fast algorithm for multiplicative noise removal. Journal of Visual Communication and Image Representation. 2012; 23:126–133.
  29. Munson D and Visentin R. A signal processing view of strip-mapping synthetic aperture radar. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1989; 37:2131–2147.
  30. Guang X, Le J, Huang J, and Jun L. A fast high-order total variation minimization method for multiplicative noise removal. Mathematical Problems in Engineering. 2013; 20:1–13.
  31. Aubert G and Aujol J. A variational approach to removing multiplicative noise. SIAM Journal on Applied Mathematics. 2008; 68:925–946.
  32. Shi J and Osher S. A non-linear inverse scale space method for a convex multiplicative noise model. SIAM Journal on Imaging Sciences. 2008; 01(03):294–321.
  33. Bioucas D and Figueiredo M. Total variation restoration of speckled images using a split-Bregman algorithm. In Proceedings of the IEEE International Conference on Image Processing (ICIP 2009). 2009; Cairo, Egypt.
  34. Steidl G and Teuber T. Removing multiplicative noise by Douglas-Rachford splitting methods. Journal of Mathematical Imaging and Vision. 2010; 36:168–184.
  35. Durand S, Fadili J, and Nikolova M. Multiplicative noise cleaning via a variational method involving curvelet coefficients. In Scale Space and Variational Methods. Springer, Voss, Norway. 2009; 282–294.
  36. Chumchob N, Chen K and Loeza C. A new variational model for removal of combined additive and multiplicative noise and a fast algorithm for its numerical approximation. International Journal of Computer Mathematics. 2013; 90:140–161.
  37. Chen Y, Feng W, Ranftl R, Qiao H, and Pock T. A higher-order MRF based variational model for multiplicative noise reduction. IEEE Signal Processing Letters. 2014; 21:1370–1374.
  38. Wang F and Ng M. Fast minimization method for blur and multiplicative noise removal. International Journal of Computer Mathematics. 2013; 90:48–61.
  39. Liu P, Xiao L, and Zhang J. Higher degree total variation revisited: New weighted L1-L2 solvers for image restoration. SIAM Journal on Imaging Sciences. 2014; 1–21.
  40. Marquina A and Osher S. Explicit algorithms for a new time dependent model based on level set motion for non-linear deblurring and noise removal. SIAM Journal on Scientific Computing. 2000; 22:387–405.
  41. Osher S, Burger M, Goldfarb D, Xu J, Yin W. An iterative regularization method for total variation based image restoration. Multiscale Modeling and Simulation. 2005; 02:460–489.
  42. Huang Y, Ng M, and Zeng T. The convex relaxation method on deconvolution model with multiplicative noise. Communications in Computational Physics. 2013; 13:1066–1092.
  43. Bredies K, Kunisch K, and Pock T. Total generalized variation. SIAM Journal on Imaging Sciences. 2010; 03:492–526.
  44. He C, Hu C, Yang X, He H and Zhang Q. An adaptive total generalized variation model with augmented Lagrangian method for image denoising. Mathematical Problems in Engineering. 2015; 10:1–11.
  45. Knoll F, Bredies K, Pock T, and Stollberger R. Second order total generalized variation (TGV) for MRI. Magnetic Resonance in Medicine. 2011; 65:480–491. pmid:21264937
  46. Labate D, Lim W, Kutyniok G, and Weiss G. Sparse multidimensional representation using shearlets. International Society for Optics and Photonics. 2005; 59140U.
  47. Guo K and Labate D. Optimally sparse multidimensional representation using shearlets. SIAM Journal on Mathematical Analysis. 2007; 23:298–318.
  48. Candès E and Donoho D. Ridgelets: A key to higher-dimensional intermittency. Philosophical Transactions of the Royal Society of London. 1999, 357.
  49. Do M and Vetterli M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Transactions on Image Processing. 2005; 14:2091–2106.
  50. Candès E, Demanet L, Donoho D, and Ying L. Fast discrete curvelet transforms. Multiscale Modeling and Simulation. 2006, 05.
  51. Easley G, Labate D, and Lim W. Sparse directional image representations using the discrete shearlet transform. Applied and Computational Harmonic Analysis. 2008; 25:25–46.
  52. Häuser S, Steidl G. Fast finite shearlet transform: a tutorial. 2014.
  53. Bredies K. Recovering piecewise smooth multichannel images by minimization of convex functionals with total generalized variation penalty. Institute of Mathematics and Scientific Computing, University of Graz, Austria. 2012, 2012-006.
  54. Boyd S, Parikh N, Chu E, Peleato B and Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning. 2011; 03:1–122.
  55. Deng W and Yin W. On the global and linear convergence of the generalized alternating direction method of multipliers. Rice University CAAM Tech Report. 2012, TR12-14.
  56. He B, Liao L, Han D and Yang H. A new inexact alternating directions method for monotone variational inequalities. Mathematical Programming. 2002; 92:103–118.
  57. Yang J, Yin W, Zhang Y, and Wang Y. A fast algorithm for edge-preserving variational multichannel image restoration. SIAM Journal on Imaging Sciences. 2009; 02:569–592.