
Poisson noisy image restoration via overlapping group sparse and nonconvex second-order total variation priors

  • Kyongson Jon,

    Roles Software, Writing – original draft, Writing – review & editing

    Affiliations Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R. China, Faculty of Mathematics, Kim Il Sung University, Pyongyang, D.P.R. of Korea

  • Jun Liu,

    Roles Conceptualization, Formal analysis

    Affiliation Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R. China

  • Xiaoguang Lv,

    Roles Validation, Visualization

    Affiliation School of Science, Jiangsu Ocean University, Lianyungang, Jiangsu, P.R. China

  • Wensheng Zhu

    Roles Methodology, Supervision

    wszhu@nenu.edu.cn

    Affiliation Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R. China

Abstract

The restoration of Poisson noisy images is an essential task in many imaging applications because of the uncertainty in the number of discrete particles incident on the image sensor. In this paper, we consider a hybrid regularizer for Poisson noisy image restoration. The proposed regularizer, which combines the overlapping group sparse (OGS) total variation with the high-order nonconvex total variation, can alleviate staircase artifacts while preserving the original sharp edges. We use the framework of the alternating direction method of multipliers (ADMM) to design an efficient minimization algorithm for the proposed model. Since the objective function is the sum of a non-quadratic log-likelihood and a nonconvex nondifferentiable regularizer, we propose to solve the resulting intractable subproblems by the majorization-minimization (MM) method and the iteratively reweighted least squares (IRLS) algorithm, respectively. Numerical experiments show the efficiency of the proposed method for Poissonian image restoration, including denoising and deblurring.

1 Introduction

In many real applications, the measurement of light during image recording inevitably leads to uncertainty in the number of particles striking the image sensor. In other words, the finite number of energy-carrying electrons or photons in an image sensor causes statistical fluctuations, which are usually modeled by the Poisson distribution [1–3]. As another degradation factor, image formation may involve undesirable blurring such as motion or out-of-focus blur. By rewriting the images concerned in column-major vectorized form, we can regard the observed image as a realization of a Poisson random vector with expected value Hf + b, where H is an n² × n² convolution matrix corresponding to the point spread function (PSF) that models the blur effects, f is the original image, and b is a nonnegative constant background [4–6].

Poissonian image restoration calls for applying an inverse procedure to approximate f from an observation g degraded by blur and Poisson noise. A general option for tackling this problem is to use maximum a posteriori (MAP) estimation from a Bayesian perspective. Taking the Poisson statistics of the noise into account, we can write the conditional distribution of the observed data g as (1) where (⋅)i,j hereafter indicates the vector element corresponding to position (i, j). On the premise that we adopt a Gibbs prior [7–9] of the form (2) for f, the use of Bayes’ rule and Stirling’s approximation leads to the following minimization problem [10, 11]: (3) where λ > 0 is a regularization parameter, ϕ(f) is a given nonnegative function (often called the regularizer), and DKL(Hf||g) denotes the generalized Kullback–Leibler (KL) divergence of Hf from g, as shown below: (4)
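As a concrete illustration, the generalized KL divergence is commonly written as DKL(Hf||g) = Σi [(Hf)i − gi + gi log(gi/(Hf)i)], with the convention that terms with gi = 0 contribute only (Hf)i. The following minimal NumPy sketch assumes this standard form; the function name and the eps safeguard are our own illustrative choices:

```python
import numpy as np

def kl_divergence(Hf, g, eps=1e-12):
    """Generalized Kullback-Leibler divergence D_KL(Hf || g).

    Standard form sum(Hf - g + g*log(g/Hf)); terms with g = 0
    contribute only Hf (the 0*log 0 = 0 convention)."""
    Hf = np.asarray(Hf, dtype=float)
    g = np.asarray(g, dtype=float)
    log_term = np.where(g > 0, g * np.log((g + eps) / (Hf + eps)), 0.0)
    return float(np.sum(Hf - g + log_term))
```

The divergence vanishes when Hf = g and is positive otherwise, which is why it serves as a data fidelity term for Poisson noise.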

Since the efficiency of the restoration model hinges on the choice of image regularizer, numerous authors have proposed different regularization methods. Tikhonov regularization, of the form ϕ(f) = ‖Γf‖² (usually, Γ is an identity or difference matrix), is probably one of the most classical methods. In the literature, several methods have been proposed to efficiently solve Eq (3) with the Tikhonov regularizer, including the scaled gradient projection method [12], the split-gradient method [13], the projected Newton-conjugate gradient (PNCG) method [2], the quasi-Newton projection method [14], the hybrid gradient projection-reduced Newton method [15, 16] and the scaled gradient projection-type method [17]. Although the KL divergence and the Tikhonov regularizer are well known to be convex, and a unique solution to Eq (3) is guaranteed at modest computational cost, this regularizer tends to over-smooth important details in the restored images.

Another classical method, the total variation (TV) regularizer ϕ(f) = ‖f‖TV (see Section 2), has been widely adopted since noisy signals have a larger TV than the original signal [18]. Bardsley et al. [19] proved that minimizing the Poisson likelihood function in conjunction with TV regularization is well-posed, and they also verified its convergence with a nonnegatively constrained, projected quasi-Newton minimization algorithm. Landi et al. [20] also adopted the TV regularizer for denoising medical images collected by photon-counting devices and extended the PNCG method introduced in [2]. Tao et al. [21] appended a non-negativity constraint and used the half-quadratic splitting method to solve the related minimization problem. In addition to the methods mentioned above, other authors employed effective optimization techniques for solving the TV-regularized Poisson deblurring model, including the alternating extra-gradient method [22], the primal-dual method [23, 24] and the alternating direction method of multipliers (ADMM) [25, 26]. It is well known that TV regularization methods can preserve fine details such as sharp edges, but they often exhibit false jump discontinuities, causing spurious staircase artifacts in smooth regions of the restored images. This may be because TV regularization tends to transform smooth regions of the solution into piecewise constant ones while solving the minimization problem.

One possible remedy for this oversharpening behavior is to introduce high-order TV, which penalizes jumps more strongly. Zhou et al. [27] selected the second-order TV ϕ(f) = ‖f‖HTV (see Section 2) as a regularizer and solved the resulting model with an alternating minimization scheme. Their numerical experiments indicated the advantage of the second-order TV in avoiding staircase artifacts, compared with classical TV regularization [28]. However, high-order TV usually over-smooths the restored signal [29], so hybrid regularizations combining the high-order TV with other regularizers have also been considered. Among the hybrid models, Zhang et al. [30] obtained better results than the model with a strongly convex term proposed in [31] by replacing the first-order TV with a second-order TV. Jiang et al. [11] combined the first-order and second-order TV priors to restore images contaminated by Poisson noise and solved the minimization model by the ADMM. To further improve the restoration results, Liu et al. [5] studied a spatially adapted regularization parameter updating scheme. As an adaptive balancing scheme between the first and second derivatives, Wang et al. [32] established a Poisson noise removal framework with an iteratively reweighted total generalized variation (TGV) model based on the EM algorithm for the denoising case. Zhang et al. [33] combined the fractional-order TV with non-local TV to alleviate staircase artifacts in the cartoon component as well as to preserve details in the texture component. Ma et al. [34] proposed a hybrid regularizer combining a patch-based sparsity-promoting prior over a learned dictionary with a pixel-based total variation prior, but it requires additional strategies to reduce the high computational cost.

Considering that the tight framelet framework can facilitate the sparse representation of images [35], Shi et al. [36] combined framelet regularization with non-local TV and a non-negativity constraint in the non-blind stage of blind Poisson deblurring. Zhang et al. [37] proposed a nonconvex and noncontinuous model with the ℓ0 norm of the tight wavelet representation as a regularizer. Fang and Yan [38] combined the ℓ1 norm of framelet coefficients with TV for Poissonian image deconvolution to reduce the artifacts yielded by using TV regularization alone. Beyond the transform-domain sparsity mentioned above, overlapping group sparsity, which describes the natural tendency of large values to arise near other large values rather than in isolation, has received wide attention in the field of image restoration due to its remarkable ability to exploit the structural information of natural image gradients [4, 39–41]. In particular, for Poisson noisy restoration, Lv et al. [4] focused on regularization by total variation with overlapping group sparsity (TVOGS) and showed that the solution obtained within the ADMM framework is superior to first-order TV regularized methods [25, 42] and high-order regularized methods [27].

To sum up, the high-order TV or the overlapping group sparse prior can be the most feasible option for alleviating staircase artifacts. Nonetheless, due to the over-penalizing tendency of the ℓ1 norm [43], the high-order TV often causes over-smoothing at edges, while overlapping group sparsity tends to smooth out blockiness in the restored images more globally. Adam and Paramesran, motivated by the work of Chen et al. [44], proposed a hybrid regularization model that combines the overlapping group sparse total variation with the high-order nonconvex TV to denoise images corrupted by Gaussian noise [40]. Recently, they extended it to non-blind deblurring under Gaussian noise [45]. In this paper, we present an extension of Adam’s regularizer to the problem of restoring Poisson noisy images. This regularizer takes advantage of both the high-order nonconvex regularizer and the overlapping group sparsity regularizer, i.e., it can simultaneously promote the pixel-level and structured sparsities of the natural image in the gradient domain. Therefore, we expect that it can not only effectively reduce staircase artifacts but also preserve sharp edges in the restored image. However, its optimization is more challenging than Gaussian deblurring due to the ill-conditioned non-quadratic data fidelity term. We employ the alternating direction method of multipliers to solve the derived minimization problem. In particular, during the ADMM iterations, we solve two intractable subproblems: one arises from the overlapping group sparse prior and is solved by the majorization-minimization (MM) method with a well-known quadratic majorizer; the other arises from the nonconvex ℓp (0 < p < 1) quasi-norm and is solved by the iteratively reweighted least squares (IRLS) algorithm, motivated by the fact that the IRLS is guaranteed to converge to a local minimum and offers better theoretical and practical properties than the iteratively reweighted ℓ1 (IRL1) algorithm [52–55].

The rest of this paper is organized as follows. In Section 2, we introduce some notations that will be used to formulate our proposed method. We also briefly review essential concepts and tools, including the overlapping group sparsity prior, the ADMM algorithm, the MM method, and the IRLS algorithm. Section 3 establishes a novel Poissonian image deblurring model which comprises the generalized Kullback–Leibler (KL) divergence as the data fidelity term and combined first- and second-order total variation priors with overlapping group sparsity as the regularization term. Subsequently, we derive an effective algorithm for minimizing the nonconvex and nonsmooth objective function under the ADMM optimization framework. Section 4 demonstrates the superiority of our method via numerical experiments, followed by an analysis of the parameter settings and convergence behavior. We conclude the paper in Section 5.

2 Preliminaries

2.1 Notations

We first introduce some notations used throughout this paper. Assuming that all images under consideration are gray-scale images of size n × n, we lexicographically stack the columns of an image matrix into a vector. For example, the (i, j)th pixel of image f becomes the ((j − 1)n + i)th entry of the vector, written as fi,j. Under periodic boundary conditions for image f, we introduce the discrete forward difference operator defined by (5) and similarly the backward difference operator , (6) where the forward and backward sub-operators are defined as: (7)
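Under periodic boundary conditions, these differences can be implemented with circular shifts. The NumPy sketch below uses one common sign convention; the paper's Eq (5)–(7) may differ in sign or orientation:

```python
import numpy as np

def forward_diff(f):
    """Forward differences with periodic boundaries.
    Returns the (vertical, horizontal) difference arrays."""
    d1 = np.roll(f, -1, axis=0) - f   # f[i+1, j] - f[i, j]
    d2 = np.roll(f, -1, axis=1) - f   # f[i, j+1] - f[i, j]
    return d1, d2

def backward_diff(f):
    """Backward differences with periodic boundaries; the negative
    adjoint of the forward difference operator."""
    d1 = f - np.roll(f, 1, axis=0)
    d2 = f - np.roll(f, 1, axis=1)
    return d1, d2
```

A useful sanity check is the adjoint identity <∇1 f, u> = −<f, ∇1ᵀ u>, which holds exactly for these circular-shift implementations.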

Second-order difference operators can be recursively defined by using the first-order difference operators such as (8) and other second-order difference operators can be similarly defined. Based on the above definitions, we denote the first and second-order TV of f as (9) where ‖⋅‖2 means the Euclidean norm, .

2.2 TVOGS and MM algorithm

To describe the structural sparsity of the image gradient, we define a pixel-group of size K × K (K is called the group size) centered at every position (i, j) of a two-dimensional array v = (vi,j)n × n as (10) where , and ⌊⋅⌋ denotes the floor function, i.e., it maps any real number to the largest integer less than or equal to it. Let v(i,j),K be the column-major vectorized form of , i.e., . Then, as in Liu et al.’s work [46], we can denote the TVOGS regularizer ϕTO(f) as (11) where and denote the horizontal and vertical gradients of f, respectively, and is the K-group OGS function of . For notational simplicity, we denote ϕO(∇f) = ϕO(∇1 f) + ϕO(∇2 f).
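The K-group OGS function can be sketched as the sum, over every pixel, of the Euclidean norm of the K × K group centred there. The sketch below uses periodic wrapping at the borders for simplicity, which may differ from the exact group definition in Eq (10):

```python
import numpy as np

def ogs_value(v, K=3):
    """Overlapping group sparsity functional: sum over every pixel of
    the l2 norm of the K x K group centred there.  Periodic wrapping
    at the borders is an illustrative simplification."""
    sq = np.asarray(v, dtype=float) ** 2
    m1 = (K - 1) // 2
    group_sq = np.zeros_like(sq)
    # accumulate the squared entries of the window i-m1..i+m2, j-m1..j+m2
    for di in range(-m1, K - m1):
        for dj in range(-m1, K - m1):
            group_sq += np.roll(sq, (-di, -dj), axis=(0, 1))
    return float(np.sum(np.sqrt(group_sq)))
```

For K = 1 the functional reduces to the ℓ1 norm, which shows how OGS generalizes pixel-wise sparsity to structured sparsity.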

Next, we review a minimization problem of the form (12) and we denote the objective function of Eq (12) as R(v).

To solve the above sophisticated OGS model, the majorization-minimization approach is used with a quadratic majorizer ψ(v|u) of ϕO(v), namely, ψ(v|u) ≥ ϕO(v) for all v and u ≠ 0, with equality if v = u. Therefore, instead of the intractable direct optimization of R(v), we can approximately solve it by iteratively minimizing a sequence of surrogate convex problems: (13) where Λ(u) is a diagonal matrix containing the following entries along its diagonal (14) with l = (r − 1)n + t, for r, t = 1, 2, ⋯, n. Eq (13) has the closed-form solution (15) which serves as the input for the next MM iteration. The matrix inversion can be done efficiently via simple component-wise calculations. In summary, we obtain the following algorithm for Eq (12).

Algorithm 1 MM algorithm for minimizing Eq (12)

input λ > 0, K, Nin(inner iterations)

initialize v(0) = v0, k = 0.

while k < Nin do             ▹ MM inner loop

 1: compute the diagonal majorizer Λ(v(k)) via Eq (14);

 2: v(k+1) = (I + λΛ(v(k)))−1 v0, the closed-form solution Eq (15);

 3: k = k + 1;

end while
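To make Algorithm 1 concrete, the sketch below applies the MM scheme to the assumed proximal form of Eq (12), min_v ½‖v − v0‖² + λϕO(v). Each pass builds the diagonal majorizer Λ(u) by accumulating the reciprocal norms of all groups containing a pixel, then applies the component-wise closed-form update v = v0/(1 + λΛ); the periodic wrapping and exact constants are illustrative assumptions:

```python
import numpy as np

def ogs_mm_denoise(v0, lam=0.5, K=3, n_iter=5, eps=1e-10):
    """MM iterations (Algorithm 1 style) for the assumed problem
        min_v 0.5*||v - v0||^2 + lam * phi_O(v).
    Illustrative sketch with periodic wrapping at the borders."""
    m1 = (K - 1) // 2
    offs = range(-m1, K - m1)
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(n_iter):
        sq = v ** 2
        # squared l2 norm of the K x K group centred at each pixel
        group_sq = sum(np.roll(sq, (-di, -dj), axis=(0, 1))
                       for di in offs for dj in offs)
        inv_norm = 1.0 / np.sqrt(group_sq + eps)
        # Lambda[l] accumulates 1/||group|| over all groups containing l
        Lam = sum(np.roll(inv_norm, (di, dj), axis=(0, 1))
                  for di in offs for dj in offs)
        v = v0 / (1.0 + lam * Lam)   # component-wise closed-form update
    return v
```

Because Λ is strictly positive, every MM pass shrinks each component toward zero, which is the expected behaviour of a sparsity-promoting proximal step.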

2.3 Alternating direction method of multipliers

The standard form of the ADMM [47] is designed to tackle distributed convex optimization problems with linear equality constraints, and many variants have been developed [48–50]. Recently, Wang et al. [51] extended it to problems involving nonconvex and nonsmooth multi-block objectives, as follows: (16) where and are primal variables with the corresponding coefficient matrices and , respectively, is a continuous function, and is a smooth function.

In general, can be nonsmooth and nonconvex, and can be nonconvex. By introducing a Lagrangian multiplier and a penalty parameter δ > 0 for the linear constraint , we obtain the augmented Lagrangian in a scaled form, (17)

The ADMM algorithm proceeds by alternately updating each variable (x(k+1), y(k+1)) until it reaches a stationary point of Eq (17) or meets a stopping criterion. This yields the following iterative scheme: (18)

The following lemma establishes the convergence result of ADMM with the nonconvex and nonsmooth objective [51].

Lemma 1. Let and A = [A1, …, As]. Suppose that assumptions S1–S5 below hold; then the sequence generated by Eq (18) with any sufficiently large δ and any initialization has at least one limit point, and each limit point is a stationary point of Eq (17).

S1 (coercivity) Let be the nonempty feasible set and is coercive over ;

S2 (feasibility) Im(A) ⊆ Im(B), where Im(⋅) is defined as the image of a matrix;

S3 (Lipschitz sub-minimization paths)

(a) For any fixed x, there exists a Lipschitz continuous map obeying provided a unique minimizer,

(b) For i = 1, ⋯, s and fixed xi and y, there exists a Lipschitz continuous map obeying provided a unique minimizer, where xi ≔ [x1;⋯;xi−1;xi+1;⋯;xs];

S4 (objective- regularity) is lower semi-continuous or is lower semi-continuous and is restricted prox-regular;

S5 (objective- regularity) is Lipschitz differentiable with the constant .

2.4 Iteratively reweighted least squares algorithm

The IRLS algorithm [52, 53] solves the following nonconvex ℓp norm sparsity problem: (19) where , and 0 < p < 1 causes the model to be nonconvex. It has been shown that the IRLS is guaranteed to converge to a local minimum and has better theoretical and practical properties than the iteratively reweighted ℓ1 (IRL1) algorithm [52–55]. The problem in Eq (19) can then be approximated by (20) with the weight and ϵ ≪ 1 a small positive number that avoids division by zero. Given the k-th estimate , IRLS first calculates the weight , then updates z by solving the following problem: (21)

We summarize the IRLS method in Algorithm 2.

Algorithm 2 IRLS algorithm for minimizing Eq (19)

input .

initialize .

while not converged do

 1: update weight: ω(k) = λp((z(k))2 + ϵ)p/2−1;

 2: update z: ;

 3: k = k + 1;

end while
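The loop of Algorithm 2 can be sketched for the assumed separable form min_z ½‖z − z0‖² + λ‖z‖p^p. With the weight ω of step 1, the weighted least-squares problem of step 2 has the component-wise solution z = z0/(1 + ω); the exact problem form and constants are illustrative assumptions:

```python
import numpy as np

def irls_lp(z0, lam=0.5, p=0.1, eps=1e-8, n_iter=20):
    """IRLS sketch (Algorithm 2 style) for the assumed proximal problem
        min_z 0.5*||z - z0||^2 + lam*||z||_p^p,   0 < p < 1."""
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(n_iter):
        # step 1: smoothed weight, as in the update of Algorithm 2
        w = lam * p * (z ** 2 + eps) ** (p / 2.0 - 1.0)
        # step 2: closed-form minimiser of the weighted quadratic
        z = z0 / (1.0 + w)
    return z
```

With p close to 0 the weight explodes for small entries and vanishes for large ones, so small entries are pushed to zero while large entries are barely shrunk; this is the edge-preserving behaviour exploited later in the paper.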

3 Proposed model and optimization method

In this section, we first present the proposed model and then solve it in the framework of nonconvex and nonsmooth ADMM.

3.1 Model formulation

Considering the advantages of the overlapping group sparse total variation and the high-order nonconvex total variation, we investigate the Poisson noisy image restoration problem with the following regularizer: (22) where ϕTO(f) is the TVOGS term, (0 < p < 1) is the nonconvex second-order TV term, and η > 0 is a regularization parameter. Substituting the hybrid regularizer Eq (22) into Eq (3) and taking Eq (11) into account, we obtain (23)

In the model above, λ and η control the data fidelity term and the nonconvex second-order regularizer, respectively. The data fidelity term is an ill-conditioned non-quadratic log-likelihood, while the regularizer is nonconvex and nonsmooth due to the presence of the ℓp (0 < p < 1) quasi-norm. This increases the difficulty of minimizing the model. To circumvent this difficulty, we propose an efficient algorithm based on the nonconvex and nonsmooth ADMM framework in the following subsection.

3.2 Optimization

In order to apply the variable splitting scheme, we introduce three auxiliary variables to transform the original complicated minimization problem into the following equivalent linear constraint minimization problem: (24)

We establish a corresponding relation between the variables to conform with the ADMM framework Eq (16), as shown below:

  1. (1).
  2. (2).
  3. (3). , , , ,

where 01, 02, 03 denote zero matrices of size 2n² × 4n², 4n² × n², and n² × 2n², respectively, and is the identity matrix of order n². By introducing the Lagrangian multiplier and the positive penalty parameter vector , we can turn Eq (24) into the unconstrained minimization of the following augmented Lagrangian function: (25)

Then, the ADMM for solving Eq (24) iterates, updating each variable by alternately minimizing Eq (25). This iterative scheme can be split into several subproblems.

3.2.1 x1-subproblem.

According to the ADMM scheme, pulling out the terms with x1 from Eq (25) yields (26)

It is apparent that each component of x1 can be solved for separately. Through basic mathematical manipulation in the case b = 0, we obtain (27)
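Although Eq (27) is not reproduced above, the pointwise subproblem of the assumed form min_{x>0} λ(x − g log x) + (δ/2)(x − v)² reduces to the quadratic δx² + (λ − δv)x − λg = 0, whose positive root gives the update. A hedged NumPy sketch (the function name and argument pairing are our own):

```python
import numpy as np

def update_x1(v, g, lam, delta):
    """Closed-form pointwise minimiser (sketch) of
        lam*(x - g*log(x)) + (delta/2)*(x - v)^2    over x > 0,
    i.e. the positive root of delta*x^2 + (lam - delta*v)*x - lam*g = 0."""
    a = delta * v - lam
    return (a + np.sqrt(a ** 2 + 4.0 * delta * lam * g)) / (2.0 * delta)
```

The discriminant is at least a², so for g > 0 the root is strictly positive and the logarithm in the objective stays well defined.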

3.2.2 x2-subproblem.

Minimizing with respect to x2 leads to the overlapping group sparse problem (28)

It is easy to see that this minimization problem matches the framework of Eq (12), thus x2 can be solved by Algorithm 1.

3.2.3 x3-subproblem.

By omitting terms irrespective of x3, we have (29)

Since this problem involves the nonconvex ℓp norm, minimizing over x3 is not straightforward. However, after some simple modifications, we can apply the IRLS algorithm in Algorithm 2.

3.2.4 f-subproblem.

Considering Eq (25) with respect to f, we have (30) which can be solved by the following normal equation (31)

We assume that both the image and the convolution matrix are periodically extended, so the well-known fast Fourier transform (FFT) can be adopted to efficiently solve Eq (31) [4, 56, 57], which results in the optimal solution (32) with (33) where denotes the FFT operator, and * and ∘ stand for complex conjugation and element-wise multiplication, respectively. The division is computed in a component-wise fashion. Note that many subterms need to be computed only once during the iterative updates.
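The FFT solve can be sketched for a simplified normal equation with one first-order term; the actual Eq (31) carries an additional second-order term and separate penalty weights. The helper psf2otf and the difference kernels below are illustrative assumptions:

```python
import numpy as np

def psf2otf(psf, shape):
    """Pad the PSF to the image size, circularly shift its centre to the
    (0,0) pixel, and take the 2-D FFT (periodic boundary model)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                  axis=(0, 1))
    return np.fft.fft2(pad)

def solve_f(rhs, psf, mu, shape):
    """Sketch of the FFT solve for a normal equation of the form
        (H^T H + mu * grad^T grad) f = rhs
    under periodic boundary conditions."""
    K = psf2otf(psf, shape)
    D1 = psf2otf(np.array([[1.0], [-1.0]]), shape)   # vertical difference
    D2 = psf2otf(np.array([[1.0, -1.0]]), shape)     # horizontal difference
    denom = np.abs(K) ** 2 + mu * (np.abs(D1) ** 2 + np.abs(D2) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

Because every operator is a circular convolution, the whole system is diagonal in the Fourier basis and the solve costs only two FFTs per iteration, which is why the paper's f-subproblem is cheap.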

3.2.5 w-subproblem.

Finally, the updating scheme of the Lagrangian multipliers can be written as (34)

In summary, the key steps of the ADMM algorithm for solving the suggested model are described in Algorithm 3.

Algorithm 3 The ADMM algorithm for solving Eq (23)

input p, λ, η, δ, K, Nin.

initialize f(0), k = 0, w(0) = 0.

while the stopping criterion is unsatisfied do             ▹ outer loop

 1: Update according to Eq (27).

 2: Update according to Eq (28) using Algorithm 1.

 3: Update according to Eq (29) using Algorithm 2.

 4: Update f(k+1) according to Eq (32).

 5: Update according to Eq (34).

 6: k = k + 1.

end while

We now discuss the convergence of the proposed algorithm. Inspired by [51], we verify the assumptions in Lemma 1 and obtain the following convergence result for Algorithm 3.

Theorem 1. The sequence of (x1, x2, x3, f) generated by Algorithm 3 with any sufficiently large δ and any start point will converge to a stationary point of the augmented Lagrangian .

Proof. Since our model fits the framework of Eq (16), it remains only to check S1–S5 in Lemma 1. It is easy to verify that DKL(x1||g) is coercive [58], as are ϕO(x2) and . Thus, S1 holds. Ax + By = 0 implies S2. S3 holds because both A and B are full column rank matrices. DKL(x1||g) is lower semi-continuous and the ℓp quasi-norm is restricted prox-regular [51]. Furthermore, ϕO(x2) is convex and hence restricted prox-regular. It is clear that S5 holds, which completes the proof.

The convergence of Algorithm 3 can also be verified experimentally.

4 Numerical experiments

In this section, we present numerical experiments illustrating the effectiveness of the proposed method for Poisson noisy image restoration, compared with the closely related state-of-the-art methods TVOGS [4], SAHTV [5], SB-FA [38] and FT-ADMM [37]. The TVOGS method introduced the overlapping group sparse TV prior combined with a box constraint as a regularizer and solved the optimization model under the ADMM framework, but denoising was outside its scope. In the SAHTV method, Liu et al. [5] combined the first- and second-order TV priors and updated their pixel-wise weighting coefficients with local information during the consecutive estimation of the latent image. FT-ADMM, whose regularizer is the ℓ0 norm of the wavelet representation of the image, was also proposed to efficiently eliminate staircase artifacts. Fang and Yan [38] proposed combining the ℓ1 norm of the framelet representation of the latent image with the TV prior and solved the resulting model by the split Bregman method. Since their numerical experiments recommended using the SB-FA algorithm without the TV prior, we adopt SB-FA rather than SB-FATV as the competing method.

All experiments in this paper were run on Windows 10 64-bit and Matlab R2016b on a desktop computer with an Intel Core i5-4590 CPU at 3.3 GHz and 8 GB of RAM. The code for our algorithm is available at https://github.com/KSJhon/PoissonDeblur_hybrid.

To proceed with the simulation experiments, we intentionally degrade clean images to construct corrupted ones. More specifically, an original clean image is first scaled to a preset peak value MAXf, which determines the level of Poisson noise. The scaled image is then convolved with a given PSF to simulate the blurring effect and further contaminated by signal-dependent Poisson noise. In the following experiments, we set MAXf to 100, 200, 300, and 350, where a lower MAXf indicates a relatively higher noise level. For the deblurring simulations, we consider three types of blur kernels: (1) a 9 × 9 Gaussian kernel with standard deviation 1; (2) a linear motion blur with motion length 5 and an angle of 45° in the counterclockwise direction; (3) a kernel from Levin et al.’s public dataset [59]. All blurring operations are performed with the Matlab built-in function “fspecial”, and the Poisson noise is generated by “poissrnd” without any additional parameters. The test images, of various sizes, are shown in Fig 1.
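The degradation pipeline just described (scale to the peak MAXf, blur, then draw Poisson counts) can be mimicked outside Matlab. The NumPy sketch below uses FFT-based circular convolution, matching the periodic boundary model assumed by the solver; the function name and defaults are illustrative:

```python
import numpy as np

def degrade(clean, psf, max_f=200, seed=0):
    """Simulate the degradation pipeline: scale the clean image to peak
    max_f, blur it with the given PSF (circular convolution via FFT),
    then draw Poisson counts per pixel."""
    f = np.asarray(clean, dtype=float)
    f = f * (max_f / f.max())                  # scale to the preset peak
    # circular convolution with the PSF centred at the origin
    pad = np.zeros_like(f)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                  axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(pad)))
    rng = np.random.default_rng(seed)
    return rng.poisson(np.maximum(blurred, 0)).astype(float)
```

Since the Poisson variance equals its mean, a smaller max_f yields a lower signal-to-noise ratio, which is exactly why the lower MAXf settings above correspond to harder denoising problems.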

Fig 1. Test images used in our experiments.

(a): “beauty” (256 × 256). This image was republished from https://www.flickr.com/photos/90471071@N00/3201337190 under a CC BY license, with permission from Irina Gheorghita, original copyright 2009; (b): “lotus” (500 × 500). This image was republished from https://www.flickr.com/photos/robertlylebolton/7797693474 under a CC BY license, with permission from Robert Lyle Bolton, original copyright 2012; (c): “dolphin” (512 × 512). This image was republished from https://www.flickr.com/photos/grovecrest/6981622176 under a CC BY license, with permission from Simon Lewis, original copyright 2011.

https://doi.org/10.1371/journal.pone.0250260.g001

We quantitatively evaluate the performance of the proposed method and the competing methods by means of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), defined as (35) (36) where f is the original image, is the restored image, are the respective averages, are the respective variances, is the covariance of f and , and by default. Generally, the larger the PSNR and the closer the SSIM is to 1, the better the quality of the restored image. In the experiments, we terminate the iterations of Algorithm 3 when the relative error (RelErr) between two consecutive estimates falls below a predefined tolerance ε, as follows: (37)
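The PSNR of Eq (35) follows the standard definition 10 log10(peak²/MSE). A minimal sketch, using the original image's own peak by default (one common convention; the paper may fix the peak to MAXf instead):

```python
import numpy as np

def psnr(f, f_hat, peak=None):
    """PSNR in dB between the original f and the restoration f_hat."""
    f = np.asarray(f, dtype=float)
    f_hat = np.asarray(f_hat, dtype=float)
    peak = f.max() if peak is None else peak
    mse = np.mean((f - f_hat) ** 2)            # mean squared error
    return 10.0 * np.log10(peak ** 2 / mse)
```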

4.1 Selection of parameters

Since the convergence of the nonconvex ADMM and its suboptimization in Algorithm 3 depends on the values of parameters, they require careful tuning.

For the choice of p, as in the experiments reported in [40], we set it to 0.1 throughout the experiments, considering the important property of the ℓp norm that taking p sufficiently close to 0 favors preserving sharp edges in the restored image.

The group size K also plays an important role in the trade-off between global noise filtering performance and computational cost. To find the optimal K, we conduct a denoising experiment varying K while keeping the other parameters fixed, as shown in Fig 2. Clearly, K = 3 is the best choice for all noise levels, so we fix K = 3 in the following experiments. In addition, the number of inner iterations Nin also affects the MM algorithm; we simply fix Nin = 5 as indicated in [4, 29, 39, 40]. To balance accuracy and speed, we empirically set Nout = 50 and ε = 10−3. The other parameters, including the regularization parameters (λ, η) and the penalty parameter δ, are manually tuned to achieve the highest PSNR and SSIM values. In the following subsections, we compare the proposed method (named HTVp-OGS) with the state-of-the-art methods for denoising and deblurring images under Poisson noise. Unless otherwise indicated, all parameters of the competing algorithms are carefully selected near their given default values to give the best performance.

Fig 2. Evolution of PSNR values of denoised images with MAXf = 350, 300, 200 and 100 along the varying group sizes.

(a): “beauty” image; (b): “lotus” image; (c): “dolphin” image.

https://doi.org/10.1371/journal.pone.0250260.g002

4.2 Denoising

Here, we show that our model provides better results in Poisson noise removal through the comparison with the state-of-the-art methods: SB-FA [38], SAHTV [5], and FT-ADMM [37].

To evaluate the denoising performance, we set H to the identity matrix and empirically fix δ = (5 ⋅ 10−3, 3 ⋅ 10−2, 2 ⋅ 10−4)T. We observe that λ = 3 ⋅ MAXf yields satisfactory results for Eq (23). The other regularization parameter, η, is hand-tuned for the best denoising performance according to the noise level. The denoising results, in terms of PSNR and SSIM values, are shown in Table 1. From Table 1, it is clear that the PSNR and SSIM values of the images denoised by our model are higher than those of the other three methods (SB-FA, SAHTV, FT-ADMM). Fig 3 displays the denoised images and zoomed-in regions obtained by our method and the competing methods from noisy images with noise level MAXf = 100.

Fig 3. Denoised images and a zoom-in region from the Poisson noisy images with the noise level MAXf = 100.

first column: Poisson noisy images; second column: denoised images by SB-FA [38]; third column: denoised images by SAHTV [5]; fourth column: denoised images by FT-ADMM [37]; fifth column: denoised images by HTVp-OGS.

https://doi.org/10.1371/journal.pone.0250260.g003

Table 1. The PSNR and SSIM values for denoised images by different methods.

https://doi.org/10.1371/journal.pone.0250260.t001

4.3 Deblurring

In our deblurring experiments, we consider three types of blur kernels. As mentioned above, the first two are synthetic, respectively mimicking the effects of out-of-focus blur and motion blur, and the last is a blur kernel from Levin et al.’s dataset [59]. For a fair comparison with the other methods, we set the tolerance ε = 10−3 and iteration number Nout = 50 identically in all experiments. As in the denoising experiments, we fix λ = 3 ⋅ MAXf and select the penalty parameter δ as (0.01, 0.1, 0.01)T. The values of δ and λ are kept constant, and the remaining parameter η is selected by grid search to obtain high-quality restored images in the following three cases:

  1. Setting 1. Gaussian blur case: We set η = 18, 14, 6, and 2 according to the noise level MAXf = 350, 300, 200, and 100, respectively. The detailed comparison of five methods is provided in Table 2.
  2. Setting 2. Linear motion blur case: We keep δ and λ fixed and empirically set η = 8, 6, 4, 1; the detailed results are summarized in Table 3.
  3. Setting 3. Ground truth blur case: In this case, we set η = 4, 3, 2, 0.2; the detailed results are summarized in Table 4.
Table 2. The PSNR and SSIM values for the Poisson image restoration by different methods in the case of the Gaussian blur with kernel size 9 × 9 and standard deviation 1.

https://doi.org/10.1371/journal.pone.0250260.t002

Table 3. The PSNR and SSIM values for the Poisson image restoration by different methods in the case of the motion blur with length 5 and angle 45°.

https://doi.org/10.1371/journal.pone.0250260.t003

Table 4. The PSNR and SSIM values for the Poisson image restoration by different methods in the case of the ground truth blur.

https://doi.org/10.1371/journal.pone.0250260.t004

In Fig 4, we show the degraded “dolphin” images, blurred by the corresponding kernels and further corrupted by Poisson noise of level 200, together with their restored versions obtained by our method and the competing methods. Fig 5(a) depicts the RelErr curves for the sequence of intermediate estimates of the “lotus” image obtained during the HTVp-OGS iterations. In Fig 5(b), we also show that the PSNR and SSIM values increase steadily as the iterations proceed. From the results of the above three experiments, we can see that our method provides the best PSNR and SSIM values in most cases; in extreme cases, we obtain a slightly lower PSNR value than the others, but the SSIM value remains the best. It is worth noting that the transform-based methods (SB-FA and FT-ADMM) are quite time-consuming because they inherently involve complicated nonlinear operations [3], while our method works in the gradient domain and therefore has a relatively short running time.

Fig 4. Restored “dolphin” images and zoom-in regions from the degraded images with different blur kernels and the Poisson noise of level MAXf = 200.

first row: Gaussian blur; second row: motion blur; third row: ground truth blur; first column: degraded images (PSF is shown at the bottom-left corner); second column: restored images by SB-FA [38]; third column: restored images by SAHTV [5]; fourth column: restored images by TVOGS [4]; fifth column: restored images by FT-ADMM [37]; sixth column: restored images by HTVp-OGS.

https://doi.org/10.1371/journal.pone.0250260.g004

Fig 5. Plot of performance along the HTVp-OGS iterations for restoring “lotus” images blurred by different kernels and corrupted by the Poisson noise of level MAXf = 200.

(a) relative error in log scale; (b) PSNR and SSIM values.

https://doi.org/10.1371/journal.pone.0250260.g005

5 Conclusions

In this paper, we propose a new method to restore Poissonian images (including denoising and deblurring) by using a hybrid regularizer that combines the overlapping group sparse total variation with the high-order nonconvex total variation. This regularizer exploits the advantages of both terms, alleviating the staircase artifacts while preserving the original sharp edges. The model, derived from the Bayesian perspective, is solved efficiently within the nonconvex and nonsmooth ADMM optimization framework. We adopt the MM and IRLS algorithms to solve the subproblems associated with the OGS and nonconvex second-order total variation priors, respectively. Numerical experiments demonstrate that the proposed method outperforms the other related methods in terms of the PSNR and SSIM values. Adaptively determining the optimal parameters according to the image content and noise level will be a topic of future work.

Acknowledgments

The authors would like to thank Dr. Houzhang Fang and Dr. Haimiao Zhang for supplying us with the codes used in our numerical comparisons.

References

  1. Vardi Y, Shepp LA, Kaufman L. A statistical model for positron emission tomography. Journal of the American Statistical Association. 1985;80(389):8–20.
  2. Landi G, Piccolomini EL. A projected Newton-CG method for nonnegative astronomical image deblurring. Numerical Algorithms. 2008;48(4):279–300.
  3. James AP, Dasarathy BV. Medical image fusion: A survey of the state of the art. Information Fusion. 2014;19:4–19.
  4. Lv XG, Jiang L, Liu J. Deblurring Poisson noisy images by total variation with overlapping group sparsity. Applied Mathematics and Computation. 2016;289:132–148.
  5. Liu J, Huang TZ, Lv XG, Wang S. High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter. Applied Mathematical Modelling. 2017;45:516–529.
  6. Kumar A, Ahmad MO, Swamy M. A framework for image denoising using first and second order fractional overlapping group sparsity (HF-OLGS) regularizer. IEEE Access. 2019;7:26200–26217.
  7. Hebert TJ, Leahy R. Statistic-based MAP image-reconstruction from Poisson data using Gibbs priors. IEEE Transactions on Signal Processing. 1992;40(9):2290–2303.
  8. Sebastiani G, Godtliebsen F. On the use of Gibbs priors for Bayesian image restoration. Signal Processing. 1997;56(1):111–118.
  9. Benvenuto F, La Camera A, Theys C, Ferrari A, Lantéri H, Bertero M. The study of an iterative method for the reconstruction of images corrupted by Poisson and Gaussian noise. Inverse Problems. 2008;24(3):035016.
  10. Jiang L, Huang J, Lv XG, Liu J. Restoring Poissonian images by a combined first-order and second-order variation approach. Journal of Mathematics. 2013;2013:1–11.
  11. Jiang L, Huang J, Lv XG, Liu J. Alternating direction method for the high-order total variation-based Poisson noise removal problem. Numerical Algorithms. 2015;69(3):495–516.
  12. Bertero M, Lantéri H, Zanni L, et al. Iterative image reconstruction: A point of view. Mathematical Methods in Biomedical Imaging and Intensity-Modulated Radiation Therapy (IMRT). 2008;7:37–63.
  13. Lanteri H, Roche M, Aime C. Penalized maximum likelihood image restoration with positivity constraints: multiplicative algorithms. Inverse Problems. 2002;18(5):1397.
  14. Landi G, Piccolomini EL. An improved Newton projection method for nonnegative deblurring of Poisson-corrupted images with Tikhonov regularization. Numerical Algorithms. 2012;60(1):169–188.
  15. Bardsley JM, Vogel CR. A nonnegatively constrained convex programming method for image reconstruction. SIAM Journal on Scientific Computing. 2004;25(4):1326–1343.
  16. Bardsley JM, Laobeul N. Tikhonov regularized Poisson likelihood estimation: theoretical justification and a computational method. Inverse Problems in Science and Engineering. 2008;16(2):199–215.
  17. Bonettini S, Landi G, Piccolomini EL, Zanni L. Scaling techniques for gradient projection-type methods in astronomical image deblurring. International Journal of Computer Mathematics. 2013;90(1):9–29.
  18. Chen K. Introduction to variational image-processing models and applications. International Journal of Computer Mathematics. 2013;90:1–8.
  19. Bardsley JM, Luttman A. Total variation-penalized Poisson likelihood estimation for ill-posed problems. Advances in Computational Mathematics. 2009;31(1-3):35.
  20. Landi G, Piccolomini EL. An efficient method for nonnegatively constrained Total Variation-based denoising of medical images corrupted by Poisson noise. Computerized Medical Imaging and Graphics. 2012;36(1):38–46. pmid:21821394
  21. Tao S, Dong W, Xu Z, Tang Z. Fast total variation deconvolution for blurred image contaminated by Poisson noise. Journal of Visual Communication and Image Representation. 2016;38:582–594.
  22. Bonettini S, Ruggiero V. An alternating extragradient method for total variation-based image restoration from Poisson data. Inverse Problems. 2011;27(9):095001.
  23. Bonettini S, Ruggiero V. On the convergence of primal–dual hybrid gradient algorithms for total variation image restoration. Journal of Mathematical Imaging and Vision. 2012;44(3):236–253.
  24. Wen Y, Chan RH, Zeng T. Primal-dual algorithms for total variation based image restoration under Poisson noise. Science China Mathematics. 2016;59(1):141–160.
  25. Setzer S, Steidl G, Teuber T. Deblurring Poissonian images by split Bregman techniques. Journal of Visual Communication and Image Representation. 2010;21(3):193–199.
  26. Chang H, Lou Y, Duan Y, Marchesini S. Total variation–based phase retrieval for Poisson noise removal. SIAM Journal on Imaging Sciences. 2018;11(1):24–55.
  27. Zhou W, Li Q. Poisson noise removal scheme based on fourth-order PDE by alternating minimization algorithm. Abstract and Applied Analysis. 2012;2012 (Special Issue):1–14.
  28. Le T, Chartrand R, Asaki TJ. A variational approach to reconstructing images corrupted by Poisson noise. Journal of Mathematical Imaging and Vision. 2007;27(3):257–263.
  29. Liu G, Huang TZ, Liu J, Lv XG. Total variation with overlapping group sparsity for image deblurring under impulse noise. PloS one. 2015;10(4):e0122562. pmid:25874860
  30. Zhang J, Ma M, Wu Z, Deng C. High-order total bounded variation model and its fast algorithm for Poissonian image restoration. Mathematical Problems in Engineering. 2019;2019:1–11.
  31. Liu X, Huang L. Total bounded variation-based Poissonian images recovery by split Bregman iteration. Mathematical Methods in the Applied Sciences. 2012;35(5):520–529.
  32. Wang XD, Feng XC, Wang WW, Zhang WJ. Iterative reweighted total generalized variation based Poisson noise removal model. Applied Mathematics and Computation. 2013;223:264–277.
  33. Zhang Z, Zhang J, Wei Z, Xiao L. Cartoon-texture composite regularization based non-blind deblurring method for partly-textured blurred images with Poisson noise. Signal Processing. 2015;116:127–140.
  34. Ma L, Moisan L, Yu J, Zeng T. A dictionary learning approach for Poisson image deblurring. IEEE Transactions on Medical Imaging. 2013;32(7):1277–1289. pmid:23549888
  35. Fang H, Yan L, Liu H, Chang Y. Blind Poissonian images deconvolution with framelet regularization. Optics Letters. 2013;38(4):389–391. pmid:23455078
  36. Shi Y, Song J, Hua X. Poissonian image deblurring method by non-local total variation and framelet regularization constraint. Computers & Electrical Engineering. 2017;62:319–329.
  37. Zhang H, Dong Y, Fan Q. Wavelet frame based Poisson noise removal and image deblurring. Signal Processing. 2017;137:363–372.
  38. Fang H, Yan L. Poissonian image deconvolution with analysis sparsity priors. Journal of Electronic Imaging. 2013;22(2):023033.
  39. Liu J, Huang TZ, Liu G, Wang S, Lv XG. Total variation with overlapping group sparsity for speckle noise reduction. Neurocomputing. 2016;216:502–513.
  40. Adam T, Paramesran R. Image denoising using combined higher order non-convex total variation with overlapping group sparsity. Multidimensional Systems and Signal Processing. 2019;30(1):503–527.
  41. Ding M, Huang TZ, Wang S, Mei JJ, Zhao XL. Total variation with overlapping group sparsity for deblurring images under Cauchy noise. Applied Mathematics and Computation. 2019;341:128–147.
  42. Figueiredo MA, Bioucas-Dias JM. Restoration of Poissonian images using alternating direction optimization. IEEE Transactions on Image Processing. 2010;19(12):3133–3145. pmid:20833604
  43. Gong P, Zhang C, Lu Z, Huang J, Ye J. A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning; 2013. p. 37–45.
  44. Chen PY, Selesnick IW. Group-sparse signal denoising: non-convex regularization, convex optimization. IEEE Transactions on Signal Processing. 2014;62(13):3464–3478.
  45. Adam T, Paramesran R. Hybrid non-convex second-order total variation with applications to non-blind image deblurring. Signal, Image and Video Processing. 2020;14(1):115–123.
  46. Liu J, Huang TZ, Selesnick IW, Lv XG, Chen PY. Image restoration using total variation with overlapping group sparsity. Information Sciences. 2015;295:232–246.
  47. Chan TFC, Glowinski R. Finite element approximation and iterative solution of a class of mildly non-linear elliptic equations. Computer Science Department, Stanford University; 1978.
  48. Xiu X, Liu W, Li L, Kong L. Alternating direction method of multipliers for nonconvex fused regression problems. Computational Statistics & Data Analysis. 2019;136:59–71.
  49. Lu C, Feng J, Yan S, Lin Z. A unified alternating direction method of multipliers by majorization minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018;40(3):527–541. pmid:28368818
  50. Deng W, Yin W. On the global and linear convergence of the generalized alternating direction method of multipliers. Journal of Scientific Computing. 2016;66(3):889–916.
  51. Wang Y, Yin W, Zeng J. Global convergence of ADMM in nonconvex nonsmooth optimization. Journal of Scientific Computing. 2019;78(1):29–63.
  52. Lyu Q, Lin Z, She Y, Zhang C. A comparison of typical ℓp minimization algorithms. Neurocomputing. 2013;119:413–424.
  53. Lai MJ, Xu Y, Yin W. Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization. SIAM Journal on Numerical Analysis. 2013;51(2):927–957.
  54. Foucart S, Lai MJ. Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1. Applied and Computational Harmonic Analysis. 2009;26(3):395–407.
  55. Chartrand R, Yin W. Iteratively reweighted algorithms for compressive sensing. In: IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE; 2008. p. 3869–3872.
  56. Lv XG, Li F. An iterative decoupled method with weighted nuclear norm minimization for image restoration. International Journal of Computer Mathematics. 2020;97(3):602–623.
  57. Figueiredo MA, Bioucas-Dias JM, Nowak RD. Majorization–minimization algorithms for wavelet-based image restoration. IEEE Transactions on Image Processing. 2007;16(12):2980–2991. pmid:18092597
  58. Woo H. A characterization of the domain of Beta-Divergence and its connection to Bregman variational model. Entropy. 2017;19(9):482.
  59. Levin A, Weiss Y, Durand F, Freeman WT. Understanding and evaluating blind deconvolution algorithms. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2009. p. 1964–1971.