Poisson noisy image restoration via overlapping group sparse and nonconvex second-order total variation priors

The restoration of Poisson noisy images is an essential task in many imaging applications, owing to the uncertainty in the number of discrete particles incident on the image sensor. In this paper, we utilize a hybrid regularizer for Poisson noisy image restoration. The proposed regularizer, which combines the overlapping group sparse (OGS) total variation with the high-order nonconvex total variation, can alleviate staircase artifacts while preserving the original sharp edges. We design an efficient minimization algorithm for the proposed model within the framework of the alternating direction method of multipliers (ADMM). Since the objective function is the sum of a non-quadratic log-likelihood and a nonconvex nondifferentiable regularizer, we solve the resulting intractable subproblems by the majorization-minimization (MM) method and the iteratively reweighted least squares (IRLS) algorithm, respectively. Numerical experiments show the efficiency of the proposed method for Poissonian image restoration, including denoising and deblurring.


Introduction
In many real applications, the measurement of light during image recording inevitably introduces uncertainty in the number of particles striking the image sensor. In other words, the finite number of electrons or photons carrying energy in an image sensor causes statistical fluctuations, which are usually modeled by the Poisson distribution [1][2][3]. As another degradation factor, image formation may involve undesirable blurring such as motion or out-of-focus blur. By rewriting the images of concern into column-major vectorized form, we can regard the observed image g ∈ ℝ^{n²} as a realization of a Poisson random vector with expected value Hf + b, where H is an n² × n² convolution matrix corresponding to the point spread function (PSF) that models the blur, f ∈ ℝ^{n²} is the original image, and b ∈ ℝ^{n²} is a nonnegative constant background [4][5][6].
Poissonian image restoration calls for applying an inverse procedure to approximate f from an observation g degraded by blur and Poisson noise. A general option to tackle this problem is the maximum a posteriori (MAP) estimation from the Bayesian perspective. Taking the Poisson statistics of the noise into account, we can write the conditional distribution of the observed data g as

p(g|f) = ∏_{i,j=1}^{n} e^{−(Hf+b)_{i,j}} (Hf+b)_{i,j}^{g_{i,j}} / g_{i,j}!,

where (·)_{i,j} indicates the vector element corresponding to position (i, j) hereafter. On the premise that we adopt a Gibbs prior [7][8][9] for f, the use of Bayes' rule and Stirling's approximation leads to the following minimization problem [10,11]:

min_f λ D_KL(Hf‖g) + φ(f),    (3)

where λ > 0 is a regularization parameter, φ(f) is a given nonnegative function (often called the regularizer), and D_KL(Hf‖g) denotes the generalized Kullback-Leibler (KL) divergence of Hf from g:

D_KL(Hf‖g) = Σ_{i,j=1}^{n} [ (Hf+b)_{i,j} − g_{i,j} + g_{i,j} log( g_{i,j} / (Hf+b)_{i,j} ) ].

Since the efficiency of the restoration model hinges on the choice of image regularizer, numerous authors have proposed different regularization methods. Tikhonov regularization of the form φ(f) = ‖Γf‖₂² (usually, Γ is an identity or difference matrix) is probably one of the most classical. In the literature, several methods have been proposed to efficiently solve Eq (3) with the Tikhonov regularizer, including the scaled gradient projection method [12], the split-gradient method [13], the projected Newton-conjugate gradient (PNCG) method [2], the quasi-Newton projection method [14], the hybrid gradient projection-reduced Newton method [15,16] and the scaled gradient projection-type method [17]. Although the KL divergence and the Tikhonov regularizer are well known to be convex, and a unique solution to Eq (3) is guaranteed at inexpensive computational cost, this regularizer tends to oversmooth the important details in restored images.
Another classical method, the total variation (TV) regularizer φ(f) = ‖f‖_TV (see Section 2), has been widely accepted since noisy signals have larger TV than the original signal [18]. Bardsley et al. [19] proved that minimizing the Poisson likelihood function in conjunction with TV regularization is well-posed, and they also verified its convergence by a nonnegatively constrained, projected quasi-Newton minimization algorithm. Landi et al. [20] also adopted the TV regularizer for denoising medical images collected by photon-counting devices and extended the PNCG method introduced in [2]. Tao et al. [21] appended a nonnegativity constraint and used the half-quadratic splitting method to solve the related minimization problem. In addition to the methods mentioned above, other authors employed effective optimization techniques for solving the TV regularized Poisson deblurring model, including the alternating extra-gradient method [22], the primal-dual method [23,24] and the alternating direction method of multipliers (ADMM) [25,26]. It is well known that TV regularization methods can preserve fine details such as sharp edges, but they often exhibit false jump discontinuities, causing spurious staircase artifacts in smooth regions of the restored images. This is because TV regularization tends to transform smooth regions of the solution into piecewise constant ones while solving the minimization problem.
One possible remedy for this staircasing behavior is to introduce high-order TV, which penalizes jumps more. Zhou et al. [27] selected the second-order TV φ(f) = ‖f‖_HTV (see Section 2) as a regularizer and solved the model using an alternating minimization scheme. Their numerical experiments indicated the advantage of the second-order TV in avoiding staircase artifacts, compared with the classical TV regularization [28]. However, high-order TV usually over-smooths piecewise-smooth signals [29], so hybrid regularizations combining high-order TV with other regularizers have also been considered. Among the hybrid models, Zhang et al. [30] obtained better results than the model with a strongly convex term ‖f‖₂² proposed in [31], by replacing the first-order TV with a second-order TV. Besides, Jiang et al. [11] combined the first-order and second-order TV priors to restore images contaminated by Poisson noise and solved the minimization model by the ADMM. To further improve the restoration result, Liu et al. [5] studied a spatially adapted regularization parameter updating scheme. As an adaptive balancing scheme between the first and second derivatives, Wang et al. [32] established a Poisson noise removal framework for the denoising case based on an iteratively reweighted total generalized variation (TGV) model and the EM algorithm. Zhang et al. [33] combined fractional-order TV with non-local TV to alleviate the staircase artifacts in the cartoon component as well as to preserve details in the texture component. Ma et al. [34] proposed a hybrid regularizer containing a patch-based sparsity promoting prior over a learned dictionary and a pixel-based total variation prior, but it requires additional strategies to reduce the high computational cost.
Considering that the tight framelet framework can facilitate the sparse representation of images [35], Shi et al. [36] combined framelet regularization with non-local TV and a nonnegativity constraint in the non-blind stage of blind Poisson deblurring. Zhang et al. [37] proposed a nonconvex and noncontinuous model with the ℓ₀ norm of the tight wavelet representation as a regularizer. Fang and Yan [38] combined the ℓ₁ norm of framelet coefficients with TV for Poissonian image deconvolution to reduce the artifacts yielded by using TV regularization alone. In addition to the transform-domain sparsity mentioned above, overlapping group sparsity, which describes the natural tendency of large values to arise near other large values rather than in isolation, has attracted wide attention in the field of image restoration due to its remarkable ability to exploit the structural information of the natural image gradient [4,[39][40][41]. In particular, for Poisson noisy restoration, Lv et al. [4] focused on regularization by the total variation with overlapping group sparsity (TVOGS) and showed that the solution obtained by the ADMM framework is superior to the first-order TV regularized methods [25,42] and the high-order regularized methods [27].
To sum up, the high-order TV and the overlapping group sparse prior are the most feasible options for alleviating staircase artifacts. Nonetheless, the high-order TV, due to the over-penalizing tendency of the ℓ₁ norm [43], often causes over-smoothing at the edges, while overlapping group sparsity tends to smooth out blockiness in the restored images more globally. Adam and Paramesran, motivated by the work of Chen et al. [44], proposed a hybrid regularization model that combined the overlapping group sparse total variation with the high-order nonconvex TV to denoise images corrupted by Gaussian noise [40]. Recently, they extended it to non-blind deblurring under Gaussian noise [45]. In this paper, we present an extension of Adam's regularizer to the problem of restoring Poisson noisy images. This regularizer takes the advantages of both the high-order nonconvex regularizer and the overlapping group sparsity regularizer, i.e., it can simultaneously promote pixel-level and structured sparsity of the natural image in the gradient domain. Therefore, we expect that it can not only effectively reduce staircase artifacts but also preserve sharp edges well in the restored image. However, its optimization is more challenging than Gaussian deblurring due to the ill-conditioned non-quadratic data fidelity term. We employ the alternating direction method of multipliers to solve the derived minimization problem. In particular, during the ADMM iterations, we solve two intractable subproblems: one arises from the overlapping group sparse prior and is solved by the majorization-minimization (MM) method with a well-known quadratic majorizer; the other arises from the nonconvex ℓ_p (0 < p < 1) quasi-norm and is solved by the iteratively reweighted least squares (IRLS) algorithm, motivated by the fact that IRLS is guaranteed to converge to a local minimum and offers better theoretical and practical properties than the iteratively reweighted ℓ₁ (IRL1) algorithm [52][53][54][55].
The rest of this paper is organized as follows. Section 2 introduces the notations used to formulate the proposed method and briefly reviews the essential concepts and tools, including the overlapping group sparsity prior, the ADMM algorithm, the MM method, and the IRLS algorithm. Section 3 establishes a novel Poissonian image deblurring model comprising the generalized Kullback-Leibler (KL) divergence as the data fidelity term and combined first-order and second-order total variation priors with overlapping group sparsity as the regularization term; we then derive an effective algorithm for minimizing the nonconvex and nonsmooth objective function under the ADMM optimization framework. Section 4 demonstrates the superiority of our method via numerical experiments, followed by an analysis of the parameter settings and convergence behavior. We conclude this paper in Section 5.

Notations
It is necessary to introduce some notations used throughout this paper. Assuming that all images under consideration are gray-scale images of size n × n, we lexicographically stack the columns of an image matrix into a vector; for example, the (i, j)th pixel of image f becomes the ((j − 1)n + i)th entry of the vector, written as f_{i,j}. Under periodic boundary conditions for the image f, we introduce the discrete forward difference operator D⁺ = (D₁⁺, D₂⁺): ℝ^{n×n} → (ℝ^{n×n}, ℝ^{n×n}), whose sub-operators are defined by

(D₁⁺ f)_{i,j} = f_{i+1,j} − f_{i,j},  (D₂⁺ f)_{i,j} = f_{i,j+1} − f_{i,j},

with indices taken modulo n, and similarly the backward difference operator D⁻ = (D₁⁻, D₂⁻): ℝ^{n×n} → (ℝ^{n×n}, ℝ^{n×n}), whose sub-operators are

(D₁⁻ f)_{i,j} = f_{i,j} − f_{i−1,j},  (D₂⁻ f)_{i,j} = f_{i,j} − f_{i,j−1}.

Second-order difference operators can be defined recursively from the first-order ones, e.g., D₁⁻D₁⁺ f and D₂⁺D₁⁺ f, and the other second-order difference operators can be defined similarly. Based on the above definitions, we denote the first- and second-order TV of f as

‖f‖_TV ≔ Σ_{i,j=1}^{n} ‖(∇f)_{i,j}‖₂  and  ‖f‖_HTV ≔ Σ_{i,j=1}^{n} ‖(∇²f)_{i,j}‖₂,

where ‖·‖₂ denotes the Euclidean norm, (∇f)_{i,j} collects the two first-order differences at pixel (i, j), and (∇²f)_{i,j} collects the four second-order differences.
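As a concrete illustration, the periodic-boundary difference operators above can be sketched in a few lines of NumPy. This is a sketch for illustration only; the function names are ours and do not come from the authors' released code.

```python
import numpy as np

def forward_diff(f):
    """Forward differences with periodic boundary conditions:
    (D1+ f)_{i,j} = f_{i+1,j} - f_{i,j}, (D2+ f)_{i,j} = f_{i,j+1} - f_{i,j}."""
    return np.roll(f, -1, axis=0) - f, np.roll(f, -1, axis=1) - f

def backward_diff(f):
    """Backward differences (negative adjoints of the forward ones)."""
    return f - np.roll(f, 1, axis=0), f - np.roll(f, 1, axis=1)

def tv_norms(f):
    """Isotropic first- and second-order TV of an image f."""
    d1, d2 = forward_diff(f)
    tv = np.sum(np.sqrt(d1**2 + d2**2))
    # Second-order differences built recursively from the first-order ones.
    d11 = backward_diff(d1)[0]   # D1- D1+ f
    d22 = backward_diff(d2)[1]   # D2- D2+ f
    d12 = forward_diff(d1)[1]    # D2+ D1+ f
    d21 = forward_diff(d2)[0]    # D1+ D2+ f
    htv = np.sum(np.sqrt(d11**2 + d22**2 + d12**2 + d21**2))
    return tv, htv
```

Under periodic boundaries the backward operator satisfies the adjoint identity ⟨D₁⁺f, g⟩ = −⟨f, D₁⁻g⟩, which is what makes the FFT-based solvers later in the paper applicable.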

TVOGS and MM algorithm
To describe the structural sparsity of the image gradient, we define a K × K pixel-group ṽ_{(i,j),K} of v around position (i, j) by

(ṽ_{(i,j),K})_{s,t} = v_{i−m₁+s, j−m₁+t},  s, t = 0, 1, …, K − 1,

where m₁ = ⌊(K−1)/2⌋ and ⌊·⌋ denotes the floor function, i.e., it maps any real number to the nearest integer less than or equal to it. Let v_{(i,j),K} be the column-major vectorized form of ṽ_{(i,j),K}, i.e., v_{(i,j),K} = ṽ_{(i,j),K}(:). Then, as in Liu et al.'s work [46], we can write the TVOGS regularizer φ_TO(f) as

φ_TO(f) = φ_O(∇₁f) + φ_O(∇₂f),

where ∇₁f ≔ D₁⁺f and ∇₂f ≔ D₂⁺f denote the horizontal and vertical gradients of f, respectively, and φ_O(v) = Σ_{i,j=1}^{n} ‖v_{(i,j),K}‖₂ is the K-group OGS function of v ∈ ℝ^{n²}.

Next, we review a minimization problem of the form

min_u Q(u) ≔ ½‖u − v‖₂² + λ φ_O(u).    (12)

At the current iterate u^{(k)}, the MM method replaces Eq (12) by the quadratic majorized problem

min_u ½‖u − v‖₂² + (λ/2)‖Λ(u^{(k)}) u‖₂² + const,    (13)

where Λ(u) is a diagonal matrix containing along its diagonal the entries

[Λ(u)]_{l,l} = ( Σ_{(i,j): (r,t) ∈ group(i,j)} ‖u_{(i,j),K}‖₂^{−1} )^{1/2},  l = (r − 1)n + t,  r, t = 1, 2, …, n,

i.e., the sum of the inverse Euclidean norms of all K × K groups containing pixel (r, t). Eq (13) has the closed-form solution

u^{(k+1)} = (I_{n²} + λ Λ(u^{(k)})ᵀ Λ(u^{(k)}))^{−1} v,

which is the input for the next MM iteration. Since Λ is diagonal, inverting the matrix I_{n²} + λΛ(u^{(k)})ᵀΛ(u^{(k)}) reduces to simple component-wise calculations. In summary, we obtain the following algorithm for Eq (12).

Algorithm 1 MM algorithm for minimizing Eq (12)
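To make the MM step concrete, the following is a NumPy sketch of one run of Algorithm 1, assuming K × K groups centered at each pixel and periodic boundaries. The helper name `ogs_mm` and the use of `scipy.ndimage.uniform_filter` to accumulate the overlapping group sums are our own choices, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ogs_mm(v, lam, K=3, n_inner=5, eps=1e-12):
    """MM iterations for min_u 0.5*||u - v||^2 + lam * phi_O(u),
    where phi_O sums the l2 norms of all overlapping K x K groups.

    Each iteration solves the quadratic majorizer in closed form:
    u = v / (1 + lam * W), with W_l the sum of inverse group norms over
    every group containing pixel l (a diagonal inverse, hence component-wise).
    """
    u = v.copy()
    for _ in range(n_inner):
        # ||group||_2 of the K x K group around each pixel ('wrap' = periodic).
        group_norm = np.sqrt(uniform_filter(u**2, size=K, mode='wrap') * K**2)
        # W = sum, over the K^2 groups containing each pixel, of 1/||group||_2.
        W = uniform_filter(1.0 / (group_norm + eps), size=K, mode='wrap') * K**2
        u = v / (1.0 + lam * W)
    return u
```

Because each update only rescales `v` by a factor in (0, 1], the iterates shrink the input component-wise, and the MM construction guarantees the objective of Eq (12) is non-increasing across iterations (up to the small `eps` smoothing).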

Alternating direction method of multipliers
The standard form of the ADMM [47] is designed to tackle distributed convex optimization problems with linear equality constraints, and many variants have been developed [48][49][50]. Recently, Wang et al. [51] extended it to problems with a nonconvex and nonsmooth multi-block objective, of the form

min_{x₁,…,x_s, y} F₁(x₁) + ⋯ + F_s(x_s) + G(y)  s.t.  Σ_{i=1}^{s} A_i x_i + By = 0.    (16)

In general, each F_i can be nonsmooth and nonconvex, and G can be nonconvex. By introducing a Lagrangian multiplier w ∈ ℝ^M and a penalty parameter δ > 0 for the linear constraint Σ_{i=1}^{s} A_i x_i + By = 0, we obtain the augmented Lagrangian in scaled form,

L_δ(x₁,…,x_s, y, w) = Σ_{i=1}^{s} F_i(x_i) + G(y) + (δ/2)‖Σ_{i=1}^{s} A_i x_i + By + w‖₂² − (δ/2)‖w‖₂².    (17)

The ADMM algorithm proceeds to alternately update each variable (x^{(k+1)}, y^{(k+1)}) until it reaches a stationary point of Eq (17) or meets a stopping criterion. This yields the iterative scheme

x_i^{(k+1)} = argmin_{x_i} L_δ(x₁^{(k+1)},…,x_{i−1}^{(k+1)}, x_i, x_{i+1}^{(k)},…,x_s^{(k)}, y^{(k)}, w^{(k)}),  i = 1,…,s,
y^{(k+1)} = argmin_y L_δ(x₁^{(k+1)},…,x_s^{(k+1)}, y, w^{(k)}),
w^{(k+1)} = w^{(k)} + Σ_{i=1}^{s} A_i x_i^{(k+1)} + By^{(k+1)}.    (18)

The following lemma establishes the convergence result of ADMM with the nonconvex and nonsmooth objective [51].
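The scheme Eq (18) can be illustrated on a deliberately simple convex instance, min_{x,y} ½‖x − a‖₂² + μ‖y‖₁ subject to x − y = 0, whose two block updates both have closed forms. This toy sketch (our own, unrelated to the imaging model of this paper) exists only to show the alternating minimization plus multiplier ascent pattern in scaled form:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (component-wise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_toy(a, mu=1.0, delta=1.0, n_iter=200):
    """Scaled-form ADMM for min 0.5*||x - a||^2 + mu*||y||_1  s.t.  x - y = 0."""
    x = a.copy(); y = np.zeros_like(a); w = np.zeros_like(a)
    for _ in range(n_iter):
        # x-update: minimize 0.5*||x - a||^2 + (delta/2)*||x - y + w||^2
        x = (a + delta * (y - w)) / (1.0 + delta)
        # y-update: minimize mu*||y||_1 + (delta/2)*||x - y + w||^2
        y = soft_threshold(x + w, mu / delta)
        # multiplier (dual) update on the scaled variable
        w = w + x - y
    return x, y
```

For this separable problem the limit is the soft-thresholding of a, i.e., y → S(a, μ); in the nonconvex setting treated by Wang et al. [51] only convergence to a stationary point can be guaranteed, under the assumptions listed in the lemma below.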
Lemma 1. Suppose that the following assumptions S1-S5 hold. Then the sequence generated by Eq (18) with any sufficiently large δ and any initialization has at least one limit point, and each limit point is a stationary point of Eq (17).
S1 (coercivity) Let D = {(x, y) ∈ ℝ^{N+L} : Ax + By = 0} be the nonempty feasible set; F(x) + G(y) is coercive over D;
S2 (feasibility) Im(A) ⊆ Im(B);
S3 (Lipschitz sub-minimization paths) the minimizer maps of the subproblems are Lipschitz continuous, which holds in particular when A and B have full column rank;
S4 (objective-F regularity) F is lower semi-continuous, or F₁ is lower semi-continuous and F_i, i = 2, …, s, is restricted prox-regular;
S5 (objective-G regularity) G is Lipschitz differentiable with constant L_∇G > 0.

Proposed model and optimization method
In this section, we first present the proposed model and then solve it in the framework of nonconvex and nonsmooth ADMM.

Model formulation
Considering the advantages of the overlapping group sparse total variation and the high-order nonconvex total variation, we investigate the Poisson noisy image restoration problem with the following regularizer:

φ(f) = φ_TO(f) + η φ_HTVp(f),    (22)

where φ_TO(f) is the TVOGS term, φ_HTVp(f) ≔ ‖∇²f‖_p^p is the nonconvex second-order TV term, and η > 0 is a regularization parameter. Substituting the hybrid regularizer Eq (22) into Eq (3), and taking Eq (11) into account, we obtain

min_f λ D_KL(Hf‖g) + φ_O(∇₁f) + φ_O(∇₂f) + η‖∇²f‖_p^p.    (23)

In the model above, λ and η control the data fidelity term and the nonconvex second-order regularizer, respectively. The data fidelity term is an ill-conditioned non-quadratic log-likelihood, while the regularizer is nonconvex and nonsmooth due to the presence of the ℓ_p (0 < p < 1) quasi-norm. This increases the difficulty of minimizing the model. To circumvent this difficulty, we propose an efficient algorithm based on the framework of nonconvex and nonsmooth ADMM in the following subsection.

Optimization
In order to apply the variable splitting scheme, we introduce three auxiliary variables x₁ = Hf + b, x₂ = ∇f = [∇₁f; ∇₂f] and x₃ = ∇²f, which transform the original complicated minimization into the following equivalent linearly constrained problem:

min_{x₁,x₂,x₃,f} λ D_KL(x₁‖g) + φ_O(x₂) + η‖x₃‖_p^p  s.t.  x₁ = Hf + b,  x₂ = ∇f,  x₃ = ∇²f.    (24)

We establish a correspondence between these variables and the ADMM framework Eq (16) by stacking the three constraints into block form, where 0₁, 0₂, 0₃ denote zero matrices of size 2n² × 4n², 4n² × n², n² × 2n², respectively, and I_{n²} is the identity matrix of order n². By introducing the Lagrangian multiplier w = [w₁; w₂; w₃] ∈ ℝ^{7n²} (here w₁ ∈ ℝ^{n²}, w₂ ∈ ℝ^{2n²}, w₃ ∈ ℝ^{4n²}) and the positive penalty parameter vector δ = (δ₁, δ₂, δ₃)ᵀ, we can turn Eq (24) into the unconstrained minimization of the following augmented Lagrangian function:

L_δ(x₁, x₂, x₃, f, w) = λ D_KL(x₁‖g) + φ_O(x₂) + η‖x₃‖_p^p
  + (δ₁/2)‖Hf + b − x₁ + w₁‖₂² + (δ₂/2)‖∇f − x₂ + w₂‖₂² + (δ₃/2)‖∇²f − x₃ + w₃‖₂²
  − (δ₁/2)‖w₁‖₂² − (δ₂/2)‖w₂‖₂² − (δ₃/2)‖w₃‖₂².    (25)

Then, the ADMM for solving Eq (24) iteratively updates each variable by minimizing Eq (25) alternately. This iterative scheme splits into the following subproblems.

x 1 -subproblem.
According to the ADMM scheme, pulling out the terms involving x₁ from Eq (25) yields

x₁^{(k+1)} = argmin_{x₁} λ D_KL(x₁‖g) + (δ₁/2)‖Hf^{(k)} + b − x₁ + w₁^{(k)}‖₂².

It is apparent that each component of x₁ can be solved separately. Setting the component-wise derivative to zero gives a quadratic equation, and basic manipulations in the case b = 0 yield the nonnegative root

(x₁^{(k+1)})_{i,j} = ½ [ c_{i,j} − λ/δ₁ + √( (c_{i,j} − λ/δ₁)² + 4λ g_{i,j}/δ₁ ) ],  where c = Hf^{(k)} + w₁^{(k)}.
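A NumPy sketch of this component-wise update follows; it assumes b = 0 as in the derivation above, and the name `x1_update` is ours. The formula is simply the positive root of the quadratic stationarity condition δ₁x² + (λ − δ₁c)x − λg = 0.

```python
import numpy as np

def x1_update(c, g, lam, delta):
    """Component-wise minimizer of lam*(x - g*log(x)) + (delta/2)*(x - c)^2,
    i.e. the positive root of delta*x^2 + (lam - delta*c)*x - lam*g = 0.
    Here c plays the role of (H f^(k) + w_1^(k)) and b = 0 is assumed.
    """
    t = c - lam / delta
    return 0.5 * (t + np.sqrt(t**2 + 4.0 * lam * g / delta))
```

Note that for g > 0 the discriminant is strictly larger than t², so the update is always strictly positive, which keeps the KL term well defined at the next iteration.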

x 2 -subproblem.
Minimizing L_δ(x₁, x₂, x₃, f, w) with respect to x₂ leads to the overlapping group sparse problem

x₂^{(k+1)} = argmin_{x₂} φ_O(x₂) + (δ₂/2)‖∇f^{(k)} − x₂ + w₂^{(k)}‖₂².

It is easy to see that this minimization matches the framework of Eq (12) (with λ = 1/δ₂), thus x₂ can be solved by Algorithm 1.

x 3 -subproblem.
By omitting terms not involving x₃, we have

x₃^{(k+1)} = argmin_{x₃} η‖x₃‖_p^p + (δ₃/2)‖∇²f^{(k)} − x₃ + w₃^{(k)}‖₂².

Since this problem involves the nonconvex ℓ_p quasi-norm, minimizing over x₃ is not straightforward. However, the problem is separable across components, and after some simple modifications we can apply the IRLS algorithm, as described in Algorithm 2.
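Algorithm 2 is not reproduced here; as an illustrative stand-in, the following sketch runs IRLS sweeps on the separable scalar version of this subproblem, using the standard quadratic majorization of |x|^p (the smoothing constant `eps` and the function name `irls_lp` are our additions, the former to avoid division by zero at the nonsmooth origin):

```python
import numpy as np

def irls_lp(c, eta, delta, p=0.1, n_iter=50, eps=1e-10):
    """IRLS for the separable problem min_x (delta/2)*(x - c)^2 + eta*|x|^p,
    0 < p < 1, applied component-wise to an array c.

    Each sweep majorizes |x|^p at the current iterate x_t by the quadratic
    (p/2)*|x_t|^(p-2)*x^2 + const, and solves the resulting least squares:
        x_{t+1} = delta*c / (delta + eta*p*|x_t|^(p-2)).
    """
    x = c.astype(float).copy()
    for _ in range(n_iter):
        w = (np.abs(x) + eps) ** (p - 2.0)  # smoothed reweighting
        x = delta * c / (delta + eta * p * w)
    return x
```

The behavior matches the motivation for the ℓ_p prior given earlier: large components (edges) are barely shrunk, while small components are driven essentially to zero, which is exactly the edge-preserving thresholding effect that p close to 0 promotes.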

f-subproblem.
Minimizing the augmented Lagrangian Eq (25) with respect to f, we have

f^{(k+1)} = argmin_f (δ₁/2)‖Hf + b − x₁^{(k+1)} + w₁^{(k)}‖₂² + (δ₂/2)‖∇f − x₂^{(k+1)} + w₂^{(k)}‖₂² + (δ₃/2)‖∇²f − x₃^{(k+1)} + w₃^{(k)}‖₂²,    (30)

which can be solved via the following normal equation:

(δ₁HᵀH + δ₂∇ᵀ∇ + δ₃(∇²)ᵀ∇²) f = δ₁Hᵀ(x₁^{(k+1)} − b − w₁^{(k)}) + δ₂∇ᵀ(x₂^{(k+1)} − w₂^{(k)}) + δ₃(∇²)ᵀ(x₃^{(k+1)} − w₃^{(k)}).    (31)

We assume that both the image and the convolution matrix are periodically extended, so the well-known fast Fourier transform (FFT) can be adopted to solve Eq (31) efficiently [4,56,57], which results in the optimal solution

f^{(k+1)} = F^{−1}[ F(rhs) / ( δ₁ F(H)* ∘ F(H) + δ₂ F(∇)* ∘ F(∇) + δ₃ F(∇²)* ∘ F(∇²) ) ],

where F denotes the FFT operator, (·)* and ∘ stand for complex conjugation and element-wise multiplication, respectively, rhs denotes the right-hand side of Eq (31), and the division is computed component-wise. Note that many subterms need to be computed only once during the iterative updates.
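For illustration, here is a NumPy sketch of such an FFT-diagonalized solve, restricted to a simplified two-term variant with first-order differences only (the full model adds the second-order block in the same way). The `psf2otf`-style padding helper mirrors a standard construction under periodic boundary conditions; all names are ours.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small PSF to `shape`, circularly shift its center to the
    (0, 0) corner, and take the 2-D FFT (the standard construction under
    periodic boundary conditions)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=axis)
    return np.fft.fft2(pad)

def f_update(rhs_img, psf, d1, d2):
    """Solve (d1*H^T H + d2*(D1^T D1 + D2^T D2)) f = rhs in the Fourier
    domain, where H, D1, D2 are circular convolutions."""
    shape = rhs_img.shape
    H = psf2otf(psf, shape)
    D1 = psf2otf(np.array([[1.0], [-1.0]]), shape)  # vertical difference
    D2 = psf2otf(np.array([[1.0, -1.0]]), shape)    # horizontal difference
    denom = d1 * np.abs(H) ** 2 + d2 * (np.abs(D1) ** 2 + np.abs(D2) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs_img) / denom))
```

Because every operator is diagonalized by the same FFT, the "inversion" is a single element-wise division; for a normalized PSF the DC entry of the denominator equals d1 > 0, so no zero division occurs.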

w-subproblem.
Finally, the updating scheme of the Lagrangian multipliers can be written as

w₁^{(k+1)} = w₁^{(k)} + Hf^{(k+1)} + b − x₁^{(k+1)},
w₂^{(k+1)} = w₂^{(k)} + ∇f^{(k+1)} − x₂^{(k+1)},
w₃^{(k+1)} = w₃^{(k)} + ∇²f^{(k+1)} − x₃^{(k+1)}.

In summary, the key steps of the ADMM algorithm for solving the proposed model are described in Algorithm 3.
Algorithm 3 The ADMM algorithm for solving Eq (23)

We now discuss the convergence of the proposed algorithm. Inspired by [51], by verifying the assumptions in Lemma 1, we obtain the following convergence result for Algorithm 3. Theorem 1. The sequence (x₁, x₂, x₃, f) generated by Algorithm 3 with any sufficiently large δ and any starting point converges to a stationary point of the augmented Lagrangian L_δ.
Proof. Since our model fits the framework of Eq (16), it remains only to check S1-S5 of Lemma 1. It is easy to verify that D_KL(x₁‖g) is coercive [58], as are φ_O(x₂) and ‖x₃‖_p^p; thus S1 holds. The structure of the constraint Ax + By = 0 implies S2. S3 holds because both A and B are full column rank matrices. D_KL(x₁‖g) is lower semi-continuous and the ℓ_p quasi-norm is restricted prox-regular [51]; furthermore, φ_O(x₂) is convex, hence restricted prox-regular, so S4 holds. It is clear that S5 holds, which completes the proof.
The convergence of Algorithm 3 can also be verified experimentally.

Numerical experiments
In this section, we present numerical experiments illustrating the effectiveness of the proposed method for Poisson noisy image restoration, compared with closely related state-of-the-art methods including TVOGS [4], SAHTV [5], SB-FA [38] and FT-ADMM [37]. The TVOGS method introduced the overlapping group sparse TV prior combined with a box constraint as a regularizer and solved the optimization model under the ADMM framework, but denoising is outside its scope. In the SAHTV method, Liu et al. [5] combined the first- and second-order TV priors and updated their pixel-wise weighting coefficients with local information during the consecutive estimation of the latent image. FT-ADMM, whose regularizer is the ℓ₀ norm of the wavelet representation of the image, was also proposed to efficiently eliminate staircase artifacts. Fang and Yan [38] proposed to combine the ℓ₁ norm of the framelet representation of the latent image with the TV prior and solved the resulting model by the split Bregman method. Since their numerical experiments recommended using the SB-FA algorithm without the TV prior, we adopt SB-FA as the competing method rather than SB-FATV. All experiments in this paper are implemented on Windows 10 64-bit and Matlab R2016b running on a desktop computer with an Intel Core i5-4590 CPU at 3.3 GHz and 8 GB of RAM. The code for our algorithm is available at https://github.com/KSJhon/PoissonDeblur_hybrid.
To proceed with the simulation experiments, we intentionally degrade clean images to construct the corrupted ones. More specifically, an original clean image is first scaled to a preset peak value MAX_f, which determines the level of Poisson noise. The scaled image is then convolved with a given PSF to simulate the blurring effect, and further contaminated by signal-dependent Poisson noise. In the following experiments, we set MAX_f to 100, 200, 300, and 350, where a lower MAX_f indicates a relatively higher noise level. For the deblurring simulations, we consider three types of blur kernels: (1) a 9 × 9 Gaussian kernel with standard deviation 1; (2) a linear motion blur with motion length 5 and angle 45° in the counterclockwise direction; (3) a kernel from Levin et al.'s public dataset [59]. All blurring operations are performed with the Matlab built-in function "fspecial", and the Poisson noise is generated by "poissrnd" without any additional parameter. The test images, of various sizes, are shown in Fig 1. We quantitatively evaluate the performance of the proposed and competing methods by means of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) of the restored images. In the experiments, we terminate the iteration in Algorithm 3 when the relative error (RelErr) between two consecutive estimates falls below a predefined tolerance level ε:

RelErr = ‖f^{(k+1)} − f^{(k)}‖₂ / ‖f^{(k)}‖₂ ≤ ε.
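The two quantities used for evaluation and stopping are standard; for completeness, here are minimal NumPy versions (SSIM is omitted, since a library implementation such as Matlab's `ssim` or an equivalent would normally be used):

```python
import numpy as np

def psnr(f_hat, f_true, peak):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((f_hat - f_true) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rel_err(f_new, f_old):
    """Relative change between consecutive iterates, used as stopping rule."""
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_old)
```

In the experiments below, iteration stops once `rel_err` falls below ε = 10⁻³, matching the tolerance used by all compared methods.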

Selection of parameters
Since the convergence of the nonconvex ADMM and its suboptimization in Algorithm 3 depends on the parameter values, they require careful tuning. For the choice of p, as in the experiments reported in [40], we set p = 0.1 throughout, considering an important property of the ℓ_p norm: taking p sufficiently close to 0 favors preserving sharp edges in the restored image.
The group size K also plays an important role in balancing the trade-off between the global noise-filtering performance and the computational cost. To find the optimal K, we conduct a denoising experiment varying K while keeping the other parameters fixed, as shown in Fig 2. Clearly, K = 3 is the best choice for all noise levels, so we fix K = 3 in the following experiments. In addition, the number of inner iterations N_in also affects the MM algorithm; we simply fix N_in = 5 as indicated in [4,29,39,40]. To balance accuracy and speed, we empirically set N_out = 50 and ε = 10⁻³. The other parameters, including the regularization parameters (λ, η) and the penalty parameter δ, are manually adjusted to achieve the highest PSNR and SSIM values. In the following subsections, we compare the proposed method (named HTVp-OGS) with the state-of-the-art methods for denoising and deblurring images under Poisson noise. Unless otherwise indicated, all parameters of the competing algorithms are carefully selected near their given default values to give the best respective performance.

Denoising
Here, we show that our model provides better results in Poisson noise removal through the comparison with the state-of-the-art methods: SB-FA [38], SAHTV [5], and FT-ADMM [37].
To evaluate the denoising performance, we set H to the identity matrix and empirically fix δ = (5 × 10⁻³, 3 × 10⁻², 2 × 10⁻⁴)ᵀ. We observe that λ = 3 · MAX_f brings satisfactory results for Eq (23). The other regularization parameter η is hand-tuned to obtain the best denoising performance according to the noise level. The denoising results, in terms of PSNR and SSIM values, are shown in Table 1, from which it is obvious that the values for the images denoised by our model are higher than those of the other three methods (SB-FA, SAHTV, FT-ADMM). Fig 3 displays the denoised versions and zoomed-in regions obtained by our method and the competing methods from noisy images with noise level MAX_f = 100.

Deblurring
In our deblurring experiments, we consider three types of blur kernels. As mentioned above, the first two are synthetic, respectively mimicking the effects of out-of-focus and motion blur, and the last is a blur kernel from Levin et al.'s dataset [59]. For the sake of a fair comparison, we set the same tolerance level ε = 10⁻³ and iteration budget for all methods; the detailed results are reported in Table 2.
Setting 3 (ground truth blur case): In this case, we set η = 4, 3, 2, 0.2, and the detailed results are summarized in Table 4. In Fig 4, we show the degraded "dolphin" images, blurred by the corresponding kernels and further corrupted by Poisson noise of level 200, together with the restored versions obtained by our method and the competing methods. Fig 5(a) depicts the RelErr curves for the sequence of intermediate estimates of the "lotus" image during the HTVp-OGS iterations, and Fig 5(b) shows that the PSNR and SSIM values increase steadily as the iteration proceeds. From the results of the above three experiments, we see that our method provides the best PSNR and SSIM values in most cases; in a few extreme cases it yields a slightly lower PSNR than the others, while the SSIM remains the best. It is worth noting that the transform-based methods (SB-FA and FT-ADMM) are quite time-consuming because they inherently involve complicated nonlinear operations [3], whereas our method works in the gradient domain and thus takes a relatively short running time.

Conclusions
In this paper, we propose a new method for restoring Poissonian images (covering both denoising and deblurring) using a hybrid regularizer, which combines the overlapping group sparse total variation with the high-order nonconvex total variation. This regularizer exploits the advantages of both priors to alleviate staircase artifacts while simultaneously preserving the original sharp edges. The model, derived from the Bayesian perspective, is solved effectively within the nonconvex and nonsmooth ADMM optimization framework; we adopt the MM and IRLS algorithms for the subproblems associated with the OGS and nonconvex second-order total variation priors, respectively. Numerical experiments demonstrate that the proposed method outperforms the other related methods in terms of PSNR and SSIM values. Adaptively determining the optimal parameters according to image content and noise level is left for future work.
Supporting information S1 File.