
Efficient preconditioning strategies for accelerating GMRES in block-structured nonlinear systems for image deblurring

Abstract

We propose an efficient preconditioning strategy to accelerate the convergence of Krylov subspace methods, specifically for solving complex nonlinear systems with a block five-by-five structure, commonly found in cell-centered finite difference discretizations for image deblurring using mean curvature techniques. Our method introduces two innovative preconditioned matrices, analyzed spectrally to show a favorable eigenvalue distribution that accelerates convergence in the Generalized Minimal Residual (GMRES) method. This technique significantly improves image quality, as measured by peak signal-to-noise ratio (PSNR), and demonstrates faster convergence compared to traditional GMRES, requiring minimal CPU time and few iterations for exceptional deblurring performance. The preconditioned matrices’ eigenvalues cluster around 1, indicating a beneficial spectral distribution. The source code is available at https://github.com/shahbaz1982/Precondition-Matrix.

1 Introduction

For the past three decades, research in image deblurring has focused on nonlinear variational methods. Applying these techniques to large, blurry, noisy images faces two primary obstacles: a large-scale system of equations and nonlinearity, which arise when discretization is applied after linearization. The purpose of this paper is to address the challenges associated with these computations. Mean Curvature (MC) regularization is a nonlinear variational model commonly used in image deblurring; it is used and explained in detail in previous work [17, 34, 40, 42, 43]. MC-based regularization models can reduce the staircase effect while preserving edges, which is why they are often used during image recovery. Discretizing the Euler-Lagrange equations leads to a large, nonlinear system that is often ill-conditioned, which poses significant challenges for the stability and convergence of computational methods. In the mean curvature-driven system, the Jacobian matrix displays a segmented, banded structure with considerable bandwidth, highlighting the need for a robust and efficient computational strategy. The next section introduces the block structure that arises from the discretization of the mean curvature-based image deblurring problem, as discussed in [20–22].

Addressing these systems presents a significant challenge for computational techniques, even with established Krylov subspace methods such as the Generalized Minimal Residual (GMRES) method. These methods often exhibit slow convergence, reducing efficiency. A promising remedy is preconditioning, as discussed in recent studies [4, 5, 10, 24, 29]. In practice, preconditioners are often applied inexactly through inner iterations, frequently using methods such as Flexible GMRES (FGMRES) [13, 26]. Moreover, many iterative methods depend on tuning parameters whose optimal values are difficult to determine when they are combined with preconditioned Krylov subspace methods.

This paper presents novel block preconditioners designed specifically for the five-by-five block structure of the mean curvature-based image deblurring problem, eliminating the need for supplementary parameters. These preconditioners aim to improve efficiency while simplifying the solution process. We introduce a partitioning strategy for the coefficient matrix A that ensures stable and consistent convergence of the iterative method without additional conditions. This yields two novel preconditioners that notably improve the convergence rates of Krylov subspace techniques. The preconditioned matrices exhibit a clustering of eigenvalues around 1, indicating improved convergence for the preconditioned GMRES method. The primary contributions of this study are summarized below.

We introduce novel block preconditioners for the five-by-five block structure of the MC-based image deblurring problem, together with an unconditionally convergent partitioning strategy. A spectral analysis ensures a favorable eigenvalue distribution and faster convergence, comprehensive experiments validate the theoretical findings, and we compare our approach with state-of-the-art methods. The paper is organized as follows: Sect 2 presents the problem formulation, the discretization technique, and the block structure arising from the mean curvature-based image restoration problem. Sect 3 provides details on Krylov subspace methods. Various iterative methods and the proposed preconditioners, along with an analysis of the eigenvalue distribution, are discussed in Sect 4. Sect 5 evaluates the performance of the proposed preconditioners in comparison with existing numerical techniques for the image deblurring problem. Sect 6 concludes the study.

2 Problem description

The main objective of the present study is to develop a robust solution to the challenging problem of image deblurring, with the aim of significantly improving image clarity and quality. We begin with a brief overview of the problem, highlighting its key aspects and challenges. Mathematically, the connection between the original image u and the observed (measured) image z is represented as follows:

(1)

In this formula, denotes the noise function, which can take different forms, including Gaussian noise, salt-and-pepper noise, and so on. In this paper, we focus specifically on Gaussian noise. The blurring operator is characterized as a Fredholm integral operator of the first kind, which models the degradation process through integration over a specific domain:

This operator exhibits translation invariance, meaning its behavior is the same at every spatial location. The problem formulated in (Eq 1) is ill-posed [2, 31–33, 35], mainly because the operator is compact, which makes a stable, well-defined solution difficult to obtain: small perturbations in the data can lead to significant deviations in the outcome, so specialized techniques are needed to stabilize and solve the problem effectively. Let denote a square region within , where the function u defined on represents the intensity values of the image. In this context, u is a spatially dependent variable encoding the brightness or color intensity of each pixel in a continuous domain, and it is the target of the restoration process. A point in the domain is denoted by , where represents the Euclidean distance of the point from the origin (the Euclidean norm), and signifies the norm on the square domain . (Eq 1) describes an inversion problem: the goal is to reconstruct the true image u from the observed data z. Without further regularization, the task lacks guarantees of uniqueness or stability, since the reconstruction is sensitive to small perturbations or noise in the observed data [2, 35].
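The degradation model in (Eq 1) can be made concrete with a small numerical sketch. The following is a minimal illustration (not the paper's code) of a translation-invariant blur applied as circular convolution via the FFT, plus additive Gaussian noise; the image size, kernel width, and noise level are arbitrary choices:

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Gaussian point-spread function on an n x n grid (illustrative)."""
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(u, psf):
    """Apply the translation-invariant blurring operator K as circular
    convolution, evaluated with the 2-D FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(u) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

rng = np.random.default_rng(0)
u = np.zeros((64, 64)); u[24:40, 24:40] = 1.0        # toy "true" image
psf = gaussian_kernel(64, 2.0)
z = blur(u, psf) + 0.01 * rng.standard_normal(u.shape)  # z = Ku + noise
```

Because the kernel is normalized, the blur preserves the total intensity of the image; only the noise term perturbs it.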

A frequently adopted strategy to enhance the stability of the image deblurring procedure is the inclusion of a regularization term based on mean curvature (MC) [16, 17, 19, 30, 34, 40, 42, 43]. This regularization technique aims to preserve important features in the image while minimizing unnecessary noise. The MC functional is mathematically formulated as follows:

As a result, the original problem outlined in (1) is reformulated into an optimization task where the goal is to identify the image u that minimizes the following functional. This revised formulation incorporates regularization to achieve a balance between fidelity to the observed data and smoothness in the reconstructed image:

(2)

Within this approach, the regularization constant plays a pivotal role in balancing data fidelity against the smoothness of the solution. The stability and solvability of the problem described by (Eq 2) are examined in the context of synthetic image denoising in [42]; this examination provides essential insight into the key factors influencing the method's performance, highlighting both its advantages and possible drawbacks. The Euler-Lagrange equations corresponding to the problem defined in (Eq 2) are subsequently derived, yielding the necessary conditions for an optimal solution. These conditions are fundamental to understanding the relationship between the variables and to guiding the solution process effectively.

(3)(4)

In this context, represents the adjoint of the operator , which plays a pivotal role in the optimization process. Additionally, introducing helps to avoid any issues with non-differentiability at zero, ensuring that the optimization procedure remains smooth and stable. Since the functional is twice differentiable, (Eq 3) describes a fourth-order nonlinear differential equation, which captures the core dynamics of the problem in a mathematical framework.

The expression in (3) simplifies to a first-order system

(5)(6)(7)(8)(9)

by the following substitutions

and

In tackling the image deblurring problem through mean curvature, an efficient method is utilized to improve the restoration process. Here, we give a concise description of the discretization discussed in [20–22]. The domain is partitioned into grid cells, each of size . The points represent the centers of the cells, defined as:

Here, nx and ny denote the number of equispaced partitions in the x and y directions, respectively. For simplicity, we assume and . The points and denote the midpoints of the cell edges:

For each , we define the following subdomains:

In the context of discrete functions, we require values at specific points to represent the function accurately. Let denote the discrete representation of , where l and m can take values such as j, , k, or . Note that j and k are non-negative integers. This definition allows us to compute the required values at discrete points for accurate representation. Specifically, we define the discrete derivatives as:

Using the midpoint quadrature approximation, we approximate the operator as:

This partitioning approach facilitates the numerical analysis of the problem. Using a lexicographical ordering of the unknowns and applying the cell-centered finite difference (CCFD) method to (Eqs 5)–(9), we arrive at the following system:

(10)(11)(12)(13)(14)

The integral term is approximated using the midpoint quadrature rule. The matrices Kh, Ah, and Ih each have size . The matrix Bh has size . The matrices Ch and Dh each have size . As a result, we arrive at the following system:

The matrix Kh has a Block Toeplitz with Toeplitz blocks (BTTB) structure, which enhances computational efficiency in large-scale problems, and the product is symmetric positive definite (SPD), which ensures stability in numerical computations. The matrix Ah can be expressed as a diagonal matrix, arranged in the following manner:

In this framework, the matrices A1 and A2 are of size and are specified as follows:

where represents the tensor product. The identity matrix has dimensions . The matrix

has dimensions . The matrix Bh exhibits the following structure:

where both B1 and B2 have dimensions , and

is a matrix with dimensions . The matrix

The matrix is structured as a diagonal matrix, with its elements derived from the discretization of the term . The matrix Cx has dimensions , while Cy is of size . In a similar fashion, the matrix Dh is diagonal, with positive values on its diagonal. The diagonal values of Dh are computed by discretizing the expression . The matrix Dh is structured as follows:

In recent years, significant advancements have been achieved in the development of efficient iterative techniques for solving large block matrix problems [8, 9, 15, 28, 38, 39, 41]. In particular, the integration of Krylov subspace methods with suitable preconditioning strategies has demonstrated remarkable effectiveness in addressing these complex challenges [14, 25, 27].
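The BTTB structure of Kh noted above can be illustrated with a small sketch (illustrative only; in practice Kh is never formed densely). The following assembles the dense convolution matrix of a 2-D kernel and checks that the resulting normal matrix is symmetric positive semidefinite, consistent with the SPD property used later:

```python
import numpy as np

def bttb(kernel):
    """Assemble the dense BTTB matrix of a 2-D convolution kernel given on an
    n x n grid with center at index n//2. Entry ((i,j),(p,q)) depends only on
    the shift (i-p, j-q), which is exactly the BTTB property."""
    n = kernel.shape[0]
    c = n // 2
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            for p in range(n):
                for q in range(n):
                    di, dj = i - p + c, j - q + c
                    if 0 <= di < n and 0 <= dj < n:
                        K[i * n + j, p * n + q] = kernel[di, dj]
    return K

n = 5
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
kern = np.exp(-(xx**2 + yy**2) / 2.0)
kern /= kern.sum()
Kh = bttb(kern)
G = Kh.T @ Kh   # normal-equations block: symmetric positive (semi)definite
```

Identical blocks repeat along the block diagonals, and each block is itself Toeplitz, which is what FFT-based fast matrix-vector products exploit.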

3 Krylov subspace methods

Krylov subspace methods are a class of iterative algorithms used for solving large, sparse systems of linear equations of the form , where A is a nonsingular matrix. These methods construct approximate solutions in a Krylov subspace, defined as:

where is the initial residual. Krylov methods iteratively refine the solution by projecting the problem onto this subspace. The GMRES method is a widely used Krylov subspace method for solving nonsymmetric or non-Hermitian systems. At the m-th iteration, GMRES computes the solution that minimizes the residual over the subspace . This is achieved using the Arnoldi process to construct an orthonormal basis of , and then solving a small least-squares problem at each iteration.
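The Arnoldi-plus-least-squares loop described above can be sketched compactly. This is a textbook unrestarted GMRES with modified Gram-Schmidt, not the paper's implementation:

```python
import numpy as np

def gmres_minimal(A, b, m=50, tol=1e-10):
    """Textbook GMRES: build an orthonormal Krylov basis with the Arnoldi
    process, then minimize the residual via a small least-squares problem."""
    n = b.size
    x0 = np.zeros(n)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                     # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2); e1[0] = beta
        # small least-squares problem: min || beta*e1 - H y ||
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        resnorm = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if H[j + 1, j] < 1e-14 or resnorm < tol * beta:
            return x0 + Q[:, :j + 1] @ y, j + 1    # converged (or breakdown)
        Q[:, j + 1] = w / H[j + 1, j]
    return x0 + Q[:, :m] @ y, m
```

For an n x n nonsingular system the exact solution is reached in at most n iterations, since the Krylov space cannot grow beyond dimension n.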

GMRES is particularly effective for nonsymmetric systems, but its performance can degrade for ill-conditioned problems. To address this, preconditioning is often employed [3, 4, 6, 10, 29]. To accelerate convergence, GMRES is often coupled with a preconditioner, which transforms the original system into

where M is a preconditioner matrix that approximates A but is easier to invert.
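The effect of such a transformation can be illustrated numerically with a simple Jacobi (diagonal) preconditioner on a toy ill-conditioned system; this is only a minimal demonstration of the principle, not the preconditioners proposed in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Ill-conditioned toy system: widely spread diagonal plus a small perturbation.
A = np.diag(np.linspace(1.0, 1e4, n)) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)

# Jacobi preconditioner: M approximates A but is trivial to invert.
M = np.diag(np.diag(A))
precA = np.linalg.solve(M, A)      # the left-preconditioned matrix M^{-1} A

print(np.linalg.cond(A), np.linalg.cond(precA))
```

The condition number drops by several orders of magnitude, and the eigenvalues of M⁻¹A cluster near 1, which is precisely the behavior a good preconditioner is designed to produce.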

Consequently, Krylov subspace methods combined with preconditioning are expected to show significantly enhanced convergence behavior. In the following section, we introduce two novel preconditioners aimed at accelerating the convergence of Krylov subspace methods applied to the image deblurring problem.

4 Novel preconditioned matrices

This section begins with an overview of fundamental concepts of iterative methods, which underpin the efficient solution of large-scale systems in applications such as image deblurring. Consider a matrix ; it admits a splitting A = P − R, where P is an invertible matrix and R is the residual matrix of the splitting. A basic stationary iterative scheme for solving the system (10)–(14) can then be expressed as follows:

(15)

In the wider framework of iterative techniques, it is well-known that the iterative scheme (15) will ultimately converge, irrespective of the initial value , provided that the spectral radius of the iteration matrix P−1R remains strictly below one, i.e., .
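The convergence condition ρ(P⁻¹R) < 1 can be checked directly on a toy splitting; the choice of A and P below is arbitrary and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = 4.0 * np.eye(n) + rng.standard_normal((n, n)) / n   # toy diagonally dominant A
b = rng.standard_normal(n)

# Splitting A = P - R with an easily inverted P; iterate x_{k+1} = P^{-1}(R x_k + b).
P = 4.0 * np.eye(n)
R = P - A
G = np.linalg.solve(P, R)                   # iteration matrix P^{-1} R
rho = np.max(np.abs(np.linalg.eigvals(G)))  # spectral radius
assert rho < 1.0                            # the convergence condition

x = np.zeros(n)
for _ in range(200):
    x = np.linalg.solve(P, R @ x + b)
```

Since ρ(P⁻¹R) is well below 1 here, the iterates converge to the solution of Ax = b from any starting vector.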

For the first preconditioner P1, we explore a block factorization of the matrix A related to the system (10)–(14), given as follows:

(16)

Based on this factorization, the splitting matrices P1 and R1 associated with the coefficient matrix A of the system (10)–(14) are defined as follows:

(17)

where

As a result, the system (10)–(14) naturally leads to the following block structure:

(18)

where and . To solve the system (18) starting with an arbitrary initial guess , we can formulate the following iterative method

(19)

The matrix P1 in (17) can be expressed in its factored form as:

(20)

Given the decomposition described earlier, it is clear that the matrix P1 is invertible, leading to the following result:

(21)

In order to guarantee the convergence of the iterative process (19), we now introduce the following theorem.

Theorem 4.1

Consider a symmetric positive definite (SPD) matrix and a full-rank matrix . Then, for any initial guess and , the iterative process (19) converges to the unique solution of (18).

Proof

Since is symmetric positive definite (SPD) and I is the identity matrix, it follows that Q is SPD. Furthermore, since Bh is of full rank, M must also be a full rank matrix. Consequently, from (Eq 21), we have

That is

Thus, we can conclude that the eigenvalues of the matrix are indeed equal to zero.

Therefore, it follows that .

This concludes the proof.

In practical applications, the convergence of the stationary iterative method (19) may be too slow to solve the system (18) efficiently. To address this limitation, the primary goal is to employ the matrix P1 as a preconditioner for Krylov subspace techniques such as GMRES, thereby speeding up convergence and enhancing the overall computational effectiveness of solving the system. To apply the preconditioner P1, a system of equations of the following form must be solved, ensuring that the process remains both effective and efficient:

(22)

The simple procedure for calculating is outlined in Algorithm 1.

Algorithm 1. Computation of .

1: Solve for z1;

2: Solve for z2;

3: Solve for z3;

4: Solve for z4;

5: Solve for z5;

     where and .
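Since the block equations in Algorithm 1 are solved one after another, with earlier components substituted into later equations, applying P1 amounts to a block forward substitution. The following is a generic sketch under the assumption of a block lower-triangular preconditioner; the actual blocks of P1 come from (17) and are elided here:

```python
import numpy as np

def block_lower_solve(blocks, r):
    """Apply a block lower-triangular preconditioner by forward substitution.
    blocks[i] holds the i-th block row [P_i0, ..., P_ii] with the diagonal
    block last; r is the list of right-hand-side blocks."""
    z = []
    for i, row in enumerate(blocks):
        rhs = np.array(r[i], dtype=float)
        for j in range(i):
            rhs -= row[j] @ z[j]                 # substitute the known z_j
        z.append(np.linalg.solve(row[i], rhs))   # solve with the diagonal block
    return z
```

Each diagonal block is solved exactly once per application, which is why the cost of the preconditioner is dominated by the individual block solves discussed next.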

When utilizing the preconditioner P1, one step requires inverting the matrix , which can be managed with ease: the matrix is symmetric positive definite. Although is dense, the blurring operator K is translation invariant, which allows the Fast Fourier Transform (FFT) to evaluate in O(n log n) operations [36]. Applying the preconditioner P1 also involves operations with , which require several matrix-vector multiplications. Solving the linear system with coefficient matrix is the most computationally intensive part of applying P1. Nevertheless, since Dh is a diagonal matrix with strictly positive diagonal entries, it is inherently symmetric positive definite. This property allows the system to be solved efficiently, using the preconditioned conjugate gradient (PCG) method for inexact solutions or an LU decomposition for exact ones [9, 11, 12, 23]. The inversion of Dh itself when using P1 is straightforward since Dh is diagonal.
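The FFT-based O(n log n) solve mentioned above can be sketched as follows, assuming periodic boundary conditions so that the translation-invariant operator K is diagonalized by the 2-D FFT. This is a standard simplification for illustration and not necessarily the authors' exact treatment of boundaries:

```python
import numpy as np

def solve_normal_fft(psf, rhs, alpha=1e-2):
    """Solve (K^T K + alpha*I) x = rhs in O(n log n) with the 2-D FFT.
    Under periodic boundary conditions K is circulant-like, so its
    eigenvalues are the Fourier coefficients of the PSF."""
    S = np.fft.fft2(np.fft.ifftshift(psf))           # eigenvalues of K
    x_hat = np.fft.fft2(rhs) / (np.abs(S) ** 2 + alpha)
    return np.real(np.fft.ifft2(x_hat))
```

The whole solve is three FFTs and one elementwise division, which is why this block of the preconditioner is cheap despite being dense.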

We now analyze the eigenvalues of the preconditioned matrix , which offer crucial insight into the performance and efficiency of Krylov subspace methods.

Theorem 4.2

Let be a symmetric positive definite (SPD) matrix, and let be a matrix of full rank. Then, for the matrix A, the preconditioner P1 satisfies the following conditions:

Here, denotes the set of eigenvalues of a matrix.

Proof

It can be readily confirmed that

This concludes the proof.

Building upon the previous eigenvalue analysis, it becomes evident that the preconditioned matrix has a minimal polynomial of degree five. This implies that utilizing a technique like GMRES with this preconditioner will lead to exact convergence within a maximum of five iterations.
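The "at most five iterations" claim can be illustrated numerically: for any matrix whose minimal polynomial has degree five, the degree-5 Krylov space already contains the solution. The construction below is a toy diagonalizable matrix with five distinct eigenvalues, not the actual preconditioned matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
# A diagonalizable matrix whose spectrum has only 5 distinct nonzero
# eigenvalues, mimicking the structure guaranteed by Theorem 4.2.
eigs = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
D = np.diag(eigs[rng.integers(0, 5, n)])
V = rng.standard_normal((n, n))
T = V @ D @ np.linalg.inv(V)
b = rng.standard_normal(n)

# The minimal polynomial p of T has degree at most 5 with p(0) != 0, so
# T^{-1} b lies in K_5 = span{b, Tb, ..., T^4 b}; the residual minimized
# over T*K_5 (what GMRES does at step 5) is numerically zero.
K5 = np.column_stack([np.linalg.matrix_power(T, j) @ b for j in range(5)])
c, *_ = np.linalg.lstsq(T @ K5, b, rcond=None)
res5 = np.linalg.norm(b - T @ (K5 @ c))
print(res5)
```

This is exactly the mechanism behind the five-iteration bound: GMRES terminates once the Krylov space reaches the degree of the minimal polynomial.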

To further enhance the convergence behavior of the GMRES method, we introduce a second preconditioner, labeled P2. To implement it, we examine a block decomposition of the matrix A associated with the system (10)–(14), which can be represented as:

(23)

By applying the substitution outlined above, we can express P2 and R2 as follows:

(24)

In order to solve the system (18) starting from an arbitrary initial estimate , the following iterative approach can be applied:

(25)

The matrix P2 in (24) can be written as:

(26)

The above decomposition yields the following result:

(27)

We present the following theorem, which provides the guarantee for the convergence of the iterative technique (25).

Theorem 4.3

Consider a symmetric positive definite (SPD) matrix , and let be a matrix of full rank. Then, for any arbitrary initial guess and , the iterative process described by (Eq 25) converges to the unique solution of the system (18).

Proof

Since is symmetric positive definite (SPD) and I is the identity matrix, it follows that Q must also be SPD. Furthermore, as Bh is a full-rank matrix, we can conclude that M is also a matrix of full rank. Therefore, based on (Eq 27), we can deduce the following:

That is

We have demonstrated that the eigenvalues of the matrix are equal to zero.

As a result, we have .

Thus, the proof is now complete.

We now present the following theorem regarding the eigenvalues of the preconditioned matrix .

Theorem 4.4

Let be a symmetric positive definite (SPD) matrix, and let be a full-rank matrix. Then, for the matrix A, the preconditioner P2 satisfies

Proof

It is straightforward to confirm that

Hence

It is clear that the preconditioned matrix exhibits a minimal polynomial of degree five. As a result, iterative methods like GMRES, when combined with P2, are guaranteed to converge to the exact solution in no more than five iterations. To apply the preconditioner P2, it is necessary to solve a system of linear equations as outlined below.

(28)

The procedure for calculating is outlined in Algorithm 2.

Algorithm 2 closely resembles Algorithm 1 in terms of its simplicity and structure. Similar to the previous algorithm, this method includes a step that involves operations with the matrices , Dh and , which can be performed in the same manner as described in Algorithm 1 above.

Algorithm 2. Procedure for computing .

1: Solve for z1;

2: Solve for z2;

3: Solve for z3;

4: Solve for z4;

5: Solve for z5;

     where and .

5 Numerical experiments

In this section, we conduct numerical experiments on image deblurring. Specifically, we evaluate the proposed P1-GMRES and P2-GMRES methods against several existing image deblurring approaches [3, 7, 18], assessing how well each method enhances the quality and accuracy of the reconstructed images. The methodology centers on the preconditioned Generalized Minimal Residual method (PGMRES), incorporating the proposed preconditioners throughout the iterative process so that the computational load is effectively managed, leading to faster and more reliable convergence. To address the inherent nonlinearity of the MC model, the Fixed Point Iteration (FPI) method is first applied to the system (10)–(14); this step stabilizes the solution procedure and establishes a basis for more precise iterative refinement. The PGMRES algorithm is then applied to accelerate convergence and boost computational efficiency. All numerical experiments were performed on a system with an Intel® Core™ i5-6300U processor at 2.40 GHz, 8.00 GB of RAM, and MATLAB R2022a; this configuration was kept fixed to ensure reliable benchmarking of the iterative methods.
The Peak Signal-to-Noise Ratio (PSNR) is a fundamental measure for evaluating the quality of the restored images. It determines how closely the reconstructed image corresponds to the original by comparing their differences. A higher PSNR indicates a greater similarity between the two images, implying a higher restoration quality and better preservation of the original details.
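The PSNR described above can be computed with the standard formula; the sketch below assumes 8-bit grayscale images with peak value 255, since the paper's exact normalization is not shown:

```python
import numpy as np

def psnr(u_ref, u_rest, peak=255.0):
    """Peak signal-to-noise ratio (dB) of a restoration u_rest against the
    reference u_ref: 10*log10(peak^2 / MSE). Higher is closer to the original."""
    mse = np.mean((np.asarray(u_ref, float) - np.asarray(u_rest, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A restoration with smaller pixelwise errors yields a strictly larger PSNR, which is why it serves as the quality measure in the tables that follow.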

Example 1

In this work, the Clown, Cameraman, Boats, and Moon images serve as standard examples, with various feature visualizations displayed in Figs 5, 6, 7, and 8, respectively. Each subfigure in Figs 5, 6, 7, and 8 has dimensions of . Here, we compare the performance of several methods: GMRES, P1-GMRES, P2-GMRES, and PS-GMRES, as outlined in [3]. A stopping criterion with a tolerance of was used in the proposed numerical approach. For the numerical simulations, the kegen(N,300,5) kernel was utilized, with parameters and , as outlined in [20–22]. The result of the kegen(120,40,4) kernel is shown in Fig 1: a circular Gaussian kernel of size , radius r = 40, and standard deviation .
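kegen appears to be the kernel generator used in the authors' MATLAB setup; its code is not shown here. As an assumption about its parameters (size N, truncation radius r, standard deviation sigma), a circular Gaussian kernel of the kind shown in Fig 1 can be sketched as:

```python
import numpy as np

def circular_gaussian_kernel(n, r, sigma):
    """Circular Gaussian PSF of size n x n: a Gaussian with standard
    deviation sigma, truncated to a disc of radius r and normalized.
    This is our reading of kegen(N, r, sigma) -- an assumption, not the
    authors' code."""
    ax = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    rr2 = xx**2 + yy**2
    k = np.exp(-rr2 / (2.0 * sigma**2))
    k[rr2 > r**2] = 0.0           # truncate outside the radius-r disc
    return k / k.sum()
```

For example, circular_gaussian_kernel(120, 40, 4.0) produces a 120 x 120 kernel analogous to the kegen(120,40,4) kernel depicted in Fig 1.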

Fig 2. Distribution of eigenvalues for the matrices A, , , and for Clown image of size .

https://doi.org/10.1371/journal.pone.0322146.g002

Fig 3. Zoom out figures of eigenvalues distribution of matrices , and for the Clown image of size .

https://doi.org/10.1371/journal.pone.0322146.g003

Fig 4. The norm of residuals and the first 25 iterations for GMRES, , , and are provided for the Clown image of size .

https://doi.org/10.1371/journal.pone.0322146.g004

For the Clown, Cameraman, Boats, and Moon images at resolutions of , , and , the blurred PSNR values, the corresponding deblurred PSNR values and additional details are presented in Tables 1, 2, 3, and 4. These values serve as an objective assessment of the level of degradation observed in the images at each resolution.

Remarks

  1. From Figs 2 and 3, it can be discerned that the eigenvalue spectra of and are notably more advantageous compared to that of . Specifically, the eigenvalues of and tend to cluster around 1. Additionally, upon closer inspection of Fig 3, it is apparent that the eigenvalues of exhibit a tighter clustering around 1 compared to those of and .
  2. The effect of preconditioning becomes evident upon analyzing Fig 4. The PGMRES methods, utilizing preconditioners P1 and P2, require notably fewer iterations than the standard GMRES (without preconditioning) and PS to reach the desired accuracy. The GMRES method needs more than 25 iterations to achieve the desired accuracy, whereas the P1GMRES method reaches the required precision in just a few iterations for matrices of size . A comparable trend is evident for other matrix sizes.
  3. Tables 1, 2, 3, and 4 highlight that the preconditioners P1GMRES and P2GMRES not only achieve almost the same PSNR values but also significantly reduce the number of iterations required. By leveraging the PGMRES algorithm with preconditioners P1 and P2, CPU time is reduced by over 30%. As a result, the PGMRES method with P1 and P2 demonstrates superior overall performance compared to the other techniques.
  4. Figs 5, 6, 7 and 8 provide a clear depiction of the enhanced quality achieved by PGMRES when using the preconditioners P1 and P2, showing a slight improvement in overall results.

Example 2

In this experiment, we utilized three hyperspectral images, Yulin, Beijing, and Milan, from the dataset used by Pan et al. [37] for image denoising. The blurring process was performed using the kegen(N,300,10) kernel. We also compared the results with the fractional-order TV-based algorithm (TFOV), specifically the F1GMRES and F2GMRES methods, as presented by Adel Al-Mahdi [7]. Figs 9, 10, and 11 illustrate the restored images, each with dimensions of . The blurred PSNR values for the Yulin, Beijing, and Milan images are 24.2248, 24.5601, and 23.07321, respectively. A halting criterion with a tolerance of was applied in the numerical procedure. Further details on the experiment are provided in Table 5.

Fig 5. Clown image: (top left to right) original image, blurred image, and deblurred image using GMRES, (bottom left to right) deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g005

Fig 6. Cameraman image: (top left to right) original image, blurred image, and deblurred image using GMRES, (bottom left to right) deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g006

Fig 7. Boats image (Reprinted from [1] under a CC BY license, with permission from D.A. Pados, original copyright 2008): (top left to right) original image, blurred image, and deblurred image using GMRES, (bottom left to right) deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g007

Fig 8. Moon image: (top left to right) original image, blurred image, and deblurred image using GMRES, (bottom left to right) deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g008

Fig 9. Yulin image: (top left to right) exact image, blurry image, deblurred image using F1 GMRES and deblurred image using F2 GMRES, (bottom left to right) deblurred image using GMRES, deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g009

Fig 10. Beijing image: (top left to right) exact image, blurry image, deblurred image using F1 GMRES and deblurred image using F2 GMRES, (bottom left to right) deblurred image using GMRES, deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g010

Fig 11. Milan image: (top left to right) exact image, blurry image, deblurred image using F1 GMRES and deblurred image using F2 GMRES, (bottom left to right) deblurred image using GMRES, deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g011

Remarks

  1. In this experiment, the MC-based method achieved higher PSNR compared to the TFOV-based methods. This can be observed from Table 5. Additionally, our proposed P1GMRES, and P2GMRES methods exhibited significantly faster CPU processing times while maintaining comparable PSNR values to other approaches. This demonstrates that our techniques are both more efficient and computationally faster for hyperspectral image deblurring.
  2. Figs 9, 10, and 11 provide a clear depiction of the enhanced quality achieved by PGMRES when using the preconditioners P1 and P2, showing a slight improvement in overall results.
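For reference, the PSNR figures reported in the tables follow the standard definition based on the mean squared error against the exact image; a minimal sketch in plain NumPy (the function name `psnr` and the 8-bit peak of 255 are our own assumptions):

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images: PSNR is unbounded
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform offset of 16 gray levels gives MSE = 256,
# hence PSNR = 10*log10(255^2 / 256) ≈ 24.05 dB.
a = np.full((64, 64), 100.0)
b = a + 16.0
print(round(psnr(a, b), 2))  # → 24.05
```

A higher PSNR indicates a restored image closer to the exact one, which is how the tables rank the methods.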

Example 3

We used a satellite image from Chowdhury et al. [18] for this example and introduced observable artifacts by applying blurring and Poisson noise to it. A Gaussian kernel defined by was used to perform the blurring. We compared the outcomes with the TFOV-based methods (F1GMRES and F2GMRES) published by Al-Mahdi [7] and the non-blind fractional-order TV-based algorithm (NFOV) published by Chowdhury et al. [18]. Fig 12 displays the restored images, each with a size of . Two different sizes were used to test the methods: 128 and 64. This resulted in blurry PSNR values of 20.4559 and 20.2985, respectively. A halting criterion with a tolerance of was employed in the numerical procedure. Table 6 provides more details on the experiment.
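The degradation pipeline described above (Gaussian blur followed by Poisson noise) can be sketched as follows; the kernel size, width `sigma`, and test image are illustrative assumptions, not the exact settings of the experiment:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_and_poisson(image, size=7, sigma=2.0, seed=0):
    """Blur via FFT-based circular convolution, then apply Poisson noise."""
    k = gaussian_kernel(size, sigma)
    pad = np.zeros_like(image, dtype=np.float64)
    pad[:size, :size] = k
    # Shift the kernel's center to index (0, 0) so the blur introduces no translation.
    pad = np.roll(pad, (-(size // 2), -(size // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
    rng = np.random.default_rng(seed)
    # Poisson noise: each pixel becomes a Poisson draw with mean = blurred intensity.
    return rng.poisson(np.clip(blurred, 0, None)).astype(np.float64)

img = np.zeros((64, 64))
img[24:40, 24:40] = 200.0  # synthetic bright square as a stand-in image
g = blur_and_poisson(img)
```

The circular (periodic) convolution matches the structure that makes FFT-based preconditioning attractive; other boundary conditions would change the blur operator's structure.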

Fig 12. Satel image: (top left to right) blurry image, deblurred image using NFOV, deblurred image using F1GMRES, and deblurred image using F2GMRES, (bottom left to right) deblurred image using GMRES, deblurred image using PsGMRES, deblurred image using P1GMRES, and deblurred image using P2GMRES.

https://doi.org/10.1371/journal.pone.0322146.g012

Remarks

In this experiment, P1GMRES and P2GMRES demonstrate significantly faster CPU processing times while achieving the same PSNR values as the other methods. This indicates that our techniques are both more efficient and faster than the GMRES, F1GMRES, F2GMRES, and NFOV methods.

6 Conclusions

This research presents a novel block preconditioning strategy specifically designed for the block matrix system arising from the discretization of the Euler-Lagrange equations in curvature-driven image deblurring. The proposed preconditioners, P1 and P2, are assessed for performance against the standard GMRES method and against several existing image deblurring approaches [3, 7, 18]. Theoretical analysis indicates that the iterative approach achieves guaranteed convergence when paired with a suitable matrix decomposition, underscoring the robustness and dependability of the proposed solution.

Beyond verifying convergence, we perform a spectral analysis of the preconditioned matrices, offering a deeper understanding of their eigenvalue distributions and further reinforcing the effectiveness of the preconditioning approach. The performance of the preconditioners is demonstrated through comprehensive numerical experiments, whose findings consistently show improved restoration quality, as evidenced by higher PSNR values. Specifically, the proposed preconditioners P1 and P2 offer a modest improvement in convergence speed over the standard GMRES method, requiring fewer iterations and less computational time for effective image deblurring. Moreover, the eigenvalues of the preconditioned matrices cluster tightly around 1, highlighting the robustness and efficiency of the proposed methodology.
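The clustering effect can be illustrated with a small synthetic sketch in plain NumPy (the matrices below are illustrative and unrelated to the discretized deblurring system): when the preconditioner captures the dominant part of the coefficient matrix, the preconditioned spectrum collapses toward 1.

```python
import numpy as np

# Synthetic sketch: A = D + small perturbation, where D is a dominant,
# easily inverted part playing the role of the preconditioner.
n = 200
rng = np.random.default_rng(0)
D = np.diag(np.linspace(1.0, 100.0, n))
A = D + 0.01 * rng.standard_normal((n, n))

ev_A = np.linalg.eigvals(A)                          # widely spread spectrum
ev_prec = np.linalg.eigvals(np.linalg.solve(D, A))   # spectrum of D^{-1} A

spread_A = ev_A.real.max() / ev_A.real.min()  # large ratio for A itself
cluster = np.abs(ev_prec - 1.0).max()         # all eigenvalues of D^{-1}A near 1
print(f"spread of A: {spread_A:.1f}, max |lambda(D^-1 A) - 1|: {cluster:.3f}")
```

Since GMRES convergence is governed by how well a low-degree polynomial can be small on the spectrum, a tight cluster around 1 (away from 0) is precisely the distribution that yields fast convergence.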

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Group Research Project under grant number RGP2/461/45.

The authors extend their appreciation to Northern Border University, Saudi Arabia, for supporting this work through project number (NBU-CRP-2025-3030).

References

  1. Wei L, Pados DA, Batalama SN, Medley MJ. Sum-SINR/sum-capacity optimal multisignature spread-spectrum steganography. Mobile Multim/Image Process Secur Appl. 2008;6982:112–21.
  2. Acar R, Vogel CR. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994;10(6):1217–29.
  3. Shahbaz A. Optimized five-by-five block preconditioning for efficient GMRES convergence in curvature-based image deblurring. Comput Math Appl. 2024;175:174–83.
  4. Shahbaz A, Al-Mahdi AM, Ahmed R. Two new preconditioners for mean curvature-based image deblurring problem. AIMS Math. 2021;6(12):13824–44.
  5. Shahbaz A, Fairag F. Circulant preconditioners for mean curvature-based image deblurring problem. J Algor Comput Technol. 2021;15:17483026211055679.
  6. Shahbaz A, Fairag F, Al-Mahdi ADM, Rahman J ul. Preconditioned augmented Lagrangian method for mean curvature image deblurring. AIMS Math. 2022;7(10):17989–8009.
  7. Al-Mahdi A. Preconditioning technique for an image deblurring problem with the total fractional-order variation model. MCA. 2023;28(5):97.
  8. Bai J, Jia L, Peng Z. A new insight on augmented Lagrangian method with applications in machine learning. J Sci Comput. 2024;99(2):53.
  9. Bakrani Balani F, Hajarian M. A new block preconditioner for weighted Toeplitz regularized least-squares problems. Int J Comput Math. 2023;100(12):2241–50.
  10. Panjeh Ali Beik F, Benzi M. Preconditioning techniques for the coupled Stokes–Darcy problem: spectral and field-of-values analysis. Numer Math. 2022;150(2):257–98.
  11. Benzi M, Faccio C. Solving linear systems of the form (A + γUUᵀ)x = b by preconditioned iterative methods. SIAM J Sci Comput. 2023;46(2):S51–70.
  12. Benzi M, Olshanskii MA. An augmented Lagrangian-based approach to the Oseen problem. SIAM J Sci Comput. 2006;28(6):2095–113.
  13. Boman EG, Higgins AJ, Szyld DB. Optimal size of the block in block GMRES on GPUs: computational model and experiments. Numer Algor. 2023;92(1):119–47.
  14. Bouyghf F, Messaoudi A, Sadok H. A unified approach to Krylov subspace methods for solving linear systems. Numer Algor. 2023;88:1–28.
  15. Chen C, Ma C. A generalized shift-splitting preconditioner for saddle point problems. Appl Math Lett. 2015;43:49–55.
  16. Nikolova M. An algorithm for total variation minimization and applications. J Math Imaging Vision. 2004;20(1/2):89–97.
  17. Chen K. Introduction to variational image-processing models and applications. Int J Comput Math. 2013;90(1):1–8.
  18. Chowdhury MR, Qin J, Lou Y. Non-blind and blind deconvolution under Poisson noise using fractional-order total variation. J Math Imaging Vis. 2020;62(9):1238–55.
  19. Deng L-J, Glowinski R, Tai X-C. A new operator splitting method for the Euler elastica model for image smoothing. SIAM J Imaging Sci. 2019;12(2):1190–230.
  20. Fairag F, Chen K, Ahmad S. Analysis of the CCFD method for MC-based image denoising problems. Electron Trans Numer Anal. 2021;54:108–27.
  21. Fairag F, Chen K, Ahmad S. An effective algorithm for mean curvature-based image deblurring problem. Comput Appl Math. 2022;41(4):176.
  22. Fairag F, Chen K, Brito-Loeza C, Ahmad S. A two-level method for image denoising and image deblurring models using mean curvature regularization. Int J Comput Math. 2022;99(4):693–713.
  23. Golub GH, Greif C. On solving block-structured indefinite linear systems. SIAM J Sci Comput. 2003;24(6):2076–92.
  24. Kim J, Ahmad S. On the preconditioning of the primal form of TFOV-based image deblurring model. Sci Rep. 2023;13(1):17422. pmid:37833460
  25. Li C, Liu Y, Wu F, Che M. Randomized block Krylov subspace algorithms for low-rank quaternion matrix approximations. Numer Algor. 2023;96(2):687–717.
  26. Lin XL, Huang X, Ng MK, Sun HW. A τ-preconditioner for a non-symmetric linear system arising from multi-dimensional Riemann-Liouville fractional diffusion equation. Numer Algor. 2023;92(1):795–813.
  27. Lund K. Adaptively restarted block Krylov subspace methods with low-synchronization skeletons. Numer Algor. 2023;93(2):731–64.
  28. Benzi M, Golub GH. A preconditioner for generalized saddle point problems. SIAM J Matrix Anal Appl. 2004;26:20–41.
  29. Ma S, Li S, Ma F. Preconditioned golden ratio primal-dual algorithm with linesearch. Numer Algor. 2024;1–31.
  30. Mobeen A, Ahmad S, Fairag F. Non-blind constraint image deblurring problem with mean curvature functional. Numer Algor. 2024;1–21.
  31. Peng J, Wang H, Cao X, Jia X, Zhang H, Meng D. Stable local-smooth principal component pursuit. SIAM J Imaging Sci. 2024;17(2):1182–205.
  32. Peng J, Wang Y, Zhang H, Wang J, Meng D. Exact decomposition of joint low rankness and local smoothness plus sparse matrices. IEEE Trans Pattern Anal Mach Intell. 2023;45(5):5766–81. pmid:36063505
  33. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D. 1992;60(1–4):259–68.
  34. Sun L, Chen K. A new iterative algorithm for mean curvature-based variational image denoising. BIT Numer Math. 2014;54(2):523–53.
  35. Tikhonov AN. Regularization of incorrectly posed problems. Sov Math Dokl. 1963;4:1624–7.
  36. Vogel CR, Oman ME. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans Image Process. 1998;7(6):813–24. pmid:18276295
  37. Xu S, Ke Q, Peng J, Cao X, Zhao Z. Pan-denoising: guided hyperspectral image denoising via weighted represent coefficient total variation. IEEE Trans Geosci Remote Sens. 2024.
  38. Niu Q, Cao Y, Du J. Shift-splitting preconditioners for saddle point problems. J Comput Appl Math. 2014;272:239–50.
  39. Zheng YL, Cao Y, Jiang MQ. A splitting preconditioner for saddle point problems. Numer Linear Algebra Appl. 2011;18:875–95.
  40. Yang F, Chen K, Yu B, Fang D. A relaxed fixed point method for a mean curvature-based denoising model. Optimiz Methods Softw. 2013;29(2):274–85.
  41. Bai Z-Z, Golub GH. Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J Numer Anal. 2007;27(1):1–23.
  42. Zhu W, Chan T. Image denoising using mean curvature of image surface. SIAM J Imaging Sci. 2012;5(1):1–32.
  43. Zhu W, Tai XC, Chan T. Augmented Lagrangian method for a mean curvature-based image denoising model. Inverse Probl Imaging. 2013;7(4):1409–32.