
Improved estimation of motion blur parameters for restoration from a single image

  • Wei Zhou ,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    mczhouwei12@gmail.com (WZ); likang@nwu.edu.cn (KL); xin_cao@163.com (XC)

    Affiliation School of Information Science and Technology, Northwest University, Xi’an, P.R.China

  • Xingxing Hao,

    Roles Conceptualization, Data curation, Writing – review & editing

    Affiliation School of Information Science and Technology, Northwest University, Xi’an, P.R.China

  • Kaidi Wang,

    Roles Data curation, Investigation, Methodology, Writing – review & editing

    Affiliation Xi’an Institute of Optics and Precision Mechanics of CAS, Xi’an, P.R.China

  • Zhenyang Zhang,

    Roles Investigation, Methodology

    Affiliation Xi’an Institute of Optics and Precision Mechanics of CAS, Xi’an, P.R.China

  • Yongxiang Yu,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliation Department of Electrical and Automatic Engineering, East China Jiaotong University, Nanchang, P.R.China

  • Haonan Su,

    Roles Conceptualization, Investigation, Methodology, Writing – review & editing

    Affiliation School of Electronic Engineering, Xidian University, Xi’an, P.R.China

  • Kang Li ,

    Roles Conceptualization, Funding acquisition, Methodology, Writing – review & editing

    mczhouwei12@gmail.com (WZ); likang@nwu.edu.cn (KL); xin_cao@163.com (XC)

    Affiliation School of Information Science and Technology, Northwest University, Xi’an, P.R.China

  • Xin Cao ,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Writing – review & editing

    mczhouwei12@gmail.com (WZ); likang@nwu.edu.cn (KL); xin_cao@163.com (XC)

    Affiliation School of Information Science and Technology, Northwest University, Xi’an, P.R.China

  • Arjan Kuijper

    Roles Conceptualization, Investigation, Methodology, Writing – review & editing

    Affiliation Fraunhofer IGD, Darmstadt, Germany

Abstract

This paper presents an improved method for estimating the blur parameters of a motion deblurring algorithm for single-image restoration, based on the point spread function (PSF) in the frequency spectrum. We introduce a modification to the Radon transform in the blur angle estimation scheme through our proposed Difference value vs Angle curve. Subsequently, the auto-correlation matrix is employed to estimate the blur length by measuring the distance between the conjugated-correlated troughs. Finally, we evaluate the accuracy, robustness and time efficiency of the proposed method against existing algorithms on public benchmarks and natural real motion-blurred images. The experimental results demonstrate that the proposed PSF estimation scheme not only obtains higher accuracy for the blur angle and blur length, but also exhibits stronger robustness and higher time efficiency under different circumstances.

1 Introduction

Motion blur is inevitably generated by camera shake during the exposure time. As one of the main causes of image degradation, it seriously affects the performance of computer vision systems in various fields. Efficient motion deblurring technology is therefore conducive to improving the reliability of related applications, such as aerospace, medical imaging, traffic monitoring, public safety, military search, and satellite and space imaging [1–3].

Some of the latest methods adopt deep learning to predict the probabilistic distribution of motion blur and recover the degraded images [4–10]. These deep learning approaches can be classified into two categories. The first relies on multi-frame images; such methods have complex network structures and are therefore time-consuming [7, 9, 10]. The second uses only a single image to deblur the degraded image; these methods are simple in network structure and fast in training [4, 5, 11], but they still have some shortcomings. Aizenberg developed a multi-layer neural network (MLMVN) [4] to conduct blur identification; however, MLMVN concentrates almost entirely on horizontal blur [5]. Dash developed a Gabor filter and radial basis function neural network (RBFNN) [5] to estimate the blur parameters in the frequency response. Even though the Gabor filter and RBFNN work well for PSF parameter estimation, they require sufficient Gabor filter masks in various orientations to ensure accuracy [12]. The two kinds of deep learning methods are thus either time-consuming, owing to their complex training structures, or impose special requirements on the blur conditions, so neither is well suited to single-image deblurring. In addition, some researchers have proposed efficient infrastructures to increase the operational efficiency of image processing applications, e.g. partial differential equations (PDE), the multi-sink distributed power control algorithm (MSDPC-SRMS), wireless sensor networks, wireless mesh networks (WMNs), hidden Markov models (HMM) and the ALOHA protocol [13–18]. However, these infrastructures are not suitable for image deblurring, since the theory of image restoration differs from that of other image processing areas. A popular way to tackle motion deblurring of a single image is to deconvolve the blurred image with the PSF [19].

Intuitively, the motion-blurred image can usually be modeled as a convolution of the original image with the PSF, which is also called the blur kernel. In general, motion-blurred image restoration techniques are used to eliminate or minimize the impact of the PSF on the degraded image. Early research on motion deblurring mostly focused on proposing effective algorithms to invert the process of image degradation, i.e., to deconvolve the blurred image. This research can be roughly divided into two categories: blind and non-blind deblurring. Non-blind deconvolution methods perform image restoration with a predetermined PSF, i.e., the PSF is assumed to be known; examples include the non-iterative Wiener filter [20], the iterative Lucy-Richardson algorithm [21], Bayesian deconvolution [22], and their corresponding improved methods. These well-known algorithms are widely adopted in deblurring, since most of them can restore the blurred image. The core of the PSF lies in two motion parameters: the angle and the length of motion. In real-world circumstances, due to the unknown motion of camera and object, the values of blur angle and length are generally unavailable.

Blind image deconvolution is an improved solution for motion-blurred image restoration, which first estimates the two motion parameters from the blurred image and then restores the original appearance through the detected PSF. This kind of method is clearly harder than its non-blind counterpart and far more demanding in estimating the PSF, so many researchers have focused on it [4, 23–28]. To obtain an efficient iterative process, Figueiredo et al. proposed a method based on the fast Fourier transform (FFT) and the discrete wavelet transform (DWT) to conduct image restoration [24]. Aizenberg et al. proposed a multi-layer neural network based on multi-valued neurons (MLMVN) to identify both the type and the parameters of the PSF [4]. Besides, Hong et al. proposed an adaptive PSF estimation method based on anisotropic regularization to improve the precision of the blur kernels, which combines the estimated blur kernel with their proposed maximum likelihood (ML) estimation deblurring [26]. Most of these algorithms have been shown to estimate very complex blur kernels accurately and yield impressive results. However, in practical applications these methods are computationally complicated and time-consuming, since numerous equations need to be solved in the calculation process.

Beyond that, some researchers have proposed other PSF calculation algorithms that focus on estimating the blur angle and length for blurred image restoration. Oliveira and Figueiredo et al. proposed a spectrum-based method to estimate the parameters of two types of blurs (linear-uniform and out-of-focus motions) for blind image restoration [29]. They introduce two modifications to the Radon transform [30] to estimate the blur parameters, and the effectiveness of their proposed method is verified on real natural blurred images. Sun and Cho et al. introduced a patch-based strategy for blur kernel estimation; their method estimates a “trusted” subset of x by imposing a patch prior specifically tailored towards modeling the appearance of image edge and corner primitives [31]. Deshpande and Patnaik proposed a modified cepstrum-domain approach combined with a bit-plane slicing method to estimate uniform motion blur parameters [32]. Cho et al. incorporated the Radon transform into the maximum a posteriori (MAP) estimation framework to jointly estimate the blur kernel and deblur the image; this algorithm performs well on a broad variety of scenes [25]. Aiming to improve robustness on noisy images, Moghaddam and Jamzad combined the Radon transform and bi-spectrum modeling to quantify the motion blur parameters [33]. However, the restoration accuracy of this method is weak, especially for the estimation of large blur lengths. Wang et al. introduced an improved PSF parameter estimation algorithm which combines a bilateral-piecewise estimation strategy with a sub-pixel-level image generated by bilinear interpolation under different noisy situations [12]. However, this algorithm struggles to find effective solutions under non-linear and non-uniform motion blur. The Hough-transform-based method estimates the PSF parameters by means of the log spectrum of the blurred images [34]. This method has an obvious shortcoming in the choice of threshold values during image binarization, and an error in angle estimation then results in an erroneous length estimation [12]. Most existing methods still cannot produce precise results, especially when the spectrum image contains interference stripes (see the colored dashed lines in Fig 1) or the blur length is large. On the whole, the existing algorithms still cannot achieve a satisfactory balance between precision, robustness and time efficiency.

Fig 1. The interference stripes of image.

(a) The interference stripes in the original spectrum image; (b) The interference stripes after the adaptive median filter and binarization on the spectrum image.

https://doi.org/10.1371/journal.pone.0238259.g001

To address these limitations in single blind image restoration, we propose a novel Radon-transform-based blur angle estimation scheme inspired by the dark and bright stripes in the frequency domain (see also Fig 2). Building on this blur angle estimation method, we also propose an accurate auto-correlation-matrix-based approach to detect the blur length. Compared with traditional stripe-based estimation methods, the underlying techniques used in this paper are different and more advanced.

  • First, we design and conduct a set of preprocessing steps to filter the original spectrum image via the adaptive median filter [35], binarization, Sobel edge detection, etc.
  • Second, differently from previous Radon-transform-based methods [25, 36, 37], we not only detect the blur angle but also refine it with our proposed tri-Radon transform method.
  • Third, in the second and third Radon transforms, we construct a Difference value vs Angle curve to estimate the angle.
  • Fourth, during the blur angle estimation, our own minimum-based filtering algorithm is employed to decrease the influence of the interference stripes. Besides, the adaptive median filter, binarization and Sobel edge detection in the spectrum image preprocessing also help to weaken the effect of the interference stripes.
  • Fifth, we combine the first-order differential of the image and the auto-correlation matrix to estimate the blur length accurately and efficiently.
  • In addition, our proposed method can deal with images of arbitrary row and column sizes, whereas the method proposed in [12] can only handle square images.
Fig 2. Spectrum image processes.

(a) Inputting blurred image; (b) Obtaining spectrum image; (c) Filtering spectrum image; (d) Conducting binarization of the spectrum image; (e) Carrying on Sobel edge detection.

https://doi.org/10.1371/journal.pone.0238259.g002

2 General motion blur model

The degradation process of a blurred image can be approximately simulated as a linear degradation system, and the blurred image caused by the relative motion between camera and scene can be described by a two-dimensional convolution model in the linear, space-invariant case: g(x, y) = f(x, y) * h(x, y) + n(x, y), (1) where g(x, y) and f(x, y) represent the blurred and original image respectively, h(x, y) is the PSF function or blur kernel, and n(x, y) refers to the additive noise. The notation “*” represents the convolution operator.

From Eq (1), we can see that the key to deblurring is determining the PSF function h(x, y). Assuming that the scene objects move uniformly relative to the camera, we can deduce that the gray value of any point in the blurred image is related to the gray values of its corresponding adjacent points in the original image, and the PSF for motion blurring can be expressed as [12]: h(x, y) = 1/l if √(x² + y²) ≤ l/2 and y = −x tan θ, and h(x, y) = 0 otherwise, (2) where l is the length of blur, and θ represents the angle of blur.
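As a concrete illustration of the model above, the following Python sketch builds a linear motion-blur kernel for a given length l and angle θ and applies Eq (1). The rasterization of the line and the helper names are our own illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_psf(length, angle_deg, size=None):
    """Build a linear motion-blur kernel h(x, y) of the given length (pixels)
    and angle (degrees), normalized so its entries sum to 1 (Eq 2)."""
    length = max(int(round(length)), 1)
    if size is None:
        size = length if length % 2 == 1 else length + 1
    h = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Rasterize a 1-pixel-wide line of the given length through the center;
    # oversample the parameter t so the line has no gaps.
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            h[y, x] = 1.0
    return h / h.sum()

def blur(f, h, noise_sigma=0.0, rng=None):
    """g = f * h + n (Eq 1): convolve with the PSF and add Gaussian noise."""
    g = fftconvolve(f, h, mode="same")
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        g = g + rng.normal(0.0, noise_sigma, g.shape)
    return g
```

A kernel built this way can then be degraded and later re-estimated by the schemes of Sections 3 and 4.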

3 Blur angle estimation

As mentioned previously, the blur angle can be obtained by measuring the direction of the approximately linear dark stripes in the frequency spectrum. In this section, we adopt a modified Radon-transform-based method to detect the blur angle. The Radon transform in image processing can be expressed as: Rθ(x′) = ∫ f(x′ cos θ − y′ sin θ, x′ sin θ + y′ cos θ) dy′, (3) where Rθ(x′) is the value of the Radon transform projection, θ represents the angle of the Radon transform, and (x, y) and (x′, y′) refer to the coordinates of the original image and the Radon-transformed image respectively.

The Radon transform is regularly used to detect lines in an image. There are several dark stripes tilted at a certain angle in the spectrum of a motion-blurred image, and these dark stripes are parallel and symmetrical. Normally, the blur angle can be obtained by detecting the peak value of the Radon transform over these dark stripes in the spectrum image. Most Radon-transform-based methods do not process the spectrum image before using the Radon transform to detect the blur angle. In practical applications, however, the light and dark stripes in the spectrum image are often ambiguous, especially for noisy blurred images; this leads to large errors in the blur angle estimate, which in turn affects the detection of the blur length and the deblurring of the degraded image in the subsequent phases of the algorithm. To overcome this problem, our blur angle estimation is based on diversified processing of the spectrum image.
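A minimal sketch of direction detection with a Radon-style projection, assuming NumPy/SciPy. The rotate-and-sum projection and the peak-to-mean contrast score are crude illustrative stand-ins for the projection analysis developed in Section 3.2, not the paper's actual criterion.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_projection(img, theta_deg):
    """R_theta(x'): rotate the image by theta and sum along rows,
    i.e. project the image onto the axis perpendicular to theta."""
    r = rotate(img, theta_deg, reshape=False, order=1)
    return r.sum(axis=0)

def estimate_angle(img, angles):
    """Pick the projection angle whose profile shows the sharpest peak
    (largest peak-to-mean contrast), i.e. where the stripes line up."""
    best, best_score = angles[0], -np.inf
    for a in angles:
        p = radon_projection(img, a)
        score = p.max() - p.mean()
        if score > best_score:
            best, best_score = a, score
    return best
```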

3.1 The spectrum processing

The outline of our spectrum image processing is presented in Fig 2. Here, we take cameraman.tif with a blur angle of 45° as a sample of the process. As shown in Fig 2(b), before our method starts, the spectrum image is obtained by the Fourier transform of the blurred image.

Due to the influence of noise, the camera and other image acquisition equipment, there are many interference stripes in the spectrum image, such as the dashed lines marked with red, sky-blue, yellow, dark-blue and green colors in Fig 1(a) and 1(b). To eliminate the disturbance of the interference stripes, we adopt the adaptive median filter, binarization and our minimum-based filtering algorithm. The adaptive median filter and binarization are carried out on the spectrum image in this section, and the details of the minimum-based filtering algorithm are presented in Section 3.2. According to the above analysis, we first apply the adaptive median filter to the spectrum image to eliminate the interference from noise (see also Fig 2(c)). Based on our experimental experience, an appropriate size for the adaptive median filter is 3. After that, in the process of Fig 2(d), we binarize the filtered spectrum image with a threshold τb.

Once the binarization result of the filtered spectrum image is obtained, we decide whether to conduct edge detection on the spectrum image according to a threshold τe, with the Sobel operator adopted as the edge detection algorithm (see also Fig 2(e)). Based on our overall experimental experience, suitable values for the thresholds τb and τe are 0.35 and 0.47 respectively.

Then the Radon transform is applied to the binarized spectrum image to detect the blur angle.
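The preprocessing chain of Fig 2 can be sketched in Python with SciPy. The τb and τe values follow the text, but exactly how τe gates the edge-detection step is not spelled out here, so the white-pixel-fraction criterion below is purely an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def preprocess_spectrum(blurred, tau_b=0.35, tau_e=0.47):
    """Fig 2 pipeline sketch: log-magnitude spectrum -> 3x3 median filter
    -> normalize to [0, 1] -> binarize at tau_b -> optionally Sobel edges."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    spec = median_filter(spec, size=3)                 # suppress noisy speckle
    spec = (spec - spec.min()) / (np.ptp(spec) + 1e-12)
    binary = (spec > tau_b).astype(float)
    # Illustrative gate: run edge detection only when the binary map is dense.
    if binary.mean() > tau_e:
        gx, gy = sobel(binary, axis=1), sobel(binary, axis=0)
        binary = (np.hypot(gx, gy) > 0).astype(float)
    return binary
```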

3.2 Tri-Radon transforms

When the binary spectrum image is obtained, the next step is to estimate the blur angle by the Radon transform. In the whole algorithm, we apply the Radon transform projection three times in total to detect the blur angle, which we name the “Tri-Radon” transforms.

In the first Radon transform process, we first detect the blur angle roughly from the two-dimensional color map of the Radon transform projection (see also Fig 3(a)). In this color map, the horizontal coordinate θ (from 0° to 180°) represents the angle of the Radon transform projection; the vertical coordinate x′ (from -250 to 250) refers to the signed distance from the origin of the binarized spectrum to the projection line perpendicular to the direction at angle θ; and the color bar at the right of the map encodes the value of the Radon transform projection Rθ(x′). For better understanding, take the coordinate value (45°, -27, 238.7) in the color map as an example, where 45° is the horizontal coordinate θ, -27 is the vertical coordinate x′, and 238.7 is the value of the Radon transform projection Rθ(x′). This means that, for the straight line A passing through the origin at a slope of 45°, there exists a straight line B perpendicular to A at a distance of 27 from the origin in the binarized spectrum, and the value of the Radon transform projection at this position is 238.7. With this reading of the color map, we can observe many local maximum and minimum values of the Radon transform projection along the lines with a slope of 45° in Fig 3(a), especially in the neighborhood of the origin (vertical coordinate 0) of the spectrum image. From this observation, it can be judged that the blur angle θpsf is approximately θ1: θpsf ≈ θ1 = 45°. (4)

Fig 3. Blur angle detection by the tri-Radon transforms.

(a) The two-dimensional color map of Radon transform projection for spectrum image (first Radon transform); (b) The second rough detection of blur angle (second Radon transform); (c) The affected binarization spectrum with interference stripes in the directions of 2° and 92°. (d) The final precise detection of blur angle (third Radon transform).

https://doi.org/10.1371/journal.pone.0238259.g003

After the rough estimation of blur angle by the two-dimensional color map of Radon transform projection for spectrum image, the next step is to conduct the blur angle detection through the second Radon transform.

As shown in Fig 3(b), in the second Radon transform projection for the binary spectrum image, we set the projection angles θ2p from 0° to 180° with a step width of 1°. After the Radon transform projection value for each angle is obtained, we first calculate the summations of the local maxima and local minima respectively, and then the difference between these two summations: Dθ = Σm Maxθm − Σn Minθn, (5) where Maxθm and Minθn represent the local maxima and local minima of the Radon transform projection on the spectrum image at angle θ, and Dθ is the difference between the summation of local maxima and the summation of local minima. From the data of Dθ and θ, the Difference value vs Angle curve can be obtained (see also Fig 3(b) and 3(d)). Through this curve, we obtain a rough estimate of the blur angle θ2 by searching for the angle corresponding to the curve’s largest local maximum: θ2 = arg maxθ Dθ. (6)
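Eq (5) can be computed directly from one projection profile. A minimal sketch, assuming a 1-D NumPy array of projection values and strict local extrema:

```python
import numpy as np

def difference_value(projection):
    """D_theta (Eq 5): sum of the local maxima of one Radon projection
    minus the sum of its local minima (strict interior extrema)."""
    p = np.asarray(projection, dtype=float)
    interior = p[1:-1]
    maxima = interior[(interior > p[:-2]) & (interior > p[2:])]
    minima = interior[(interior < p[:-2]) & (interior < p[2:])]
    return maxima.sum() - minima.sum()
```

Evaluating this for every projection angle yields the Difference value vs Angle curve.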

From the experimental results, we find that in all the Difference value vs Angle curves two additional angles (2°, 92°) are detected besides the true blur angle (46°), since the binarized spectrum is affected by the two tiny stripes in the directions of 2° (marked with a red arrow) and 92° (tagged with a green arrow) in Fig 3(c). Across all experiments’ Difference value vs Angle curves, we observe that there is a local minimum of Dθ in the angular neighborhood of each false positive (FP) result: Dθ′ is a local minimum for some θ′ with |θ′ − θFP| ≤ Δθ. (7)

So, to eliminate the interference from the noisy stripes, we adopt our proposed minimum-based filtering algorithm as follows:

  1. First, we search for the largest local maxima (more than three) of Dθ in the Difference value vs Angle curve over angles from 0° to 180°;
  2. Next, we judge whether there is a local minimum in the angular neighborhood of each maximum’s corresponding angle, and delete the maxima that have such a minimum nearby;
  3. The remaining maximum’s corresponding angle is identified as the blur angle θ2 of the second estimation.
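The three steps above might be sketched as follows; the number of candidate maxima (k) and the neighborhood width are illustrative parameters, not values specified in the paper.

```python
import numpy as np

def minimum_based_filter(D, k=4, neighborhood=3):
    """Among the k largest local maxima of the D_theta curve, discard any
    whose angular neighborhood also contains a local minimum (the signature
    of an interference stripe), and return the angle (index) of the best
    surviving maximum."""
    D = np.asarray(D, dtype=float)
    n = len(D)
    is_max = [0 < i < n - 1 and D[i] > D[i-1] and D[i] > D[i+1] for i in range(n)]
    is_min = [0 < i < n - 1 and D[i] < D[i-1] and D[i] < D[i+1] for i in range(n)]
    maxima = sorted([i for i in range(n) if is_max[i]], key=lambda i: -D[i])[:k]
    survivors = []
    for i in maxima:
        lo, hi = max(0, i - neighborhood), min(n, i + neighborhood + 1)
        if not any(is_min[j] for j in range(lo, hi) if j != i):
            survivors.append(i)
    candidates = survivors or maxima   # fall back if everything was culled
    return max(candidates, key=lambda i: D[i])
```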

Based on the second rough blur angle estimation result, we conduct the third Radon transform projection for the binary spectrum image. This time the projection angles are set from θ2 − 3° to θ2 + 3°, and the step width is set to 0.5° (see also Fig 3(d)). Then we repeat the steps of the second blur angle estimation. Through the Difference value vs Angle curve of the third Radon transform projection, we get the precise blur angle estimate θ3, which we take as the ultimate blur angle: θpsf = θ3. (8)

All the above blur angle estimations assume a square image; if the sizes of the image’s rows and columns are not consistent, the blur angle can be obtained as follows: (9) where Θ is the angle estimated from the rectangular image, θ represents the blur angle, and M and N refer to the sizes of the image’s rows and columns respectively.

4 Blur length estimation

The precondition of our blur length estimation is to first measure the blur angle and then rotate the blurred image clockwise by the detected blur angle (see also Fig 4(a)), so that the motion becomes horizontal and we obtain the rotated blurred image Z(x, y) in Fig 4(b). First, the first-order differential of Z(x, y) in the horizontal direction is given by: Z′(x, y) = Z(x, y) − Z(x, y − 1), (10) where Z′(x, y) is the first-order differential of Z(x, y). Then we calculate the auto-correlation matrix of Z′(x, y) in the horizontal direction as follows: S(x, y) = Σj Z′(x, j) Z′(x, j + y), x = 1, 2, …, M, (11) where M is the number of rows of Z′(x, y), which equals the number of rows of the image Z(x, y), and S(x, y) refers to the auto-correlation matrix of Z′(x, y) in the horizontal direction. From the auto-correlation matrix S(x, y), we calculate the summation of each column’s elements: S′(y) = Σx=1..M S(x, y), (12) where the size of S′(y) is 1 × N. The goal of computing S′(y) is to reduce the influence of noise and improve the accuracy of the blur length estimation. The blur length can be estimated through the S′(y) vs y curve in Fig 4(c) and 4(d), where the horizontal coordinate is y and the vertical coordinate is the corresponding S′(y). We search for a pair of conjugated-correlated troughs (tagged with green arrows) on the right and left sides of the central crest (marked with a red arrow) in the S′(y) vs y curve; half of the distance D between these two conjugated-correlated troughs is the estimated blur length l. Taking an experimental result as a sample in Fig 4(d), the horizontal values of the two conjugated-correlated troughs are 339 and 369, and the distance D between them is 30, so the estimated blur length is l = D/2 = 15 pixels. In summary, our blur length estimation proceeds as follows:

  1. Rotate the blurred image g(x, y) clockwise to the horizontal motion direction by the detected blur angle, obtaining the rotated blurred image Z(x, y).
  2. Calculate the first-order differential Z′(x, y) of Z(x, y) in the horizontal direction.
  3. Calculate the auto-correlation matrix S(x, y) of Z′(x, y).
  4. Compute the summation of each column’s elements in S(x, y) to get S′(y).
  5. Estimate the blur length by searching for a pair of conjugated-correlated troughs in the S′(y) vs y curve.
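The five steps can be sketched end-to-end in Python/SciPy. The per-row FFT-based circular autocorrelation is our stand-in for Eqs (10)–(12), the rotation sign convention is an assumption, and the trough search simply takes the minimum on each side of the central crest:

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_blur_length(blurred, angle_deg):
    """Sketch of the Section 4 pipeline: rotate the blur to horizontal,
    take the horizontal first difference, average its row-wise circular
    autocorrelation into S'(y), and read the blur length as half the
    distance between the two troughs flanking the central peak."""
    Z = rotate(blurred, angle_deg, reshape=False, order=1)   # step 1
    dZ = np.diff(Z, axis=1)                                  # step 2
    F = np.fft.fft(dZ, axis=1)                               # steps 3-4: row-wise
    acf = np.real(np.fft.ifft(F * np.conj(F), axis=1))       # autocorrelation via FFT,
    S = np.fft.fftshift(acf, axes=(1,)).sum(axis=0)          # summed over rows -> S'(y)
    center = len(S) // 2                                     # zero-lag position
    left = np.argmin(S[:center])                             # trough left of the crest
    right = center + np.argmin(S[center:])                   # trough right of the crest
    return (right - left) / 2.0                              # step 5: half the distance
```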
Fig 4. Blur length estimation processes.

(a) The original blurred image; (b) The rotated blurred image Z(x, y); (c) The S′(y) vs y curve; (d) The enlarged version of S′(y) vs y curve.

https://doi.org/10.1371/journal.pone.0238259.g004

With the blur parameter estimation algorithms described in Sections 3 and 4, we can construct the PSF function by Eq (2), and then restore the blurred image with a non-blind filtering method.
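As one concrete non-blind choice, a frequency-domain Wiener filter can be sketched as follows; this is a stand-in for the regularized filter used in the experiments, and the constant K (an approximate inverse SNR) is an illustrative parameter.

```python
import numpy as np

def wiener_deconvolve(g, h, K=0.01):
    """Minimal frequency-domain Wiener filter:
    F_hat = conj(H) / (|H|^2 + K) * G, then inverse FFT."""
    H = np.fft.fft2(h, s=g.shape)      # PSF zero-padded to the image size
    G = np.fft.fft2(g)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))
```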

5 Experiments

In this section we conduct a series of comparisons between our approach and the state-of-the-art methods: Moghaddam’s [33], the traditional Radon-transform-based method [25], a deep learning approach (RBFNN) [5] and Wang’s [12]. A total of 18,625 images, including the VOC2012 dataset [38] and real-life images, are used to evaluate our approach in Sections 5.1, 5.2 and 5.3.

5.1 Performance of the parameters estimation approaches

In this subsection, we evaluate our proposed parameter estimation approaches on degraded images with various blur angles and lengths, generated by varying the blur angle from 0° to 180° and the blur length from 0 to 100 pixels respectively.

First of all, the absolute errors of the two blur parameter estimates for the five methods are shown in Table 1. From this table, we can make several observations:

  1. Throughout the experiments, all methods achieve almost zero minimum absolute error for the blur angle θ and blur length L.
  2. In all cases, Wang’s approach and our proposed method are clearly superior to the first three methods. Moghaddam’s approach shows the worst performance for the max absolute errors of θ and L among the five methods, but it produces a mean absolute error for θ comparable to the traditional Radon-based and Dash’s methods.
  3. In sharp contrast, our method achieves the best estimation accuracy for both θ and L, with the smallest mean absolute errors of 0.11° and 0.14 pixels. Moreover, the max absolute errors of these two parameters produced by our approach are less than 0.22° and 0.28 pixels respectively, significantly smaller than those generated by the other four approaches.
Table 1. Estimation accuracy comparison between five methods for two blur parameters.

https://doi.org/10.1371/journal.pone.0238259.t001

Besides, by varying L from 0 to 100 pixels with a step of 5 pixels at blur angles of 0°, 30°, 45°, 60° and 90° respectively, the absolute error with respect to the blur length L is observed; the experimental results are shown in Fig 5. The absolute error curve fluctuates with low values when L is small, and the error then increases slowly once L exceeds 30 pixels. This can be attributed to the interference stripes generated in the spectrum image when the blur length is large (see also Figs 1 and 3(c)). Under these circumstances, the estimation error of the distance between a pair of conjugated-correlated troughs in Fig 4 may increase. Furthermore, our proposed blur length estimation method achieves its best performance of nearly 0 pixels absolute error with L = 0 pixels & θ = {0°, 30°, 45°, 60°, 90°}, L = 5 pixels & θ = {45°, 90°}, L = 10 pixels & θ = 0°, and L = 20 pixels & θ = 90°. In all instances, the absolute error of L is less than 0.28 pixels no matter how L and θ vary in the range (0 ≤ L ≤ 100, θ = {0°, 30°, 45°, 60°, 90°}), which shows very good performance.

Fig 5. Absolute error of blur length L from our proposed method.

https://doi.org/10.1371/journal.pone.0238259.g005

Similar to the experiment for the blur length L, the absolute error of the blur angle is observed. Moghaddam’s [33], the traditional Radon-based [25], Dash’s [5] and Wang’s [12] methods are adopted in the blur angle estimation experiments to evaluate and validate the superiority of our proposed method. Meanwhile, Gaussian noise with standard deviation σ = 0.0001 is also added to the degraded images to demonstrate the robustness of our method. The experimental results are presented in Fig 6, from which we can see that our approach achieves the best accuracy among the five methods. Besides, we also conduct additional Friedman and Nemenyi tests on the blur angle’s absolute errors to compare our method with these algorithms; the critical difference diagrams generated by the Friedman and Nemenyi tests are presented in Fig 7, from which we can observe that our proposed method obtains the best results on blur angle accuracy. On the one hand, the adaptive median filter, binarization and Sobel edge detection weaken the effect of the interference stripes in Figs 1 and 3(c); on the other hand, our own minimum-based filtering algorithm further decreases their influence. These two measures both contribute to the smaller absolute error compared with the other four methods.

Fig 7. Friedman and Nemenyi tests of blur angle θ’s absolute errors.

https://doi.org/10.1371/journal.pone.0238259.g007

All these observations confirm the high estimation accuracy of our proposed method for the blur parameters.

5.2 Performance of deblurring on noise-free images

The previous subsection shows that our blur parameter estimation approach obtains good results under different circumstances. In this subsection, based on the PSF constructed with the θ and L estimated in Section 5.1, we use a regularized filter to deblur the degraded images, so as to demonstrate intuitively that our method yields better deblurring performance than the state-of-the-art approaches. For this purpose, we carry out experiments on the VOC2012 dataset [38] and real-life blurred images, and compare our method against Moghaddam’s [33], the traditional Radon-based [25], Dash’s [5] and Wang’s [12] methods.

The degraded images and restoration results produced by the five approaches on the VOC2012 dataset are illustrated in Fig 8. From these figures we can observe that our method obtains the most satisfactory results and demonstrates good robustness under different circumstances. In all the experiments, Moghaddam’s, the traditional Radon-based and Dash’s methods fail to completely remove the blurring effects from the deblurred images. Wang’s approach can only obtain acceptable results in the first three rows of images.

Fig 8. Samples of deblurring results with different methods on VOC2012 dataset.

(a) degraded image, (b) Moghaddam’s results, (c) Traditional-Radon-based results, (d) RBFNN’s results, (e) Wang’s results, (f) our results.

https://doi.org/10.1371/journal.pone.0238259.g008

Moreover, we adopt the peak signal-to-noise ratio (PSNR) and the running time as evaluation criteria to quantitatively assess all methods’ performance. PSNR is defined as: PSNR = 10 log10(255² / MSE), (13) MSE = (1/(MN)) Σx Σy [f(x, y) − f̂(x, y)]², (14) where MSE is the mean square error, f(x, y) and f̂(x, y) are the intensities of pixel (x, y) in the image before and after motion deblurring respectively, and M and N represent the size of the image. Besides, in order to make the experimental comparison more accurate, we also use the structural similarity index (SSIM) [39] to evaluate the deblurring quality of each method: SSIM = [(2 μf μf̂ + C1)(2 σf f̂ + C2)] / [(μf² + μf̂² + C1)(σf² + σf̂² + C2)], (15) where μf, μf̂, σf, σf̂ and σf f̂ represent the local means, standard deviations and cross-covariance of the images f and f̂ respectively, and C1 and C2 are two constants to avoid division by zero. Usually, the larger the PSNR and SSIM, the higher the restoration quality; an optimal PSNR is infinity, and an ideal SSIM has value 1.
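The metrics of Eqs (13)–(15) can be sketched directly in Python. Note that the SSIM below is computed over the whole image in a single window for brevity, whereas the cited index [39] averages over local windows.

```python
import numpy as np

def psnr(f, f_hat, peak=255.0):
    """PSNR = 10*log10(peak^2 / MSE), Eqs (13)-(14)."""
    mse = np.mean((np.asarray(f, float) - np.asarray(f_hat, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(f, f_hat, peak=255.0):
    """Single-window (global) SSIM, Eq (15), with the conventional
    stabilizers C1 = (0.01*peak)^2 and C2 = (0.03*peak)^2."""
    f, f_hat = np.asarray(f, float), np.asarray(f_hat, float)
    C1, C2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_f, mu_g = f.mean(), f_hat.mean()
    var_f, var_g = f.var(), f_hat.var()
    cov = ((f - mu_f) * (f_hat - mu_g)).mean()
    return ((2 * mu_f * mu_g + C1) * (2 * cov + C2)) / \
           ((mu_f ** 2 + mu_g ** 2 + C1) * (var_f + var_g + C2))
```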

We experiment on the VOC2012 database to calculate each method’s PSNR, SSIM and running time; the comparison on sample pictures and the overall results are presented in Table 2. We can see clearly that our method works well on almost all sample pictures, obtaining the largest PSNR on every image except 2008_007430.jpg, which is approximately consistent with the results in Fig 8. Only on 2008_007430.jpg does Wang’s approach hold a slight advantage over ours. Compared with Wang’s approach and ours, the traditional Radon-based, Moghaddam’s and Dash’s (RBFNN) methods are very time-consuming; moreover, the computational efficiency of our proposed method is higher than that of the other four approaches. In addition, we generate critical difference diagrams from Friedman and Nemenyi tests for the PSNR, SSIM and running time of the five methods in the VOC2012 noise-free setting in Fig 9. It can be observed clearly that our method differs significantly from the other four methods and again obtains the best results in this setting. In total, our method performs best among the five approaches, achieving the best restoration quality and computational efficiency on 95.91% of the VOC2012 database.
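The statistical comparison behind Fig 9 can be reproduced along the following lines. The scores below are hypothetical stand-ins (the real inputs are the per-image measurements of Table 2), and the Nemenyi critical value q_0.05 = 2.728 is the standard tabulated constant for comparing k = 5 methods.

```python
import numpy as np
from scipy import stats

# Hypothetical per-image PSNR scores: rows = test images,
# columns = the five compared methods.
rng = np.random.default_rng(0)
scores = rng.normal(loc=[20.0, 21.0, 22.0, 24.0, 26.0],
                    scale=1.0, size=(30, 5))

# Friedman test: do the methods' per-image ranks differ significantly?
stat, p_value = stats.friedmanchisquare(*scores.T)

# Mean rank of each method (rank 1 = best PSNR on that image).
ranks = np.apply_along_axis(stats.rankdata, 1, -scores)
mean_ranks = ranks.mean(axis=0)

# Nemenyi critical difference: two methods differ at alpha = 0.05 if
# their mean ranks differ by more than CD.
k, N = scores.shape[1], scores.shape[0]
cd = 2.728 * np.sqrt(k * (k + 1) / (6.0 * N))
```

A critical difference diagram then plots the mean ranks on an axis and joins the methods whose rank difference is below `cd`.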

thumbnail
Table 2. PSNR (dB) and SSIM of the deblurred images, and the algorithms’ running time (in seconds) on VOC2012 database.

https://doi.org/10.1371/journal.pone.0238259.t002

thumbnail
Fig 9. Friedman and Nemenyi tests of PSNR, SSIM and time on VOC2012 noise-free circumstances.

https://doi.org/10.1371/journal.pone.0238259.g009

Additionally, we conduct experiments for all methods on real captured blurred images; the results are visualized in Fig 10 (car, scenery and bridge images). The car and scenery images were obtained with a hand-held camera, and the bridge image was captured from a moving car on the motorway. Based on the detected blur angle and blur length, blind deconvolution is adopted to filter and deblur the degraded images. Fig 10 exhibits the same superiority of our proposed method as observed in Fig 8 and Table 2. In particular, in the enlarged bridge images in the last row of Fig 10, the five Chinese characters on the horizontal beam of the bridge are clearly legible only in our result.

thumbnail
Fig 10. Deblurring results with different methods on real natural blurred images.

(a) degraded image, (b) Moghaddam’s results, (c) Traditional-Radon-based results, (d) Dash’s results, (e) Wang’s results, (f) our results.

https://doi.org/10.1371/journal.pone.0238259.g010

5.3 Performance of deblurring in noisy circumstances

Apart from the parameter estimation tests and the experiments on noise-free images, we also conduct a set of experiments on noisy images and measure the PSNR and SSIM of the deblurred pictures to validate the robustness of our method. As in Section 5.2, the images of the VOC2012 dataset are blurred with different PSFs, and additive Gaussian noise with standard deviation σ = 0.0001 is added to produce noisy-blurred images (see Fig 12(a)). For a fair comparison, blind deconvolution with the PSFs detected by each method is used to deblur the degraded images. The visual results for the five methods are shown in Fig 12, and the PSNR and SSIM of the deblurred images are presented in Table 3 and Fig 11. From Fig 12, it can be observed that the images deblurred with Wang’s method and ours exhibit better results than the other three methods. Furthermore, Table 3 and Fig 11 show that our method yields the largest PSNR and SSIM on the VOC2012 database, ranking first on 96.76% of the dataset. These results demonstrate that the robustness of our method in noisy circumstances is superior to that of the other four methods.
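The noisy-blurred test images described above can be synthesized as a sketch of the degradation model g = h * f + n; the σ = 0.0001 default matches the noise level stated for images normalized to [0, 1], while the 3×3 box PSF in the usage example is an arbitrary placeholder for the motion PSFs actually used.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(image, psf, sigma=1e-4, seed=0):
    """Blur a [0, 1] image with the given PSF, then add zero-mean
    Gaussian noise of standard deviation sigma and clip to [0, 1]."""
    blurred = fftconvolve(image, psf, mode='same')
    noise = np.random.default_rng(seed).normal(0.0, sigma, image.shape)
    return np.clip(blurred + noise, 0.0, 1.0)
```

At σ = 0.0001 the noise is visually negligible yet still destabilizes naive inverse filtering, which is why the regularized/blind deconvolution comparison above is informative.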

thumbnail
Table 3. PSNR (dB) and SSIM of the deblurred images in noisy circumstances on VOC2012 database.

https://doi.org/10.1371/journal.pone.0238259.t003

thumbnail
Fig 11. Friedman and Nemenyi tests of PSNR and SSIM on VOC2012 noise circumstances.

https://doi.org/10.1371/journal.pone.0238259.g011

thumbnail
Fig 12. Deblurring results with different methods on VOC2012 dataset in noisy circumstances.

(a) degraded image, (b) Moghaddam’s results, (c) Traditional-Radon-based results, (d) Dash’s results, (e) Wang’s results, (f) our results.

https://doi.org/10.1371/journal.pone.0238259.g012

5.4 Summary: Experiments on our method

5.4.1 High precision of blur parameters.

Throughout all experiments, our method achieves the best blur-parameter estimation accuracy for both blur angle and blur length, with the lowest mean absolute errors of 0.11° and 0.14 pixels respectively.

5.4.2 High quality of restoration.

In deblurring noise-free images, we obtain the best results in terms of computational efficiency and deblurring quality on 95.91% of the VOC2012 database, whereas on noisy images we achieve the best results on 96.76% of the database.

All in all, the proposed method estimates the blur parameters accurately and effectively eliminates the influence of linear motion throughout the experiments.

6 Conclusions

In this research, we proposed an improved point spread function estimation approach for blind motion deblurring from the spectrum of a single image, which is altered by motion blur, noise and interference stripes. To address these problems in single-image blind restoration, we proposed a novel tri-Radon-transform blur-angle estimation scheme inspired by the dark and bright stripes in the frequency domain (spectrum). Building on this blur-angle estimate, the first-order differential of the image and the auto-correlation matrix were combined to detect the blur length accurately. We performed a set of experiments on the VOC2012 dataset and on naturally motion-blurred images, comparing our proposed method against several state-of-the-art methods through qualitative and quantitative assessments. The results show that our method performs best across different blur parameters and produces satisfying deblurred images, achieving impressive accuracies for both blur angle and blur length, with errors of only 0.11° and 0.14 pixels respectively.

As the motion between the camera and the scene is almost never perfectly linear in practical applications, it is worth exploring an effective solution to the nonlinear and nonuniform motion blur problem. In future research we plan to extend our method to nonlinear and nonuniform motion-blurred images and to other datasets.

References

  1. Kim MD, Ueda J. Real-time image de-blurring and image processing for a robotic vision system. In: 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2015. p. 1899–1904.
  2. Liu G, Huang TZ, Liu J, Lv XG. Total variation with overlapping group sparsity for image deblurring under impulse noise. PLoS ONE. 2015;10(4).
  3. Zeng Y, Lan J, Ran B, Wang Q, Gao J. Restoration of motion-blurred image based on border deformation detection: A traffic sign restoration model. PLoS ONE. 2015;10(4).
  4. Aizenberg I, Paliy DV, Zurada JM, Astola JT. Blur identification by multilayer neural network based on multivalued neurons. IEEE Transactions on Neural Networks. 2008;19(5):883–898.
  5. Dash R, Majhi B. Motion blur parameters estimation for image restoration. Optik - International Journal for Light and Electron Optics. 2014;125(5):1634–1640.
  6. Gong D, Yang J, Liu L, Zhang Y, Reid I, Shen C, et al. From motion blur to motion flow: a deep learning solution for removing heterogeneous motion blur. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 2319–2328.
  7. Sun J, Cao W, Xu Z, Ponce J. Learning a convolutional neural network for non-uniform motion blur removal. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2015. p. 769–777.
  8. Ma Z, Liao R, Tao X, Xu L, Jia J, Wu E. Handling motion blur in multi-frame super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2015. p. 5224–5232.
  9. Ma B, Huang L, Shen J, Shao L, Yang MH, Porikli F. Visual tracking under motion blur. IEEE Transactions on Image Processing. 2016;25(12):5867–5876.
  10. Gast J, Sellent A, Roth S. Parametric object motion from blur. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016. p. 1846–1854.
  11. Wang M, Zhou S, Yan W. Blurred image restoration using knife-edge function and optimal window Wiener filtering. PLoS ONE. 2018;13(1).
  12. Wang Z, Yao Z, Wang Q. Improved scheme of estimating motion blur parameters for image restoration. Digital Signal Processing. 2017;65:11–18.
  13. Wei W, Zhou B, Polap D, Woźniak M. A regional adaptive variational PDE model for computed tomography image reconstruction. Pattern Recognition. 2019;92:64–81.
  14. Wei W, Xia X, Wozniak M, Fan X, Damaševičius R, Li Y. Multi-sink distributed power control algorithm for Cyber-physical-systems in coal mine tunnels. Computer Networks. 2019;161:210–219.
  15. Wei W, Song H, Li W, Shen P, Vasilakos A. Gradient-driven parking navigation using a continuous information potential field based on wireless sensor network. Information Sciences. 2017;408:100–114.
  16. Wei W, Xu Q, Wang L, Hei X, Shen P, Shi W, et al. GI/Geom/1 queue based on communication model for mesh networks. International Journal of Communication Systems. 2014;27(11):3013–3029.
  17. Wei W, Fan X, Song H, Fan X, Yang J. Imperfect information dynamic Stackelberg game based resource allocation using hidden Markov for cloud computing. IEEE Transactions on Services Computing. 2016;11(1):78–89.
  18. Wei W, Su J, Song H, Wang H, Fan X. CDMA-based anti-collision algorithm for EPC global C1 Gen2 systems. Telecommunication Systems. 2018;67(1):63–71.
  19. Yang H, Su X, Chen S, Zhu W, Ju C. Efficient learning-based blur removal method based on sparse optimization for image restoration. PLoS ONE. 2020;15(3):e0230619.
  20. Banham MR, Katsaggelos AK. Digital image restoration. IEEE Signal Processing Magazine. 1997;14(2):24–41.
  21. Fergus R, Singh B, Hertzmann A, Roweis ST, Freeman WT. Removing camera shake from a single photograph. In: ACM Transactions on Graphics (TOG). vol. 25. ACM; 2006. p. 787–794.
  22. Babacan SD, Wang J, Molina R, Katsaggelos AK. Bayesian blind deconvolution from differently exposed image pairs. IEEE Transactions on Image Processing. 2010;19(11):2874–2888.
  23. Li H, Qiu T, Luan S, Song H, Wu L. Deblurring traffic sign images based on exemplars. PLoS ONE. 2018;13(3).
  24. Figueiredo MA, Nowak RD. An EM algorithm for wavelet-based image restoration. IEEE Transactions on Image Processing. 2003;12(8):906–916.
  25. Cho TS, Paris S, Horn BK, Freeman WT. Blur kernel estimation using the Radon transform. In: CVPR 2011. IEEE; 2011. p. 241–248.
  26. Hong H, Park IK. Single-image motion deblurring using adaptive anisotropic regularization. Optical Engineering. 2010;49(9):097008.
  27. Cho S, Lee S. Fast motion deblurring. ACM Transactions on Graphics (TOG). 2009;28(5):145.
  28. Shan Q, Jia J, Agarwala A. High-quality motion deblurring from a single image. ACM Transactions on Graphics (TOG). 2008;27(3):73.
  29. Oliveira JP, Figueiredo MAT, Bioucas-Dias JM. Parametric blur estimation for blind restoration of natural images: linear motion and out-of-focus. IEEE Transactions on Image Processing. 2014;23(1):466–477.
  30. Deans SR. The Radon transform and some of its applications. Courier Corporation; 2007.
  31. Sun L, Cho S, Wang J, Hays J. Edge-based blur kernel estimation using patch priors. In: IEEE International Conference on Computational Photography. IEEE; 2013.
  32. Deshpande AM, Patnaik S. A novel modified cepstral based technique for blind estimation of motion blur. Optik - International Journal for Light and Electron Optics. 2014;125(2):606–615.
  33. Moghaddam ME, Jamzad M. Motion blur identification in noisy images using mathematical models and statistical measures. Pattern Recognition. 2007;40(7):1946–1957.
  34. Sakano M, Suetake N, Uchino E. A PSF estimation based on Hough transform concerning gradient vector for noisy and motion blurred images. IEICE Transactions on Information and Systems. 2007;90(1):182–190.
  35. Hwang H, Haddad RA. Adaptive median filters: new algorithms and results. IEEE Transactions on Image Processing. 1995;4(4):499–502.
  36. Krahmer F, Lin Y, McAdoo B, Ott K, Wang J, Widemann D, et al. Blind image deconvolution: Motion blur estimation. 2006.
  37. Moghaddam ME, Jamzad M. Motion blur identification in noisy images using fuzzy sets. In: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005. IEEE; 2005. p. 862–866.
  38. Everingham M, Van Gool L, Williams CKI, Winn J, Zisserman A. Visual Object Classes Challenge 2012 Dataset (VOC2012); 2012. http://host.robots.ox.ac.uk/pascal/VOC/voc2012/.
  39. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP, et al. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing. 2004;13(4):600–612. pmid:15376593