Improved estimation of motion blur parameters for restoration from a single image

This paper presents an improved method for estimating the blur parameters of a motion deblurring algorithm for single-image restoration, based on the point spread function (PSF) in the frequency spectrum. We first introduce a modification to the Radon transform in the blur angle estimation scheme using our proposed Difference value vs Angle curve. Subsequently, the auto-correlation matrix is employed to estimate the blur length by measuring the distance between a pair of conjugated-correlated troughs. Finally, we evaluate the accuracy, robustness and time efficiency of our proposed method against existing algorithms on public benchmarks and natural real motion-blurred images. The experimental results demonstrate that the proposed PSF estimation scheme not only obtains higher accuracy for the blur angle and blur length, but also exhibits stronger robustness and higher time efficiency under different circumstances.


Introduction
Motion blurring is inevitably generated by camera shake during exposure. As one of the main causes of image degradation, it seriously affects the performance of computer vision systems in various fields. Efficient motion deblurring technology is therefore conducive to improving the reliability of related applications, such as aerospace, medical imaging, traffic monitoring, public safety, military search, and satellite and space imaging [1][2][3].
Some of the latest methods adopt deep learning to predict the probabilistic distribution of motion blur and recover the degraded images [4][5][6][7][8][9][10]. These deep learning approaches can be classified into two categories. The first kind relies on multi-frame images; such methods have complex network structures and are therefore time-consuming [7,9,10]. The second kind uses only a single image to deblur the degraded image; these methods have simple network structures and train quickly [4,5,11], but they still have shortcomings. Aizenberg developed a multi-layer neural network (MLMVN) [4] for blur identification; however, MLMVN concentrates almost entirely on horizontal blur [5]. Dash developed a Gabor filter and radial basis function neural network (RBFNN) [5] to estimate the blur parameters from the frequency response. Even though the Gabor filter and RBFNN work well for estimating PSF parameters, they require sufficient Gabor filter masks in various orientations to ensure accuracy [12]. Thus the two kinds of deep learning methods are either time-consuming, owing to their complex training structures, or impose special requirements on the blur conditions; neither is well suited to single-image deblurring. In addition, some researchers have proposed efficient infrastructures to increase the operational efficiency of image processing applications, e.g. partial differential equations (PDE), the multi-sink distributed power control algorithm (MSDPC-SRMS), wireless sensor networks, wireless mesh networks (WMNs), hidden Markov models (HMM) and the ALOHA protocol [13][14][15][16][17][18]. However, these infrastructures are not suitable for image deblurring, since the theory of image restoration differs from other image processing areas. A popular way to tackle motion deblurring of a single image is to deconvolve the blurred image with the PSF [19].
Intuitively, a motion blurred image can usually be modeled as a convolution of the original image with the PSF, also called the blur kernel. In general, motion blurred image restoration techniques aim to eliminate or minimize the impact of the PSF on the degraded image. Early research on motion deblurring mostly focused on effective algorithms to invert the degradation process, i.e., to deconvolve the blurred image. These works can be roughly divided into two categories: blind and non-blind deblurring. Non-blind deconvolution methods conduct image restoration with a predetermined PSF, i.e., the PSF is assumed to be known; examples include the non-iterative Wiener filter [20], the iterative Lucy-Richardson algorithm [21], Bayesian deconvolution [22], and their corresponding improved variants. These well-known algorithms are widely adopted in deblurring, since most of them can restore the blurred image. The core of the PSF lies in two motion parameters: the angle and the length of motion. In real-world circumstances, because the motion of the camera and the object is unknown, the values of the blur angle and length are generally unavailable.
Blind image deconvolution is an improved solution for motion blur image restoration: it first estimates the two motion parameters from the blurred image and then restores the original appearance using the detected PSF. This kind of method is clearly harder than its non-blind counterpart and far more significant, as it must estimate the PSF, so many researchers have focused on it [4,[23][24][25][26][27][28]. To obtain an efficient iterative process, Figueiredo et al. proposed a method based on the fast Fourier transform (FFT) and the discrete wavelet transform (DWT) to conduct image restoration [24]. Aizenberg et al. proposed a multi-layer neural network based on multi-valued neurons (MLMVN) to identify both the type and the parameters of the PSF [4]. Besides, Hong et al. proposed an adaptive PSF estimation method based on anisotropic regularization to improve the precision of the blur kernels; their method combines the estimated blur kernel with a maximum likelihood (ML) estimation deblurring step [26]. Most of these algorithms have been shown to estimate very complex blur kernels accurately and yield impressive results. However, in practical applications these methods involve complicated computational frameworks and are time-consuming, since numerous equations must be solved during the calculation.
Beyond that, some researchers have proposed some other PSF calculation algorithms that focus on estimating the blur angle and length for blur image restoration. Oliveira and Figueiredo et al. proposed a spectrum-based method to estimate the parameters of two types of blurs (linear-uniform and out-of-focus motions) for blind image restoration [29]. In their method they introduce two modifications to the Radon transform [30] to estimate the blur parameters, and the effectiveness of their proposed method is verified on the real natural blurred images.
Sun and Cho et al. introduced a patch-based strategy for blur kernel estimation; their method estimates a "trusted" subset of x by imposing a patch prior specifically tailored towards modeling the appearance of image edge and corner primitives [31]. Deshpande and Patnaik proposed a modified cepstrum-domain approach combined with a bit-plane slicing method to estimate uniform motion blur parameters [32]. Cho et al. incorporated the Radon transform within a maximum a posteriori (MAP) estimation framework to jointly estimate the blur kernel and deblur the image; this algorithm performs well on a broad variety of scenes [25]. Aiming to improve robustness on noisy images, Moghaddam and Jamzad combined the Radon transform and bi-spectrum modeling to quantify the motion blur parameters [33]. However, the restoration accuracy of this method is weak, especially for the estimation of large blur lengths. Wang et al. introduced an improved PSF parameter estimation algorithm that combines a bilateral-piecewise estimation strategy with a sub-pixel-level image generated by bilinear interpolation under different noise conditions [12]. However, this algorithm struggles to find effective solutions under non-linear and non-uniform motion blur. The Hough-transform-based method estimates the PSF parameters from the log spectrum of the blurred images [34]. This method has an obvious shortcoming, namely the choice of threshold values during image binarization; an error in angle estimation then results in an erroneous length estimate [12]. Most of these methods still cannot produce precise results, especially when the spectrum image contains interference stripes (see the colored dashed lines in Fig 1) or for large blur lengths. On the whole, the existing algorithms still cannot achieve a satisfactory balance between precision, robustness and time efficiency.
To address these limitations in single blind image restoration, we propose a novel Radon-transform-based blur angle estimation scheme inspired by the dark and bright stripes in the frequency domain (see Fig 2). Building on this blur angle estimation method, we also propose an accurate auto-correlation-matrix-based approach to detect the blur length. Compared with the traditional stripe-based estimation method, the underlying techniques used in this paper are different and more advanced.
• We firstly design and apply a set of preprocessing steps to filter the original spectrum image, including the adaptive median filter [35], binarization and Sobel edge detection.
• Secondly, unlike previous Radon-transform-based methods [25,36,37], we not only detect the blur angle but also refine it with our proposed tri-Radon transform method.
• Thirdly, in the second and third Radon transforms, we construct a Difference value vs Angle curve to estimate the angle.
• Fourthly, during blur angle estimation, our own minimum-based filtering algorithm is employed to decrease the influence of interference stripes. In addition, the adaptive median filter, binarization and Sobel edge detection used in spectrum image preprocessing also help to weaken the effect of interference stripes.
• Fifthly, we combine the first-order differential of the image with the auto-correlation matrix to estimate the blur length accurately and efficiently.
• In addition, our proposed method can deal with images of arbitrary row and column sizes, whereas the method proposed in [12] can only handle square images.

General motion blur model
The degradation process of a blurred image can be approximately simulated as a linear degradation system, and the blurred image caused by relative motion between camera and scene can be described by a two-dimensional, linear space-invariant convolution model:

g(x, y) = f(x, y) * h(x, y) + n(x, y)    (1)

where g(x, y) and f(x, y) represent the blurred and original images respectively, h(x, y) is the PSF (blur kernel), and n(x, y) refers to additive noise. The notation "*" represents the convolution operator. From Eq (1), we can see that the key to deblurring is determining the PSF h(x, y). Assuming that the scene objects move uniformly relative to the camera, the gray value of any point in the blurred image is related to the gray values of its corresponding adjacent points in the original image, and the PSF for motion blurring can be expressed as [12]:

h(x, y) = 1/l,  if √(x² + y²) ≤ l/2 and y = x tan θ;  0, otherwise    (2)

where l is the blur length and θ represents the blur angle.
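As an illustration, the degradation model of Eq (1) and the linear-motion PSF of Eq (2) can be sketched in Python; the rasterization of the motion segment below is our own simplified construction, not the paper's implementation:

```python
import numpy as np
from scipy.signal import convolve2d

def motion_psf(length, angle_deg, size=None):
    """Build a normalized linear-motion PSF (cf. Eq 2): a 1-pixel-wide
    segment of roughly the given length and angle, entries summing to 1."""
    size = size or int(np.ceil(length)) + 2
    psf = np.zeros((size, size))
    c = (size - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    # rasterize the motion segment through the kernel centre
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * size):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))  # image rows grow downward
        psf[y, x] = 1.0
    return psf / psf.sum()

def blur(image, psf, noise_sigma=0.0):
    """Degradation model g = f * h + n (cf. Eq 1)."""
    g = convolve2d(image, psf, mode='same', boundary='symm')
    if noise_sigma > 0:
        g = g + np.random.normal(0.0, noise_sigma, g.shape)
    return g
```

A constant image convolved with the normalized kernel stays constant, which is a quick sanity check of the normalization.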

Blur angle estimation
As mentioned previously, the blur angle can be obtained by measuring the direction of the approximately linear dark stripes in the frequency spectrum. In this section, we adopt a modified Radon-transform-based method to detect the blur angle. The Radon transform of an image can be expressed as:

R_θ(x₀) = ∬ f(x, y) δ(x cos θ + y sin θ − x₀) dx dy

where R_θ(x₀) is the value of the Radon transform projection, θ is the projection angle, and (x, y) and (x₀, y₀) refer to the coordinates of the original image and of the Radon-transformed (rotated) coordinate system respectively. The Radon transform is regularly used to detect lines in an image. The spectrum of a motion blurred image contains several dark stripes tilted at a certain angle, and these dark stripes are parallel and symmetrical. Normally, the blur angle can be obtained by detecting the peak of the Radon transform of these dark stripes in the spectrum image. Most Radon-transform-based methods do not process the spectrum image before applying the Radon transform to detect the blur angle. In practical applications, however, the light and dark stripes in the spectrum image are often ambiguous, especially for noisy blurred images; this leads to large errors in the blur angle estimate, which in turn affects the detection of the blur length and the deblurring of the degraded image in the later phases of the algorithm. To overcome this problem, our blur angle estimation is based on diversified processing of the spectrum image.
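As a concrete illustration, a single Radon projection can be approximated by rotating the image and integrating along one axis. This is a simplification of the full transform (using `scipy.ndimage.rotate`, our own choice of tool), but it is sufficient for detecting the orientation of straight stripes:

```python
import numpy as np
from scipy.ndimage import rotate

def radon_projection(img, theta_deg):
    """Approximate R_theta(x0): rotate the image by theta and sum along
    one axis, giving one projection value per offset x0."""
    r = rotate(img, theta_deg, reshape=False, order=1)
    return r.sum(axis=0)
```

For an image containing a single vertical line, the projection at 0° shows a sharp peak, while the projection at 90° is nearly flat.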

The spectrum processing
The outline of our spectrum image processing is presented in Fig 2. Here, we take cameraman.tif with a blur angle of 45° as a sample. As shown in Fig 2(b), before our method starts, the spectrum image is obtained by the Fourier transform of the blurred image.
Due to the influence of noise, the camera and other image acquisition equipment, there are many interference stripes in the spectrum image, such as the dashed lines marked with red, sky-blue, yellow, dark-blue and green colors in Fig 1(a) and 1(b). To eliminate the disturbance of interference stripes, we adopt an adaptive median filter, binarization and a minimum-based filtering algorithm. The adaptive median filter and binarization are carried out on the spectrum image in this section, and the details of the minimum-based filtering algorithm are presented in Section 3.2. We first apply the adaptive median filter to the spectrum image to eliminate the interference from noise (see Fig 2(c)). Based on our experimental experience, the appropriate window size for the adaptive median filter is 3. After that, in the step of Fig 2(d), we binarize the filtered spectrum image with a threshold τ_b.
Once the binarized result of the filtered spectrum image is obtained, we decide whether to conduct edge detection on the spectrum image according to a threshold τ_e, adopting the Sobel operator for edge detection (see Fig 2(e)). Based on our experimental experience, suitable values for the thresholds τ_b and τ_e are 0.35 and 0.47 respectively.
The Radon transform is then applied to the binarized spectrum image to detect the blur angle.
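A minimal sketch of this preprocessing chain might look as follows. Note two assumptions of ours: a plain 3×3 median filter stands in for the adaptive median filter of [35], and the rule used to decide whether edge detection is applied (comparing the binary density against τ_e) is our own reading, since the paper does not spell it out:

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def preprocess_spectrum(blurred, tau_b=0.35, tau_e=0.47):
    """Spectrum preprocessing sketch (cf. Fig 2): log-magnitude spectrum,
    median filtering, binarization with tau_b, and optional Sobel edges.
    tau_b and tau_e follow the values quoted in the text."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(blurred)))
    spec = np.log1p(spec)
    spec = (spec - spec.min()) / (spec.max() - spec.min())  # scale to [0, 1]
    spec = median_filter(spec, size=3)       # stand-in for adaptive median
    binary = (spec > tau_b).astype(float)
    if binary.mean() > tau_e:                # assumed edge-detection criterion
        gx, gy = sobel(binary, axis=1), sobel(binary, axis=0)
        binary = (np.hypot(gx, gy) > 0).astype(float)
    return binary
```

The output is a binary (0/1) image of the same size, ready for the Radon transform stage.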

Tri-Radon transforms
Once the binary spectrum image is obtained, the next step is to estimate the blur angle by the Radon transform. In the whole algorithm, we apply the Radon transform projection three times to detect the blur angle, which we name the "Tri-Radon" transforms.
In the first Radon transform, we roughly detect the blur angle from the two-dimensional color map of the Radon transform projection (see Fig 3(a)). In this color map, the horizontal coordinate θ (from 0° to 180°) represents the angle of the Radon transform projection; the vertical coordinate x₀ (from −250 to 250) refers to the distance from a straight line to the origin in the binarized spectrum, where this line is perpendicular to a straight line at the given angle (the horizontal coordinate's value); and the color bar on the right of the map encodes the value of the Radon transform projection R_θ(x₀). For better understanding, take the coordinate value (45°, −27, 238.7) in the color map as an example: 45° is the horizontal coordinate θ, −27 is the vertical coordinate x₀, and 238.7 is the projection value R_θ(x₀). This means that there is a straight line A passing through the origin with a slope of 45°, and a straight line B perpendicular to A at a distance of 27 from the origin in the binarized spectrum, for which the Radon transform projection value is 238.7. With this interpretation of the color map, we can observe many local maxima and minima of the Radon transform projection along straight lines with a slope of 45° in Fig 3(a), especially in the neighborhood of the origin (vertical coordinate 0) of the spectrum image. From this observation, the blur angle θ_psf can be judged to be about θ₁ = 45°. After this rough estimation of the blur angle from the color map of the Radon transform projection, the next step is to refine the blur angle detection through the second Radon transform.
As shown in Fig 3(b), in the second Radon transform projection of the binary spectrum image, we set the projection angles from 0° to 180° with a step width of 1°. After obtaining the Radon transform projection for each angle, we first compute the sums of the local maxima and of the local minima respectively, and then take the difference between these two sums:

D_θ = Σ_m Max_θm − Σ_n Min_θn

where Max_θm and Min_θn denote the m-th local maximum and the n-th local minimum of the Radon transform projection of the spectrum image at angle θ, and D_θ is the difference between the sum of local maxima and the sum of local minima. From the data (θ, D_θ), the Difference value vs Angle curve can be obtained (see Fig 3(b) and 3(d)). Through this curve, we obtain a rough estimate of the blur angle θ₂ by searching for the angle corresponding to the local maximum of the projection value:

θ₂ = arg max_θ D_θ

From our experimental experience, we find that in all the Difference value vs Angle curves, two additional angles (2°, 92°) are detected besides the true blur angle (46°), since the binarized spectrum is affected by the two tiny stripes in the directions of 2° (red arrow) and 92° (green arrow) in Fig 3(c). Across all experiments' Difference value vs Angle curves, we observe that there is a local minimum of D_θ in the angular neighborhood of each false maximum:

∃ θᵢ ∈ [θ₂ − 5°, θ₂ + 5°], ∀ |θⱼ − θᵢ| ≤ 5°: D_θᵢ < D_θⱼ  ⟹  θ_psf ≠ θ₂

So, to eliminate the interference from the noisy stripes, we adopt our proposed minimum-based filtering algorithm as follows:
1. First, we search for the three or more largest maxima of D_θ in the Difference value vs Angle curve over angles from 0° to 180°.
2. Then we judge whether there is a local minimum of D_θ in the angular neighborhood of each maximum's corresponding angle, and delete the candidates for which such a minimum exists.
3. The remaining maximum's corresponding angle is identified as the blur angle θ₂ of the second estimation.
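The Difference value vs Angle curve and the minimum-based filtering step can be sketched as follows. This is a simplified stand-in (the projection is computed by image rotation, and the candidate-selection details are our own reading of the three steps above):

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import argrelextrema

def difference_curve(binary_spec, thetas):
    """For each projection angle, D_theta = sum of local maxima minus
    sum of local minima of the Radon projection."""
    D = []
    for t in thetas:
        proj = rotate(binary_spec, t, reshape=False, order=1).sum(axis=0)
        mx = proj[argrelextrema(proj, np.greater)]
        mn = proj[argrelextrema(proj, np.less)]
        D.append(mx.sum() - mn.sum())
    return np.asarray(D)

def pick_angle(thetas, D, n_candidates=4, nbhd=5):
    """Minimum-based filtering: among the largest maxima of D, discard
    candidates with a local minimum of D within +/- nbhd degrees, and
    return the surviving angle."""
    order = np.argsort(D)[::-1][:n_candidates]
    minima = set(argrelextrema(D, np.less)[0])
    for i in order:
        window = range(max(0, i - nbhd), min(len(D), i + nbhd + 1))
        if not any(j in minima and j != i for j in window):
            return thetas[i]
    return thetas[order[0]]  # fall back to the global maximum
```

For a spectrum dominated by parallel vertical stripes, D at 0° clearly exceeds D at oblique angles, which is the behavior the estimation relies on.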
Based on the second rough blur angle estimate, we conduct the third Radon transform projections of the binary spectrum image. This time the projection angles are set from θ₂ − 3° to θ₂ + 3° with a step width of 0.5° (see Fig 3(c)). We then repeat the steps of the second blur angle estimation. Through the Difference value vs Angle curve of the third Radon transform projections, we obtain the precise blur angle estimate θ₃ and take it as the ultimate blur angle: θ_psf = θ₃. All the above blur angle estimates assume square images. If the image's row and column sizes differ, the blur angle can be obtained as follows:

θ = arctan((N / M) tan Θ)

where Θ is the angle estimated from the rectangular image, θ is the blur angle, and M and N refer to the sizes of the image's rows and columns respectively.

Blur length estimation
The precondition of our blur length estimation is to first measure the blur angle and then rotate the blurred image clockwise by the detected blur angle (see Fig 4(a)), so that the motion becomes horizontal and we obtain the rotated blurred image Z(x, y) in Fig 4(b). First, the first-order differential of the image Z(x, y) in the horizontal direction is given by:

Z′(x, y) = Z(x, y + 1) − Z(x, y)

where Z′(x, y) is the first-order differential of Z(x, y). Then we calculate the auto-correlation matrix of Z′(x, y) in the horizontal direction as follows:

S(x, y) = Σ_k Z′(x, k) Z′(x, k + y),  x = 1, 2, …, M

where M is the row size of Z′(x, y), which equals the row size of the image Z(x, y), and S(x, y) refers to the auto-correlation matrix of Z′(x, y) in the horizontal direction. From the auto-correlation matrix S(x, y), we compute the sum of each column of S(x, y):

S′(y) = Σ_{x=1}^{M} S(x, y)

where the size of S′(y) is 1 × N. The purpose of computing S′(y) is to reduce the influence of noise and improve the accuracy of the blur length estimation. The blur length can then be estimated from the S′(y) vs y curve in Fig 4(c) and 4(d), where the horizontal coordinate is y and the vertical coordinate is the corresponding S′(y). We search for a pair of conjugated-correlated troughs (tagged with green arrows) on the right and left sides of the central crest (marked with a red arrow) in the S′(y) vs y curve; half the distance D between these two conjugated-correlated troughs is the estimated blur length, l = D/2. Taking the experimental result in Fig 4(d) as a sample, the horizontal values of the two conjugated-correlated troughs are 339 and 369, so the distance D between them is 30 and the estimated blur length is (1/2) × 30 = 15. In summary, our blur length estimation proceeds as follows:
1. We rotate the blurred image g(x, y) clockwise to the horizontal motion direction by the detected blur angle, obtaining the rotated blurred image Z(x, y).
2. Calculating the first-order differential Z′(x, y) of Z(x, y) in the horizontal direction.
3. Calculating the auto-correlation matrix S(x, y) of Z′(x, y).

4. Computing the sums of each column of S(x, y) to obtain S′(y).
5. Estimating the blur length by searching for a pair of conjugated-correlated troughs in the S′(y) vs y curve.
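The five steps above can be sketched in Python. Two details are simplified by us: the trough search takes the deepest minimum on each side of the central crest (one reasonable reading of the "conjugated-correlated troughs" rule), and the rotation sign may need flipping depending on the clockwise convention used:

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_blur_length(blurred, angle_deg):
    """Blur-length sketch (steps 1-5): rotate to a horizontal motion
    direction, differentiate horizontally, autocorrelate each row,
    sum over rows, then halve the distance between the troughs
    flanking the central peak of S'(y)."""
    Z = rotate(blurred, -angle_deg, reshape=False, order=1)  # step 1
    Zd = np.diff(Z, axis=1)                                  # step 2: Z'(x, y)
    N = Zd.shape[1]
    # steps 3-4: per-row autocorrelation, summed over rows -> S'(y)
    S = np.zeros(2 * N - 1)
    for row in Zd:
        S += np.correlate(row, row, mode='full')
    centre = N - 1
    # step 5: deepest trough on each side of the central crest
    left = int(np.argmin(S[:centre]))
    right = centre + 1 + int(np.argmin(S[centre + 1:]))
    return (right - left) / 2.0
```

For a horizontally box-blurred noise image, the derivative's autocorrelation has its negative troughs at lags equal to plus/minus the blur length, so half the trough-to-trough distance recovers the length.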
With the blur parameter estimation algorithms described in Sections 3 and 4, we can construct the PSF by Eq (2). Then we can restore the blurred image with a non-blind filtering method.
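As one possible non-blind restoration step, a Tikhonov-regularized inverse filter (a common form of "regularized filter") can be implemented in the frequency domain. The regularization weight `lam` below is an assumed value of ours, not taken from the paper:

```python
import numpy as np

def deblur_regularized(g, psf, lam=1e-2):
    """Non-blind restoration sketch: a regularized inverse filter,
    F = conj(H) G / (|H|^2 + lam), applied in the frequency domain.
    lam trades ringing suppression against sharpness."""
    H = np.fft.fft2(psf, s=g.shape)          # zero-padded PSF spectrum
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(F))
```

With an accurate PSF and little noise, a small `lam` recovers the original image closely; larger `lam` is needed as noise grows.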

Experiments
In this section we conduct a series of comparisons between our approach and state-of-the-art methods (Moghaddam [33], the traditional Radon-transform-based method [25], a deep learning approach (RBFNN) [5] and Wang [12]). A total of 18625 images, including the VOC2012 dataset [38] and real-life images, are used to evaluate our approach in Sections 5.1, 5.2 and 5.3.

Performance of the parameters estimation approaches
In this subsection, we evaluate our proposed parameter estimation approaches on degraded images with various blur angles and lengths; the blurred images are generated by varying the blur angle from 0° to 180° and the blur length from 0 to 100 pixels.
First of all, the absolute errors of the two blur parameter estimates for the five methods are shown in Table 1. From this table, we can make several observations: 1. Across all experiments, every method achieves almost no error in the minimum absolute errors of the blur angle θ and blur length L.

2. In all cases, Wang's approach and our proposed method are clearly superior to the first three methods. Moghaddam's approach performs worst in the maximum absolute errors of θ and L among the five methods, but it produces mean absolute errors of θ comparable to the traditional Radon-based and Dash's methods.
3. In sharp contrast, our method achieves the best estimation accuracy for both θ and L, with the smallest mean absolute errors of 0.11° and 0.14 pixels. Moreover, the maximum absolute errors of these two parameters for our approach are below 0.22° and 0.28 pixels respectively, significantly smaller than those generated by the other four approaches.
Besides, by varying L from 0 to 100 pixels in steps of 5 pixels at blur angles of 0°, 30°, 45°, 60° and 90° respectively, we observe the absolute error with respect to the blur length L in Fig 5. Clearly, the absolute error curve fluctuates around a low value when L is small, and the error then grows slowly once L exceeds 30 pixels. This can be attributed to the interference stripes generated in the spectrum image when the blur length is large (see Figs 1 and 3(c)). Under these circumstances, the estimation error of the distance between the pair of conjugated-correlated troughs in the S′(y) vs y curve may increase; even so, the overall performance remains very good. Similar to the experiment for the blur length L, we observe the absolute error in terms of the blur angle. Moghaddam's [33], the traditional Radon-based [25], Dash's [5] and Wang's [12] methods are adopted in the blur angle estimation experiments to evaluate and validate the superiority of our proposed method. Meanwhile, Gaussian noise with standard deviation σ = 0.0001 is added to the degraded images to demonstrate the good performance of our method. The experimental results are presented in Fig 6, from which we can see that our approach achieves the best accuracy among the five methods. We also conduct Friedman and Nemenyi tests on the blur angle's absolute errors to compare our method with these algorithms; the critical difference diagrams generated by the Friedman and Nemenyi tests are presented in Fig 7. From Fig 7, we can observe that our proposed method obtains the best result for blur angle accuracy. On the one hand, the adaptive median filter, binarization and Sobel edge detection weaken the effect of the interference stripes in Figs 1 and 3(c); on the other hand, our own minimum-based filtering algorithm further decreases their influence.
Both measures contribute to a smaller absolute error compared with the other four methods.
All these observations confirm the high estimation accuracy of our proposed methods for the blur parameters.

Performance of deblurring on noise-free images
The previous subsection showed that our blur parameter estimation approach obtains good results under different circumstances. In this subsection, based on the PSF constructed from the θ and L estimated in Section 5.1, we use a regularized filter to deblur the degraded images, so as to demonstrate intuitively that our method yields better deblurring performance than state-of-the-art approaches. For this purpose, we carry out experiments on the VOC2012 dataset [38] and real-life blurred images, and compare our method against Moghaddam's [33], the traditional Radon-based [25], Dash's [5] and Wang's [12] methods.
The degraded images and the restoration results produced by the five approaches on the VOC2012 dataset are illustrated in Fig 8. From these figures we can observe that our method obtains the most satisfactory results and demonstrates good robustness under different circumstances. In all the experiments, Moghaddam's, the traditional Radon-based and Dash's methods fail to completely remove the blurring effects from the deblurred images, while Wang's approach obtains acceptable results only in the first three rows of images. Moreover, we adopt the peak signal-to-noise ratio (PSNR) and the running time as evaluation criteria to quantitatively assess all methods' performance. PSNR is defined as:

MSE = (1 / (M N)) Σ_x Σ_y (f(x, y) − f̂(x, y))²

PSNR = 10 log₁₀(255² / MSE)

where MSE is the mean square error, f(x, y) and f̂(x, y) are the intensities of pixel (x, y) in the image before blurring and after motion deblurring respectively, and M and N represent the size of the image. Besides, in order to make the experimental comparison more accurate, we also use the structural similarity index (SSIM) [39] to evaluate the deblurring quality of each method:

SSIM = ((2 μ_f μ_f̂ + C₁)(2 σ_{f f̂} + C₂)) / ((μ_f² + μ_f̂² + C₁)(σ_f² + σ_f̂² + C₂))

where μ_f, μ_f̂, σ_f, σ_f̂ and σ_{f f̂} represent the local means, standard deviations and cross-covariance of images f and f̂, and C₁ and C₂ are small constants that stabilize the division.
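The PSNR definition above translates directly to code (the peak value of 255 assumes 8-bit images; for SSIM, a library implementation such as the one in scikit-image is typically used rather than a hand-rolled version):

```python
import numpy as np

def psnr(f, f_hat, peak=255.0):
    """PSNR from the MSE definition: 10 log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(f, float) - np.asarray(f_hat, float)) ** 2)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, two 8-bit images differing by exactly 1 gray level everywhere give MSE = 1 and PSNR = 10 log₁₀(255²) ≈ 48.13 dB.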

Performance of deblurring in noisy circumstances
Apart from the parameter estimation tests and the experiments on noise-free images, we also conduct a set of experiments on noisy images and measure the PSNR and SSIM of the deblurred pictures to validate the robustness of our method. As in Section 5.2, the images of the VOC2012 dataset are blurred with different PSFs, and additive Gaussian noise with standard deviation σ = 0.0001 is added to produce noisy blurred images (see Fig 12(a)). For a fair comparison, blind deconvolution is used to deblur the degraded images with the PSFs detected by the different methods. The visual results for the five methods are shown in Fig 12, and the PSNR and SSIM of the deblurred images are reported in Table 3 and Fig 11. From Fig 12, it can be observed that the images deblurred with Wang's method and ours exhibit better results than the other three methods. Furthermore, from the PSNR and SSIM in Table 3 and Fig 11, it can be seen that our method yields the largest PSNR and SSIM on the VOC2012 database, ranking first on 96.76% of the VOC2012 dataset. These results demonstrate that the robustness of our method in noisy circumstances is superior to the other four methods. Throughout the whole set of experiments, our method achieves the best blur parameter estimation accuracy for both blur angle and blur length, with the smallest mean absolute errors of 0.11° and 0.14 pixels.

High quality of restoration.
In deblurring noise-free images, we achieve the best results on 95.91% of the VOC2012 database in computational efficiency and deblurring quality, whereas on noisy images we achieve the best results on 96.76% of the VOC2012 database.
All in all, the proposed method can estimate the blur parameters accurately and effectively eliminate the influence of linear motion throughout the experiments.

Conclusions
In this research, we proposed an improved point spread function estimation approach for blind motion deblurring from the spectrum of a single image, which is altered by motion blur, noise and interference stripes. To solve these problems in single blind image restoration, we proposed a novel tri-Radon-transform blur angle estimation scheme inspired by the dark and bright stripes in the frequency domain (spectrum). Based on this blur angle estimation approach, the first-order differential of the image and the auto-correlation matrix were combined to detect the blur length accurately. We performed a set of experiments on the VOC2012 dataset and naturally motion-blurred images to compare our proposed method against several state-of-the-art methods through qualitative and quantitative assessments. The results show that our method performs best across different blur parameters and obtains satisfying deblurred images, producing impressive accuracy for both the blur angle and length, with errors of only 0.11° and 0.14 pixels respectively over the whole set of experiments.
As there is almost never perfect linear motion between the camera and the scene in practical applications, it is worth exploring an effective solution for the nonlinear and nonuniform motion blur problem. In future research we plan to improve our method to work on nonlinear and nonuniform motion blurred images and to extend it to other datasets.