Restoration of Motion-Blurred Image Based on Border Deformation Detection: A Traffic Sign Restoration Model

Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, degradation of the traffic sign images captured by computer vision is unavoidable while the vehicle is moving. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and the width of the border is then measured in all directions. From the measured widths and the corresponding directions, both the motion direction and scale of the blur can be determined and used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to traditional restoration approaches based on the blind deconvolution method and the Lucy-Richardson method, our method can effectively restore motion-blurred images and improve the correct recognition rate. Our experiments show that the proposed method restores traffic sign information accurately and efficiently.


Introduction
With the development of urbanization and the popularization of the automobile, problems associated with road traffic congestion, frequent traffic accidents, and the low efficiency of road transport have become increasingly serious [1]. In order to alleviate these problems, Driver Assistance Systems (DAS) were designed to help or even substitute human drivers to enhance the safety of driving [2,3]. Such a system films the road scene using a camera mounted inside the vehicle, processes this information in real time using a dedicated circuit system, and then provides information, such as warnings and tips, to the driver. This can greatly reduce driving risks and enhance road safety. Many methods have been proposed to detect and recognize traffic signs in the presence of occlusions and other forms of image variance and noise [2]. Support vector machines (SVMs) were utilized to detect and recognize traffic signs by Bascon et al. [18]. In addition, Khan et al. [19] proposed a method based on image segmentation and joint transform correlation, which also integrated shape analysis. Barnes et al. [20] also presented the radial symmetry detector to detect speed signs in real time.
All of the existing algorithms used to restore motion-blurred images operate in the frequency domain. Moreover, traffic sign recognition algorithms tend to focus on traffic sign detection and recognition; they do little to deal with traffic signs in blurred images. In order to solve this problem, a new algorithm based on traffic sign border extraction is proposed to restore motion-blurred images in the spatial domain. The border of the traffic sign is extracted using the image's color information, and the width of the border is then measured in all directions. From the measured widths and the corresponding directions, the motion direction and scale of the blur can be determined and then used to restore the motion-blurred image. This method has a lower computational cost and better performance, and the restored image ensures accurate and reliable detection of the traffic sign.
The remainder of this paper is organized as follows: Section II presents the generation of the motion-blurred image and the restoration principle of the motion-blurred image. In Section III, the parameters extraction model, which is based on border deformation detection, is given. Border parameter extraction algorithms are discussed in detail in Section IV. Experimental results are presented in Section V. Finally, a conclusion is presented in Section VI.

Methods
All of the experimental images in the paper are from the German Traffic Sign Recognition Benchmark (GTSRB) [21]. Physical traffic sign instances are unique within the dataset. There are more than 40 classes and more than 50000 images in total. In addition, the GTSRB dataset is free to use.

Restoration principle of motion-blurred images
A motion-blurred image is generated by the relative motion between the target and the camera during the image's exposure time. The study of motion blur produced by uniform motion is of general significance, because variable-speed and linear motion blur can be approximately treated as uniform motion at the moment of shooting. Following motion-blur degradation and additive noise superposition, the output is a blurred image [22]. This degradation process is shown in Fig 1. In this model, the output is calculated by means of the following formula [23]

g(x,y) = f(x,y) * h(x,y) + n(x,y)    (1)

where g(x,y) is the blurred image, f(x,y) is the undegraded image, n(x,y) is system noise, h(x,y) is the point spread function (PSF), and * denotes convolution in the spatial domain. Since convolution in the spatial domain is equal to multiplication in the frequency domain, the frequency-domain representation is G(u,v) = F(u,v)H(u,v) + N(u,v). Motion-blur restoration involves reversing the image degradation process and adopting the inverse process to obtain clear images. Motion blur is one case featured in the model of Lin et al. [22]. The model assumes that the target or camera moves at a certain speed and direction, covering a distance s during the exposure time T. Disregarding the effect of noise, this can be expressed by the formula

g(x,y) = (1/T) ∫₀ᵀ f(x − x(t), y − y(t)) dt

where x(t) and y(t) are the time-varying components of motion in the x-direction and y-direction.
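As a minimal illustration of the degradation model g = f * h + n of Eq (1), the sketch below blurs a toy image with a horizontal motion PSF via the FFT. The circular boundary handling and the helper names (`motion_psf`, `degrade`) are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def motion_psf(size, n):
    """Horizontal motion PSF h(x,y): a 1-D box of n taps, each 1/n."""
    h = np.zeros((size, size))
    h[0, :n] = 1.0 / n
    return h

def degrade(f, h, noise_sigma=0.0, seed=0):
    """g = f * h + n, with the convolution computed via the FFT
    (spatial convolution equals frequency-domain multiplication)."""
    G = np.fft.fft2(f) * np.fft.fft2(h)
    g = np.real(np.fft.ifft2(G))
    if noise_sigma > 0:
        g += np.random.default_rng(seed).normal(0, noise_sigma, f.shape)
    return g

# a single bright pixel blurred over n = 4 pixels spreads into
# four pixels of value 0.25 along the motion direction
f = np.zeros((8, 8))
f[4, 2] = 1.0
g = degrade(f, motion_psf(8, 4))
```

Because the PSF sums to 1, the blur redistributes intensity without changing the image's total energy.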
The Fourier transform of g(x,y), ignoring noise, is

G(u,v) = F(u,v)H(u,v)

The spectrogram of the blurred image is the modulus square of this transform, |G(u,v)|² = |F(u,v)|²|H(u,v)|², in which any phase shift of the undegraded image is absorbed, since a unit-modulus phase factor multiplied by its complex conjugate equals unity. Defining H(u,v) as the transfer function of uniform linear motion, the inverse Fourier transform of H is h(x,y) = 1/(vT) = 1/s inside the motion path. This shows that the unknown variable is s, which includes the direction and scale. So, theoretically, if these two parameters are known, it is possible to obtain the spectrogram and the restored image from the blurred image. Therefore, a new method based on the border in the spatial domain is proposed to extract the parameters quickly and efficiently.
From the perspective of image restoration, the minimum mean square error filtering (Wiener filtering) method is adopted, which limits the effect of white noise. The method is given by the expression

F̂(u,v) = [1/H(u,v)] · [|H(u,v)|² / (|H(u,v)|² + K)] · G(u,v)

where K is a modifying factor representing the power ratio of the noise to the signal. Because this value cannot be accurately obtained, a fixed value is used in practice. The inverse Fourier transform of F̂(u,v) is the restored image.
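A compact sketch of the Wiener filtering step under the stated assumptions (a noise-free test image and a fixed K); the helper name `wiener_restore` is ours, not from the paper.

```python
import numpy as np

def wiener_restore(g, h, K=0.01):
    """Wiener filtering: F_hat = conj(H) / (|H|^2 + K) * G, which equals
    (1/H) * |H|^2 / (|H|^2 + K) * G as in the text; K is a fixed
    noise-to-signal power ratio."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# demo: blur a bar with a 4-pixel horizontal motion PSF, then restore it
f = np.zeros((16, 16))
f[8, 4:8] = 1.0
h = np.zeros((16, 16))
h[0, :4] = 0.25                     # PSF value 1/s inside the motion path
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
f_hat = wiener_restore(g, h, K=1e-4)
```

With no noise and a small K, the filter behaves almost like the inverse filter, while the K term keeps frequencies where H is near zero from blowing up.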

Parameter extraction based on border in the spatial domain
Through the above description of the restoration algorithms for motion-blurred images, it is obvious that extracting the movement direction and scale is a key step in the process, and that determining how to detect these two parameters quickly and accurately is the key problem. Some algorithms already exist that can extract these parameters, and most of them operate in the frequency domain by measuring the zero pattern of the blurred image. However, these methods lack robustness and are easily affected by noise. Therefore, it is necessary to find a new method to calculate the two parameters. When comparing the blurred image with the sharp image, we can conclude that the border becomes wider on the front and back sides along the direction of motion, and that the blurred image has lower color saturation there. On the right and left sides, the boundary change is small compared with the sharp image.
In order to clearly explain the changing laws of the border, the blurring process was simulated. Fig 3(a) shows the border of the traffic sign. We then produced motion blur in the directions of 0° and 30°, extended over 15 pixels, and obtained the border's motion-blurred images [see Fig 3(b) and 3(c)]. It can be clearly observed that motion changes the border's regularity. Fig 3(a) is the original border before motion blurring. In Fig 3(b), the motion direction is 0°: the border around the 0° direction apparently became blurred and its boundary wider, and its color saturation dropped much more markedly than in the border around the 90° direction. Fig 3(c) presents similar features, though the direction is 30°. To analyze this quantitatively, each line of the image is treated as a sequence f(x,y), and each such sequence is considered a one-dimensional sequence f(x). Assume that the value of the border pixels is 1 and that of all other pixels is 0, and let a and b be the boundaries of the traffic sign border.
Suppose the sequence moves n pixels to the right, with n < b − a. Through the inverse Fourier transform, it can be known that h(x) = 1/n (0 ≤ x ≤ n−1). Ignoring the effect of noise, the sequence after motion can be expressed as

g(x) = f(x) * h(x) = (1/n) Σ_{k=0}^{n−1} f(x − k)    (4)

By simplifying Eq (4), we get the piecewise form

g(x) = (x − a)/n for a ≤ x ≤ a+n; g(x) = 1 for a+n ≤ x ≤ b; g(x) = (b + n − x)/n for b < x < b+n; g(x) = 0 otherwise    (5)

The corresponding illustration of the solution of Eq (5) is shown in Fig 4. From Eq (5) and Fig 4, we can see that when a ≤ x ≤ a+n, g(x) grows from 0 to 1; when a+n ≤ x ≤ b, the value of g(x) is 1; and when b < x < b+n, g(x) decreases from 1 to 0. In the image, the saturation of the pixels in the middle of the border is the highest, and it decreases gradually from the middle to the edge of the border. The threshold is set at 1, and the interval a+n ≤ x ≤ b is taken as the width of the sequence after blurring. That is to say, if the width of the pixels whose saturation equals 1 in the original sequence is d, and the scale of motion is n, then the full extent of the blurred border grows to d + n, while the width measured at the saturation threshold shrinks to d − n.
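The 1-D analysis above can be checked numerically. The sketch below blurs a box sequence of width d = 10 by n = 4 and measures both the threshold-1 plateau and the full nonzero extent; the exact counts differ from d − n and d + n by one pixel due to discrete fenceposts.

```python
import numpy as np

def blur_1d(f, n):
    """Uniform 1-D motion blur: g(x) = (1/n) * sum_{k=0}^{n-1} f(x - k)."""
    g = np.zeros(len(f))
    for k in range(n):
        g += np.roll(f, k)
    return g / n

# border sequence: pixels with value 1 between a = 10 and b = 20, so d = 10
f = np.zeros(40)
f[10:20] = 1.0
g = blur_1d(f, 4)                          # blur scale n = 4

# plateau where g(x) = 1 shrinks by about n, while the full extent of
# nonzero response grows by about n (one-pixel discretization offsets)
plateau = int(np.sum(np.isclose(g, 1.0)))  # ~ d - n  (7 here)
support = int(np.sum(g > 1e-9))            # ~ d + n  (13 here)
```

Either quantity could serve as the "width" in the paper's sense; what matters is that the difference between directions recovers the blur scale n.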
When considering the entire border, the width of the border along the motion direction clearly changes, while the width of the border perpendicular to the motion direction changes little; between these two directions, the width changes gradually, as shown in Fig 5. Given that the circle is isotropic, no matter which direction the image is blurred in, the width of the border changes in the same way, which makes it possible to determine both the blur direction and scale. Specifically, after measuring the width in all directions, two maximum values (d_max) and two minimum values (d_min) can be extracted. Connecting the two pairs of points yields two perpendicular lines, and the blur direction is the direction of the minimum-value line.
We can also obtain the scale from the width of the border. Assuming that the width of the border before the motion is d, it is easy to see that the maximum value is d_max = d and the minimum value is d_min = d − n, so the scale is n = d_max − d_min. However, the result is easily affected by threshold determination, so it should be corrected. An appropriate coefficient K is introduced, giving the corrected result n = K·(d_max − d_min).
Thus far, the parameters of direction and scale have been extracted from the motion-blurred images.

Border deformation detection for image restoration
According to the border deformation characteristics of motion-blurred traffic sign images, the process of realizing the algorithm is drawn in Fig 6. The first step is to convert the image from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity); then, the traffic sign border is extracted by defining an appropriate threshold in HSI space. After setting the center of the border, we measure the width of the border in all directions, from which the two parameters of direction and scale are calculated. Finally, the motion-blurred traffic sign image is restored using these results.
2.3.1 Border extraction. There are several methods that can be used for border extraction [24,25]. However, since the border is blurred, the traditional methods cannot extract the border with the required accuracy and integrity.
The traffic sign's border is red, so it is possible to confirm whether a pixel belongs to the border by checking its color. The RGB image cannot confirm the color directly, so the method is based on the HSI model. The HSI color space is well suited to describing color in a way that is practical for human interpretation [26]. In the HSI model, variation in lighting does not greatly affect the value of hue [27], so it is easy to confirm the color of a pixel. The border of the traffic sign is red, and according to the statistical results, the H values of most border pixels fall in the ranges of 0°~36° and 324°~360°. In addition, the intensity component is not used in the calculation, which considerably reduces the computational cost.
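A sketch of the hue-threshold border extraction, assuming RGB values in [0, 1]. The 36°/324° hue bounds are the ones reported in the text; the saturation floor and the helper names are our assumptions, added to reject gray pixels.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Hue angle (degrees) of RGB pixels in [0, 1], as in the HSI model."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = np.maximum(np.maximum(r, g), b)
    c = mx - np.minimum(np.minimum(r, g), b)
    cs = np.where(c == 0, 1.0, c)      # avoid division by zero for grays
    h = np.zeros_like(mx)
    rm = (c > 0) & (mx == r)           # red channel dominant
    gm = (c > 0) & (mx == g) & ~rm     # green channel dominant
    bm = (c > 0) & (mx == b) & ~rm & ~gm
    h[rm] = (60.0 * (g - b)[rm] / cs[rm]) % 360
    h[gm] = 60.0 * (b - r)[gm] / cs[gm] + 120
    h[bm] = 60.0 * (r - g)[bm] / cs[bm] + 240
    return h

def red_border_mask(rgb, sat_floor=0.1):
    """Pixels whose hue lies in 0-36 or 324-360 degrees (the red range
    reported in the text); sat_floor is an added assumption."""
    h = rgb_to_hue(rgb)
    sat = np.max(rgb, axis=-1) - np.min(rgb, axis=-1)
    return ((h <= 36) | (h >= 324)) & (sat > sat_floor)

# four test pixels: pure red, pure green, desaturated red, neutral gray
img = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.3], [0.5, 0.5, 0.5]]])
mask = red_border_mask(img)
```

Note that intensity never enters the mask, matching the text's observation that dropping the I component reduces the computational cost.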
There may be some noise in the resulting extraction, so a filter is used to remove it. The results of this method are shown in Fig 7. To measure the width of the border, we sample it at steps of 1°. Through theoretical analysis and experimental verification, we found that the parameters are the same when the motion-blur directions are θ° and (θ+180)°, so the direction of the motion-blurred image can be normalized to the range 0° to 180°. The result of the measurement is shown in Fig 8(a). In order to eliminate noise, the result is processed with a mean filter; Fig 8(b) is the result after filtering.
Following this, we can then determine the two extreme points (the maximum point and the minimum point), and the direction of the minimum point is the motion direction. To ensure that the results are more precise, we use the direction of the maximum point to correct any errors.
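The measurement and estimation steps above can be sketched as follows, on a synthetic width profile. The cosine width model in the demo is a hypothetical stand-in for real measurements, and the window size and correction coefficient K are assumed defaults, not values from the paper.

```python
import numpy as np

def estimate_motion(widths_deg, K=1.0, win=5):
    """Estimate blur direction and scale from border widths sampled at
    1-degree steps over 0..179 (theta and theta+180 coincide).
    Direction = angle of minimum width; scale n = K * (d_max - d_min)."""
    w = np.asarray(widths_deg, dtype=float)
    # circular mean filter (widths are 180-degree periodic) to kill noise
    pad = win // 2
    wp = np.concatenate([w[-pad:], w, w[:pad]])
    ws = np.convolve(wp, np.ones(win) / win, mode="valid")
    direction = int(np.argmin(ws))      # blur direction: minimum width
    perpendicular = int(np.argmax(ws))  # cross-check: should be ~90 away
    scale = K * (ws.max() - ws.min())
    return direction, perpendicular, scale

# synthetic profile: border of width 10 blurred by n = 4 pixels at 30 degrees
theta = np.deg2rad(np.arange(180) - 30)
widths = 10 - 4 * np.abs(np.cos(theta))   # hypothetical width model
d, p, n = estimate_motion(widths)
```

The maximum-width angle serves as the error check mentioned in the text: if it is not roughly 90° from the minimum, the measurement is suspect.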

Results and Discussion
Two different kinds of experiments were designed to test the two parameters respectively. One was to change the blurred scale while the blurred direction remained invariant, and the other was to change the blurred direction while the blurred scale was invariant. These experiments could indicate the accuracy, robustness, and scope of the application of these methods.

Evaluation criteria of image restoration
3.1.1 Evaluation criteria of parameter extraction. The absolute error E_a is the absolute difference between the real value V_r and the measured value V_m, calculated by the formula E_a = |V_r − V_m|. The relative error of direction, E_dr, and the relative error of scale, E_sr, are then defined as the ratios of the corresponding absolute errors to the real values.

3.1.2 Quality evaluation of image restoration. (1) Improved assessment of gray mean grads
The purpose of image restoration is to improve the quality of the image. This paper uses the gray mean grads (GMG) to evaluate the restored image:

GMG = (1/(M·N)) Σ_i Σ_j sqrt( [g(i+1,j) − g(i,j)]² + [g(i,j+1) − g(i,j)]² )

where M and N, respectively, refer to the height and width of the image. The larger the value, the sharper the texture, which indicates better image quality. In order to better evaluate the image, we first calculate the GMG of the undegraded image (GMG_ui), the blurred image (GMG_bi), and the restored image (GMG_ri), and we then calculate the ratio of GMG_bi to GMG_ui, as well as the ratio of GMG_ri to GMG_ui.
Greater values indicate that the two images are closer.
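The GMG metric and its ratio can be computed in a few lines. This is a sketch following the gradient form given in the text; the boundary handling (averaging over the valid difference region) is our assumption.

```python
import numpy as np

def gray_mean_grads(img):
    """Gray mean grads (GMG): mean gradient magnitude over the image.
    Larger values indicate sharper texture."""
    g = np.asarray(img, dtype=np.float64)
    dx = g[1:, :-1] - g[:-1, :-1]      # g(i+1, j) - g(i, j)
    dy = g[:-1, 1:] - g[:-1, :-1]      # g(i, j+1) - g(i, j)
    return float(np.mean(np.sqrt(dx ** 2 + dy ** 2)))

def gmg_ratio(img, reference):
    """Ratio of an image's GMG to that of the undegraded reference
    (GMG_bi / GMG_ui or GMG_ri / GMG_ui); values nearer 1 are closer."""
    return gray_mean_grads(img) / gray_mean_grads(reference)
```

A flat image has GMG 0, and a unit-step ramp has GMG 1, so the metric behaves as a plain sharpness score.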
(3) Recognition rate. The traffic sign images restored by this algorithm are fed into a recognition system to test the effectiveness of the algorithm. The recognition rate is

R = N_c / N_s × 100%

where N_s represents the total number of test samples and N_c is the number of test samples that are recognized correctly.

Validation of parameter extraction
When the blurred direction is fixed and the scale changes from 2 pixels to 18 pixels, the results of the measurements obtained in this method are shown in Table 1.
In order to make the results more intuitive, the relative error of the parameters is shown in Fig 9. From Table 1 and the corresponding curve in Fig 9, it can be seen that when the direction is invariant, the direction measurements are more precise than the scale measurements; this is because it is hard to set the threshold and the coefficient K at the border extraction step when illumination changes. The accuracy of the direction measurement is low when the scale is less than 5 pixels; however, there is no need to restore such images because they are clear enough to recognize. The accuracy of the scale measurement is acceptable when the scale is between 5 and 12 pixels; when the scale is larger than 12 pixels, the error is too large to accept. Additional observations show that the width of the traffic sign border in the sharp image is about 10 pixels, so the algorithm is not suited to situations where the blur scale is much larger than the width of the border.
The blur scale was also fixed at 10 pixels while the direction was changed from 0° to 180°; the results of this measurement are shown in Table 2. To make the results more intuitive, the relative error of the parameters is shown in Fig 10. From Table 2 and Fig 10, it can be seen that the results are fairly accurate: the relative error of the direction measurement was below 2%, and that of the scale measurement was below 8%. Compared with the scale, the directions are measured more accurately and stably.

Validation of image restoration
Using the parameters measured by this method to restore the motion-blurred images, we obtained good restoration results. Fig 11 shows some of the blurred images and the corresponding restored images. We also compared our proposed method with two commonly used restoration methods: the first is the blind deconvolution method introduced by Jan Kotera et al. [11], and the second is the Lucy-Richardson method introduced by Manoj Kumar Singh et al. [13]. Neither method requires any information about the image, and the number of iterations was set to about 30 in the experiment. Fig 12 shows some of the restoration results obtained by methods [11] and [13], which yield slightly better visualization of these blurred images. Fig 12(a) shows the results obtained using the blind deconvolution method [11], and Fig 12(b) shows the results obtained using the Lucy-Richardson method [13].
From the restoration results, we can see that the proposed method yields the best restoration in terms of both overall visual effect and local visual details, with less noise and higher integrity. As such, our method is efficient for automatic detection systems. Most of the motion-blurred traffic sign images can be restored well, although some restorations generate stripes caused by the Wiener filter. Our method can also be used for motion-blurred images with different contrast levels.
In Fig 12, the method proposed by Jan Kotera et al. and the method proposed by Manoj Kumar Singh et al. both suffered from noise amplification and ringing effects, and the details of texture and edges were not clear enough for subsequent processing. In addition, they did not work particularly well on motion-blurred images, because most of the test data could not be deblurred in these experiments.
Some images were randomly selected from a total of 217 samples and their GMG ratios were calculated; the results are shown in Fig 13. In a few individual images, the texture information after restoration was worse than that of the blurred image, because direction and scale differences led to errors in parameter extraction. However, most of the traffic sign restorations improve the image quality, yielding clearer texture; this shows that the algorithm is both stable and reliable. Table 3 shows the LS values of the motion-blurred images and the restored results in Fig 12. In general, the proposed method obtained the best visual effect and the highest values on the assessment criteria; thus, its restoration result is better than that of the blind deconvolution and Lucy-Richardson methods.
The traffic sign images restored by this algorithm are recognized using the SVM-based recognition system employed to test the effectiveness of the algorithm. There are 217 samples in all, and the recognition rates of the motion-blurred traffic sign images and their corresponding restored images are shown in Table 4. The results show that motion-blurred traffic signs are hard to recognize: only 22 traffic signs were recognized out of a total of 217 images, a low recognition rate of 10.14%. However, following restoration, 191 traffic signs can be recognized, and the recognition rate increases to 88.02%. In real traffic sign data, about 10% of images are blurred by motion. Using this method, the recognition rate can be improved to about 77.84%. This demonstrates that the restoration algorithm is effective and that the restoration of motion-blurred images is necessary in image detection.

Conclusions
This paper proposed a new method to measure two important parameters of motion-blurred traffic signs, direction and scale, in the spatial domain. Our experiments determined that the results are accurate and the error rates acceptable. The method is robust and can reduce the impact of changing illumination on parameter extraction. Using the measured parameters to restore the motion-blurred traffic sign images, we obtained good results that meet the system's requirements for image recognition. The results illustrate that the method can deal with recognition problems associated with motion-blurred traffic sign images. Compared with methods based on the frequency domain, the impact of noise on parameter extraction is much smaller. In conclusion, the algorithm offers an advantage in traffic sign recognition: it can improve the performance of the DAS and help to improve automatic driving and road safety. In future work, we will continue to investigate this subject by providing a more detailed background of the problem, and we will work to improve the robustness of border extraction with more suitable features to reduce the effects of the environment.