Robust visual detection of brake-lights in front for commercialized dashboard camera

Abstract

The collision avoidance system (CAS) is an essential system for safe driving that alerts the driver or automatically applies the brakes in an expected vehicle collision situation. To realize this, an autonomous system that can quickly and precisely detect the brake-lights of the preceding vehicle is essential, and it should work well in various environments for safety reasons. Our proposed vision algorithm meets these objectives by focusing on simple color features rather than a learning algorithm with a high computational cost, since our target system is a real-time embedded device, i.e., a forward-facing dashboard camera. However, existing feature-based algorithms are vulnerable to ambient noise (noise problem) and cannot be flexibly applied to various environments (applicability problem). Therefore, our method is divided into two stages: rear-lights region detection using gamma correction to address the noise problem, and brake-lights detection using the HSV color space to address the applicability problem. (i) Rear-lights region detection: we confirm the presence of the vehicle in front and derive the rear-lights region, and use the non-linear mapping of gamma correction to make the detected region robust to noise. (ii) Brake-lights detection: from the detected rear-lights region, we extract color features using an HSV color range so that we can classify brake on and off conditions in various environments. Experimental results show that our algorithm overcomes the noise problem and the applicability problem in various environments.

Introduction

Whether it is used as a driver assistance system or as a component of autonomous driving, one of the most important factors for safe driving is the collision avoidance system (CAS) [1]. CAS recognizes a collision in advance and avoids it or mitigates the damage from it. Therefore, CAS must recognize the situation around the vehicle and infer information from it. When a collision is expected, the system should either send a warning to the driver or act autonomously without any driver input, i.e., by automatically applying the brakes [2]. In particular, CAS is important for rear-end collisions, one of the most frequent types of traffic accidents [3]. Advanced CAS detects and responds to danger in a rear-end collision situation based on distance information from distance-measuring sensors or on surrounding context information from vision sensors.

Distance-measuring sensors, e.g., radar and LiDAR, and their algorithms [4–6] demonstrate significantly high performance; however, they are too expensive to be commercialized [2]. More importantly, sensor-based methods have a disadvantage in rear-end collisions: they operate by estimating the distance to the vehicle in front [7]. Therefore, the distance to the vehicle in front must already be sufficiently short, or be closing rapidly, for a sensor-based CAS to recognize and respond to a rear-end collision situation in advance. This constraint makes it difficult for sensor-based methods to respond flexibly in rear-end collision situations. Furthermore, there is a possibility of missing the right moment when the system should have responded more quickly.

On the other hand, vision sensors installed in vehicles and their intelligent methods can see, detect, and distinguish objects directly, owing to state-of-the-art lane detection, vehicle detection, and object identification technologies in computer vision [8]. In particular, the vision-based method is much more suitable for rear-end collision situations than the sensor-based method because it does not depend on distance information; through the optical camera, it can acquire high-level information, such as the lane to which the vehicle in front belongs and whether the brakes of the vehicle in front are on or off. This high-level information can help predict the braking state of the vehicle in front more accurately and flexibly, allowing the vision-based method to cope with rear-end collision situations more efficiently than the sensor-based method.

In accordance with the above, we propose a vision-based algorithm to efficiently cope with rear-end collision situations that maximizes the strength of the vision-based approach. Various vision-based algorithms have been proposed. Some of these methods use machine learning or deep learning algorithms for the detection of the rear-lights region and brake-lights [9–11]. These methods perform the detection process by extracting spatial information from the input image. Although these learning algorithms show relatively good performance, they are difficult to use in real-time CAS because of their large computational cost [2], especially in the case of deep learning [1]. Additionally, learning-based methods require considerable training time along with the acquisition and annotation of images or videos to train the model. These burdens may be greater depending on the model size [12]. Therefore, we propose an algorithm based on basic features such as color that operates in real time and does not require model training (a feature-based method).

Feature-based methods implement rear-lights region detection and brake-lights detection algorithms based on simple color features or brightness intensity [13–19]. They first extract red color features from a specific color space to find the rear-lights region based on the characteristic that rear-lights are red. From the extracted color features, they conduct a verification process based on the symmetric characteristic of the rear-lights to find the optimal rear-lights pair [9, 20]. In the detected rear-lights region, the vehicle's brake operation is finally determined based on color features or brightness intensity. However, although these feature-based methods are portable and efficient, they are highly vulnerable to surrounding noise [10, 18, 21, 22]. Input images may contain significant noise depending on the intensity of light, surrounding objects, and weather conditions. In this case, it is challenging to extract features from the image. Feature-based detection may be effective at nighttime, when the rear-lights region and brake-lights of cars are clearly identifiable. However, ambient lights, such as streetlights, may act as noise even at nighttime, resulting in distorted results if proper preprocessing to eliminate noise fails. Verification processes for the optimal rear-lights region, such as the symmetry test and the size-shape test, may also be distorted if the features are noisy, because they are mainly conducted after selecting candidate objects with features extracted from the color space. In feature-based methods, controlling surrounding noise is important because performance depends on the effectiveness of the extracted features, but existing feature-based methods do not explore noise processing in depth beyond simple noise filtering or morphological operations. In addition, previous feature-based methods mainly evaluated performance at nighttime or daytime only. However, we argue that a method should be evaluated in more diverse environments to assess the practical applicability of the proposed rear-lights region detection and brake-lights detection algorithms. In summary, the existing feature-based algorithms are sensitive to the effects of ambient noise (noise problem) and cannot be flexibly applied to various environments (applicability problem).

Therefore, we propose additional techniques to enhance the noise robustness and environmental applicability of the algorithm. The overall algorithm is shown in Fig 1. First, an ROI is established using a lane detection algorithm to consider only the vehicle in front in the same lane. Setting an appropriate ROI not only reduces unnecessary noise in the input but can also contribute to performance improvement by focusing attention on the key parts. Second, rear-lights region detection is performed to check the presence of a vehicle in front and to find the optimal rear-lights pair for brake-lights detection. In this process, we use gamma correction for noise reduction. The non-linear mapping of gamma correction ensures consistent rear-lights region detection performance in various environments by reducing the effect of noise and making the key object stand out. Finally, based on the detected rear-lights region, the algorithm detects and interprets the preceding vehicle's brake-lights to determine whether the brakes are operating, to prevent rear-end collisions. In this process, classification is based on an HSV color range. The HSV color range we found from the training samples provides a global decision threshold that can be applied in various environments.

Our main contributions can be summarized as follows:

  1. We design the algorithm based on simple features to avoid the high computational cost of learning-based algorithms.
  2. We propose a noise reduction technique based on gamma correction to solve the noise problem faced by existing feature-based algorithms. The non-linear mapping of gamma correction reduces noise in color channels and makes the key object stand out.
  3. We propose HSV color range filtering to improve the applicability of feature-based brake-lights detection. The filtering range found from a small number of training samples can be applied in various environments, including daytime and nighttime.
  4. Extensive experimental results show that our proposed algorithm effectively mitigates the noise problem and the applicability problem. Our evaluation scenarios include daytime and nighttime as well as cloudy and rainy conditions. Since each environment has different noise and lighting conditions, we demonstrate the high practical applicability of the proposed rear-lights region detection and brake-lights detection methods as well as the noise reduction technique.

Related work

Rear-lights region detection

In most cases, rear-lights detection is performed first for vehicle detection [9, 20–28] or brake-lights detection [10, 11, 13–19, 29]. Among them, in feature-based methods, rear-lights regions are mainly detected by extracting color features from a specific color space (i.e., RGB [9, 14, 15, 17, 20, 27], YCrCb [10], L*a*b* [22], HSV [21, 23, 28], and YUV [19]) based on their characteristic red color. Therefore, some studies focus on detecting rear-lights at nighttime, when red color features can stand out against a dark background [14, 17, 20, 21, 23, 27, 28]. After color space conversion of the given image, rear-lights candidate regions are extracted by applying thresholding to the channel containing red color components. Morphological operations [17, 23, 28] or noise filtering [14, 21] may be conducted to remove surrounding noise. Then, verification methods such as the symmetry test [20, 21, 23] and the aspect ratio test [17, 21] are used to determine the optimal rear-lights region based on the symmetrical characteristics of the rear-lights. This process is almost the same when detecting the rear-lights region at daytime, when there is more surrounding noise [9, 10, 15, 19, 22]. Recently, deep learning or machine learning based methods have been proposed for rear-lights region detection. Some of these learning-based methods introduce trained models as part of rear-lights region detection [13, 16, 18, 24, 29, 30]. They use a learning model for vehicle detection to improve the accuracy of rear-lights region detection. An Adaboost classifier [13, 18], a cascade classifier [24], or a deformable part model (DPM) [29] is trained to detect the rear of vehicles from given images. The rear-lights region is then searched in a space confined to the rear of the vehicle, making detection easier. Conversely, Adaboost classifiers are also used to determine optimal pairs for rear-lights candidate regions found in a feature-based manner [16, 30]. In some methods, the entire process of rear-lights region detection is conducted by the learning model [11, 25]. They either use different models (i.e., R-CNN [31] and FCN [32]) to perform vehicle detection and rear-lights region detection [11], or use a single model (i.e., CNN-LSTM [33, 34]) to implement a one-stage rear-lights region detection process [25].

Brake-lights detection

Brake-lights detection refers to methods that determine whether the brakes of a vehicle are operating, which is more challenging than simply detecting the rear-lights region. In feature-based methods, brake-lights detection is conducted following rear-lights region detection [13–19]. Color features are also useful for brake-lights detection [14, 18, 35, 36]. These methods determine whether the vehicle's brakes operate based on the red color intensity of the rear-lights region in a defined color space. Besides color features, brightness intensity can also be used for brake-lights detection [13, 15, 19, 35]. Since the brightness of the rear-lights region is generally greater in brake-on situations than in brake-off situations, these methods compare the brightness intensity of the rear-lights region to a certain threshold to determine whether the vehicle's brakes operate. In addition, brake-lights detection is also conducted based on the ROI size of the rear-lights region [17] or the frequency domain [16]. A number of learning-based methods have been proposed for brake-lights detection [10, 11, 29, 37–42]. Some methods extract color features from the derived rear-lights region or vehicle region and train a classifier that predicts braking conditions with the extracted features [10, 11, 29, 38]. By fine-tuning pre-trained object detectors (i.e., YOLO [43, 44], Mask R-CNN [45]), others recognize vehicle braking conditions directly from a given image [37, 39–42].

Discussion

As mentioned earlier, we propose feature-based rear-lights region detection and brake-lights detection algorithms. Our rear-lights region detection algorithm is similar to previous feature-based methods [9, 10, 13, 14, 17, 19–23] in that it finds candidate rear-lights regions through color channel thresholding. However, those methods rarely attend to noise reduction and only apply simple filtering or morphological operations to color channels. In contrast, we apply a noise reduction technique based on gamma correction before thresholding to reduce noise in the color channels and improve the accuracy of rear-lights region detection. We use the symmetry test [20, 21, 23] and the aspect ratio test [17, 21] to find the optimal rear-lights pair, but it should be noted that we conduct verification on rear-lights candidate regions from which noisy blobs have been effectively removed. In addition, our brake-lights detection method is similar to previous methods [14, 18, 35, 36] in that it uses color features, but we use HSV color range filtering to improve the applicability of the algorithm. Color features extracted by HSV color range filtering can be used as global decision values under various environmental conditions, unlike previous methods limited to nighttime [14, 18] or daytime [13, 35].

Proposed method

This section presents an algorithm that detects the rear-lights region and brake-lights in a given image. The algorithm is divided into three steps. First, an ROI based on lane detection is set to match the rear-end collision situation. Second, noise-robust detection of the rear-lights region is performed using gamma correction and the L*a*b* color space. Finally, brake-lights detection is performed based on color features extracted from the HSV color space.

In accordance with the Bioethics and Safety Act of the Ministry of Health and Welfare (MOHW), Korea, this study does not collect or record personally identifiable information, does not experiment with human subjects, and uses only observation equipment that does not cause physical changes; hence, this study is not subject to IRB review.

ROI setting based on lane detection

Only information from the vehicle in front should be considered to deal with rear-end collision situations. Therefore, the algorithm should focus only on the information of the car in front by setting an appropriate ROI. Furthermore, setting the ROI can make the algorithm much more efficient by reducing noise and excluding unnecessary processing [13]. Some approaches set the ROI arbitrarily in this process. However, the position of the preceding vehicle may vary from frame to frame depending on road conditions, and in this case, applying the same ROI to each frame is suboptimal. Therefore, we generate the ROI based on the lane detection algorithm to set the ROI adaptive to the road condition of the input frame. The lane detection algorithm we used can simply and efficiently derive lanes on the actual road through the forward monocular camera of the vehicle using techniques such as edge detection, orientation filtering, and connected component analysis [46]. Since lane detection is out of our main research scope, please see the reference for more details on the lane detection algorithm we used. In addition to the algorithm we referenced, other lane detection algorithms can also be used for the same purpose.

Using the lane detection results, we set up an adaptive ROI to consider only the cars ahead in the same lane. Fig 2 shows the entire process. As shown in Fig 2(b), the lane detection algorithm was applied to the input image and two suitable lanes were found. The transformation shown in Fig 2(a) was then applied to the two derived lanes. The left and right lanes are shifted by ϵx and extended to form two refined lines. In addition, a horizontal line is added at a position ϵy above the bottom of the image. Finally, the three lines form a triangle, which becomes our ROI mask for extracting only the information of the current lane. We set ϵx = 30 and ϵy = 0.4 * wh, where wh is the height of the input image. The final result after applying the generated triangular ROI mask is shown in Fig 2(c). This process improves the efficiency of the algorithm by filtering out redundant information and allowing only information about the preceding vehicle to be considered. In addition, since we use lane detection to generate ROIs that are adaptive to road conditions, our ROI setting is flexible with respect to input frames and various environments.
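As a concrete illustration, the sketch below builds the triangular ROI mask with OpenCV's Python interface (the interface noted in the experimental section). The lane-line representation, the shift direction, and the apex construction are assumptions made for illustration; they depend on the lane detector that is plugged in, and are not the paper's exact implementation.

```python
import cv2
import numpy as np

def lane_roi_mask(frame, left_lane, right_lane, eps_x=30, eps_y_ratio=0.4):
    """Sketch of the triangular ROI mask built from two detected lane lines.

    Each lane is assumed to be given as two image points ((x1, y1), (x2, y2)).
    The two lanes are shifted by eps_x, intersected with a horizontal line
    eps_y above the image bottom, and closed by their mutual intersection.
    """
    h, w = frame.shape[:2]
    y_base = h - int(eps_y_ratio * h)          # horizontal line eps_y above the bottom

    def line_params(lane, shift):
        (x1, y1), (x2, y2) = lane
        m = (x2 - x1) / (y2 - y1 + 1e-9)       # x = m*y + b in image coordinates
        return m, (x1 - m * y1) + shift

    ml, bl = line_params(left_lane, -eps_x)    # left lane shifted by eps_x
    mr, br = line_params(right_lane, +eps_x)   # right lane shifted by eps_x

    # Base corners: intersections of the shifted lanes with the horizontal line.
    base_l = (int(ml * y_base + bl), y_base)
    base_r = (int(mr * y_base + br), y_base)
    # Apex: intersection of the two shifted (extended) lane lines.
    y_apex = (br - bl) / (ml - mr + 1e-9)
    apex = (int(ml * y_apex + bl), int(y_apex))

    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [np.array([base_l, base_r, apex], dtype=np.int32)], 255)
    return mask
```

The returned mask can then be applied with cv2.bitwise_and(frame, frame, mask=mask) to keep only the current-lane region, as in Fig 2(c).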

Fig 2. ROI setting result based on lane detection algorithm.

(a) Transformation method. (b) Result of applying the lane detection algorithm. (c) Result of masking operation based on the derived ROI.

https://doi.org/10.1371/journal.pone.0289700.g002

Rear-lights region detection

Detecting the rear-lights region is essential for brake-lights detection in the feature-based method. Therefore, we localize the rear-lights region based on a simple color feature, but introduce a novel method to effectively remove the surrounding noise.

Color space.

Red color features have mainly been used in several studies for rear-lights region detection, focusing on the red color of rear-lights. Accordingly, various color spaces such as RGB [9, 14, 17, 29], YCrCb [10], L*a*b* [13, 22], HSV [21], and YUV [19] have been used. Among them, the L*a*b* color space can approximately match the color differences that the human eye can detect [22]. Additionally, color features can be extracted less sensitively to changes in light because the L* channel, which contains the luminance information, is separated from the a* and b* channels, which represent the color components [13]. Therefore, exploiting the red color of rear-lights, we derive the rear-lights region of the vehicle using the a* channel of the L*a*b* color space, which represents the red and green components.

Gamma correction and threshold method.

After converting the given image in Fig 3(a) into the L*a*b* color space, the rear-lights region of the vehicle has high pixel intensity in the a* channel owing to its strong red component (see Fig 3(b)). Therefore, we expect that the rear-lights region of the vehicle can be effectively derived if thresholding is applied to the a* channel. We use the Otsu threshold method, which finds the optimal threshold value that minimizes the intra-class variance of the two groups, that is, maximizes the inter-class variance of the two groups [47]. The biggest advantage of the Otsu threshold method is that it automatically finds the optimal threshold value that can distinguish the background from the object in a given image [10]. It is highly preferable to set an adaptive threshold value for each image rather than a global threshold value because the rear-lights region must be extracted in various environments, such as daytime or nighttime. The result of applying thresholding to the a* channel is shown in Fig 3(d). Contrary to our expectation, the binary image created as a result of thresholding contained a lot of noise and, moreover, failed to extract the rear-lights region. This is because the rear-lights region in the a* channel has strong pixel intensity, but several surrounding noise regions, including sunlight and background, also have sufficiently high pixel intensity. This can also be confirmed in Fig 4(a), which shows the histogram of the original a* channel and the threshold value after applying the Otsu threshold method. Based on the threshold value indicated by the red line in the histogram, the area on the left was considered background by the Otsu method, and the area on the right was considered the object to be extracted. Most of the values are concentrated around a pixel value of 125, and in this case the Otsu threshold method does not behave as we expect. A dense pixel distribution causes the inter-class variance to be small at any threshold value, blurring the boundary between the target object and noise blobs. For the Otsu threshold method to distinguish the two groups of pixels well, a clear difference between the two groups must exist in the pixel distribution (i.e., a bimodal distribution). Therefore, additional preprocessing was required for the threshold algorithm to effectively extract the rear-lights region.
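For concreteness, a minimal sketch of this baseline step is given below, using OpenCV's Python interface (the interface noted in the experimental section). The helper name and its direct use of the full frame are illustrative assumptions; the refined version with gamma correction follows in the next subsection.

```python
import cv2

def a_channel_otsu(frame_bgr):
    """Baseline: Otsu thresholding applied directly to the a* channel.

    Reproduces the step described above (Fig 3(b) and 3(d)); without any
    noise reduction, the resulting binary map still contains many noise blobs.
    """
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]                     # a*: green (low) to red (high)
    _, binary = cv2.threshold(a_channel, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return a_channel, binary
```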

Fig 3. Result of color space conversion and thresholding.

(a) Original image. (b) a* channel after converting the original image into L*a*b* color space. (c) Result of applying the gamma correction with γ = 10 to the original a* channel. (d) Result of applying threshold method directly to the a* channel of the image. (e) Result of applying threshold method to a* channel to which gamma correction is applied.

https://doi.org/10.1371/journal.pone.0289700.g003

Fig 4. a* channel histogram and threshold value when the Otsu threshold method is applied.

(a) When no preprocessing is applied. (b) When gamma correction is applied.

https://doi.org/10.1371/journal.pone.0289700.g004

We propose a noise reduction technique based on gamma correction so that the pixel intensities of the target rear-lights region and the surrounding noise region can be clearly distinguished. Gamma correction is an image contrast enhancement technique that improves the overall brightness and contrast of an image by converting its existing pixel values through a non-linear mapping parameterized by γ [48]. The mapping function is as follows:

x′ = T(x) = 255 × (x / 255)^γ   (1)

In Eq (1), x is the original pixel value, and x′ is the new pixel value converted by the gamma correction mapping function T. The change of pixel value according to γ can be seen in Fig 5. In general, gamma correction increases image brightness and contrast when 0 < γ < 1, as shown in Fig 5(a). The non-linear mapping changes lower pixel values more strongly, and as γ approaches 0, the change occurs more rapidly over a wider range of pixel values. In contrast, the image becomes darker when γ > 1 (see Fig 5(b)); similarly, a sharp decrease in pixel value occurs over a wider range of pixel values as the gamma value increases.

Fig 5. Gamma correction mapping curves.

(a) Gamma correction mapping curve when 0 < γ ≤ 1. (b) Gamma correction mapping curve when 1 ≤ γ.

https://doi.org/10.1371/journal.pone.0289700.g005

However, it should be noted that, because of the non-linear mapping of gamma correction, high pixel values do not decrease significantly even as γ increases. In other words, if γ is greater than 1, gamma correction maps most pixel values close to 0, and only the few pixels with high values remain close to their originals. This means that gamma correction can be used to accentuate target objects with high pixel intensity while significantly reducing unnecessary noise. This effect can be seen in Fig 3(c) and 3(e). Fig 3(c) shows the result of converting the given image (see Fig 3(a)) into the L*a*b* color space and applying gamma correction with γ = 10 to the original a* channel. Unlike before, the target rear-lights region (high intensity) is more prominent, with reduced surrounding noise (low intensity). Fig 3(e) shows the result of applying the Otsu threshold method to this refined a* channel. Unlike when thresholding was performed without gamma correction, the threshold algorithm distinguished the object to be extracted from the background much more easily. Fig 4(b) shows the histogram after applying gamma correction to the a* channel. Compared to Fig 4(a), most of the pixel values are distributed close to 0, and the remaining pixels are quite far from the dense point. Since the two groups are more clearly separated in the distribution, the Otsu threshold method can easily find a threshold value that separates the target object from the noise region (only some important and detailed elements were classified as objects).

In summary, we perform noise reduction by applying gamma correction to the a* channel, and then conduct thresholding on the refined channel map to extract rear-lights candidate regions.
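A minimal sketch of this refined step is shown below, again assuming OpenCV's Python interface. The gamma value of 10 follows Fig 3(c); the helper name and the lookup-table formulation of Eq (1) are illustrative choices, not the paper's exact implementation.

```python
import cv2
import numpy as np

def refined_a_channel_otsu(frame_bgr, gamma=10.0):
    """Noise-robust extraction: gamma correction on the a* channel, then Otsu.

    With gamma > 1 the non-linear mapping of Eq (1) pushes everything except
    the strongest red responses toward 0, so Otsu can separate the rear-lights
    blobs from the background much more cleanly than in the baseline above.
    """
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]

    # Eq (1) applied as a lookup table over the 256 possible intensities.
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    a_refined = lut[a_channel]

    _, binary = cv2.threshold(a_refined, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return a_refined, binary
```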

Symmetry test.

After deriving rear-lights candidate regions, unnecessary regions were masked out using the triangular mask generated from lane detection. Additionally, a morphological operation was performed to make the remaining pixels appear more clearly. The results of this process are presented in Fig 6(a) and 6(b). However, several unnecessary regions remained after this simple filtering. Therefore, it was necessary to find the optimal rear-lights region among the remaining areas through a verification process. Hence, we used the property that the two rear-lights of a vehicle are symmetrical. Some studies found the appropriate pair of rear-lights by comparing the pixel-level similarity of each region [15, 19, 23]. Inspired by these methods, we find the optimal rear-lights pair by comparing the correlation coefficient between each pair of candidate regions. The correlation coefficient of two candidate regions can be obtained as follows:

sim(C_i, C_j) = Σ_{x,y} (C_i(x, y) − C̄_i)(C_j(x, y) − C̄_j) / √( Σ_{x,y} (C_i(x, y) − C̄_i)² × Σ_{x,y} (C_j(x, y) − C̄_j)² )   (2)

C̄_i = (1 / N) Σ_{x,y} C_i(x, y)   (3)

C̄_j = (1 / N) Σ_{x,y} C_j(x, y)   (4)

In Eq (2), sim(C_i, C_j) denotes the degree of similarity between C_i and C_j based on the correlation coefficient, where C_i and C_j denote candidate regions in the original image. In Eqs (3) and (4), C̄_i and C̄_j represent the mean pixel values of each region, and N is the number of pixels in a region. An example of calculating the similarity for each pair of candidate regions is shown in Fig 7. In Fig 7, C1 and C3 showed the highest similarity at 0.97, and the selected C1 and C3 were the rear-lights pair we wanted to extract. A simple aspect-ratio filter was also applied to prevent mismatching: a pair is not considered if the heights and widths of the candidate regions differ significantly [9, 13, 15, 17]. After applying the symmetry test, the final selection of rear-lights pairs and the detection results are presented in Fig 6(c) and 6(d).
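A sketch of this pairing step under Eq (2) is given below. Comparing grayscale crops of the candidate bounding boxes, resizing the second patch to the first patch's size, and the aspect-ratio limit max_ratio are assumptions made for illustration.

```python
import cv2
import numpy as np

def region_similarity(patch_i, patch_j):
    """Pixel-level similarity of two candidate regions (Eq (2))."""
    patch_j = cv2.resize(patch_j, (patch_i.shape[1], patch_i.shape[0]))
    ci = patch_i.astype(np.float64) - patch_i.mean()     # C_i - mean(C_i)
    cj = patch_j.astype(np.float64) - patch_j.mean()     # C_j - mean(C_j)
    denom = np.sqrt((ci ** 2).sum() * (cj ** 2).sum())
    return float((ci * cj).sum() / denom) if denom else 0.0

def best_pair(candidates, max_ratio=1.5):
    """Pick the candidate pair with the highest similarity (symmetry test).

    Pairs whose heights or widths differ by more than max_ratio are discarded,
    mirroring the aspect-ratio filtering step.
    """
    best, best_sim = None, -1.0
    for a in range(len(candidates)):
        for b in range(a + 1, len(candidates)):
            pa, pb = candidates[a], candidates[b]
            sizes = [(pa.shape[0], pb.shape[0]), (pa.shape[1], pb.shape[1])]
            if not all(max(x, y) / max(min(x, y), 1) <= max_ratio for x, y in sizes):
                continue
            s = region_similarity(pa, pb)
            if s > best_sim:
                best, best_sim = (a, b), s
    return best, best_sim
```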

Fig 6. Symmetry test result.

(a) Result of applying the masking operation and the morphological operation to the binary image to which thresholding is applied. (b) Result of drawing bounding boxes based on the remaining pixels. (c) Result of finding the optimal rear-lights pair among the candidates through the symmetry test. (d) The final result of rear-lights region detection.

https://doi.org/10.1371/journal.pone.0289700.g006

Fig 7. Calculation of pixel-level similarity between two candidate regions using correlation coefficients.

https://doi.org/10.1371/journal.pone.0289700.g007

In summary, our rear-lights region detection algorithm consists of color space conversion, noise reduction using gamma correction, thresholding to extract rear-lights candidate regions, and symmetry test to find optimal rear-lights pairs. Specifically, our proposed noise reduction technique makes our algorithm robust to noise, enabling it to effectively derive the rear-lights region of preceding vehicles in various environments.

Brake-lights detection

We successfully extracted the rear-lights region of the front vehicle via the method described above. We now need to find a characteristic that can distinguish the brake on and off conditions based on the derived rear-lights region. Generally, the brightness is greater when the brake is on than when it is off. Therefore, one might expect that the on and off conditions can be distinguished based on brightness intensity [15, 19]. Accordingly, we classified the brake on and off conditions by converting the derived rear-lights regions into grayscale images and comparing the average intensities. The results are shown in Fig 8. In the first row of Fig 8, there was no significant difference between the grayscale images of the brake on and off conditions at daytime. This is because various noise lights, including sunlight, reflected from the vehicle in the daytime cause the brake off condition to have a brightness intensity similar to the brake on condition in the grayscale image. Furthermore, the first row of Fig 8 shows that it is impossible to distinguish the brake on condition from the brake off condition at nighttime using brightness intensity, because the rear-lights regions of the vehicle are lit even in the brake off condition. Therefore, brightness intensity is not an appropriate indicator to distinguish between brake on and off conditions in various environments.

Fig 8. Rear-Lights region detection results and feature extraction of detected rear-lights regions.

From the top, the first row is a grayscale image of rear-lights regions, and the second row is the rear-lights regions image with HSV color range filtering applied. From the left, the first column is brake on condition at daytime, the second column is brake off condition at daytime, the third column is brake on condition at nighttime, and the fourth column is brake off condition at nighttime.

https://doi.org/10.1371/journal.pone.0289700.g008

This study focuses on color features to classify brake on and off conditions. Although the method using brightness intensity could not properly distinguish the brake on and off conditions, it was clear that the two conditions have different characteristics. We checked whether there are unique color features that can be attributed only to the brake on condition, unlike the brake off condition, over various color ranges. The HSV color space was used to adjust the color range. The HSV color space better represents the way humans perceive colors. It consists of a cone-shaped space created by three components: H (Hue), indicating the type of color; S (Saturation), indicating the vividness of color; and V (Value), indicating the brightness of color [21]. We compared the mean of each HSV channel in the brake on and off conditions using random samples to extract the color features that appear mainly in the brake on condition. The result is shown in Fig 9, where the X-, Y-, and Z-axes represent the means of the H, S, and V channels, respectively. The mean of each HSV channel differed between the brake on and off conditions. The mean V channel value in the brake on condition was highly concentrated between 200 and 250, while the mean V channel value in the brake off condition was sparsely distributed. Furthermore, the mean H channel value in the brake on condition was concentrated between 0 and 50, whereas in the brake off condition it was concentrated at larger values close to 150. The mean S channel values in the two conditions did not show a significant difference; both were concentrated above 100. As shown in Fig 9(b), these differences are more evident when plotting the maximum density point of the distribution for the brake on and off conditions. Based on these comparisons, we set the hue range to 0–30, the saturation range to 130–255, and the value range to 220–250 to extract the color features of the brake on condition.
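In code, this filtering reduces to a single range check; a minimal sketch is shown below. It assumes the ranges above are expressed on OpenCV's HSV scales (H in [0, 179], S and V in [0, 255]), which is consistent with the H value near 150 observed for the brake off condition, but this mapping is an assumption rather than a stated detail of the paper.

```python
import cv2
import numpy as np

# Brake-on HSV ranges from the analysis above: H 0-30, S 130-255, V 220-250.
BRAKE_LOWER = np.array([0, 130, 220], dtype=np.uint8)
BRAKE_UPPER = np.array([30, 255, 250], dtype=np.uint8)

def brake_color_filter(rear_lights_bgr):
    """Keep only the pixels of the rear-lights region inside the brake-on range."""
    hsv = cv2.cvtColor(rear_lights_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BRAKE_LOWER, BRAKE_UPPER)
    return cv2.bitwise_and(hsv, hsv, mask=mask)
```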

Fig 9. Comparing the mean of HSV channels under brake on and off conditions.

(a) Mean of each HSV color channel under brake on and off conditions. (b) Maximum density point for each distribution of brake on and off conditions.

https://doi.org/10.1371/journal.pone.0289700.g009

The result of applying HSV color range filtering is shown in the second row of Fig 8. The difference between the brake on and off conditions became apparent with the color features extracted using the HSV color space, unlike when brightness intensity was used. Color features within the specific HSV color range, between bright orange and red, were clearly visible in the brake on condition but not in the brake off condition. These results also resolve the problems observed when using brightness intensity: noise such as sunlight in the daytime and the difficulty of distinguishing brake on and off conditions at nighttime. In addition, the method using the HSV color range does not extract any features even when the turn signal of the vehicle in front is on, as shown in Fig 10.

Fig 10. Rear-lights regions image with HSV color range filtering applied when turn signals are on.

https://doi.org/10.1371/journal.pone.0289700.g010

The difference between the brake on and off conditions was verified by averaging the remaining S and V channels of the color-range-filtered rear-lights region, without additional conversion or processing. Most of the mean values were close to zero or small in the brake off condition because of the filtering, whereas the mean values were larger in the brake on condition. Additionally, the distribution of the mean value of each condition was spread relatively more evenly than when brightness intensity was used. The sum of the means of the S and V channels was used as the final decision value, as shown in Fig 11. The decision value and the final classification process are as follows:

d = μ_S + μ_V   (5)

B = brake on if d ≥ τ, brake off if d < τ   (6)

In Eq (5), d denotes the decision value used to determine the brake on and off conditions, and μ_S and μ_V represent the means of the S and V channels, respectively. In Eq (6), B indicates whether the brake is on or off, and τ denotes the threshold value. If the decision value d is greater than or equal to τ, the condition is classified as brake on; if it is less than τ, it is classified as brake off.
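Putting Eqs (5) and (6) together, the classification step can be sketched as follows. The helper reuses the HSV bounds found above; the threshold τ = 8 is the value reported in the experimental section, and the means are taken over the whole filtered region (zeroed pixels included), which is what keeps d small in the brake off condition. The function name and argument layout are illustrative.

```python
import cv2
import numpy as np

TAU = 8  # decision threshold reported in the experiments

def classify_brake(rear_lights_bgr, lower, upper, tau=TAU):
    """Classify the brake condition of the detected rear-lights region (Eqs (5)-(6))."""
    hsv = cv2.cvtColor(rear_lights_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)                 # HSV color range filtering
    filtered = cv2.bitwise_and(hsv, hsv, mask=mask)

    mu_s = float(filtered[:, :, 1].mean())                # mean of the S channel
    mu_v = float(filtered[:, :, 2].mean())                # mean of the V channel
    d = mu_s + mu_v                                       # Eq (5)
    return "on" if d >= tau else "off"                    # Eq (6)
```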

Fig 11. Sum of mean S channel value and V channel value in the brake on and off conditions.

https://doi.org/10.1371/journal.pone.0289700.g011

In summary, our brake-lights detection algorithm converts the derived rear-lights region to HSV color space, applies color range filtering, and then computes the decision value from the remaining pixels. The calculated decision value is compared with the threshold value to finally classify brake on or off conditions. The color range and pre-defined threshold value were set considering applicability in various environments.

Experimental results

In this section, we evaluate the performance of the proposed method for rear-lights region detection and brake-lights detection, respectively. We used an INAVI V300 as our single monocular dashboard camera, which has a 1280×720 resolution and 30 FPS. The dashboard camera was mounted at the top center of the test vehicle's front windshield. The test vehicle was a Volkswagen Tiguan Allspace, with a length of 4.73 m, a width of 1.84 m, and a height of 1.67 m. We collected videos from urban roads and city highways in Seoul, Korea, from April 13 to July 23, 2021. In total, 10,991 videos were collected via the forward-facing dashboard camera for the evaluation. Each video was approximately 4 to 5 minutes long and consisted of about 1,800 frames. The collected videos varied in terms of the type of the preceding vehicle, the distance between the vehicles, and the lighting condition; therefore, the collected data were appropriate for evaluating the method in a real driving environment. We classified the collected videos into four environmental categories (day, night, cloudy, and rainy) to evaluate them in various environments. To evaluate rear-lights region detection performance, we selected three videos from each category as evaluation videos. It would be meaningless to evaluate the performance of the rear-lights region detection algorithm if no car is present in front in the evaluation video, such as on a quiet road or when the cars in front constantly change lanes. Therefore, for reliable evaluation, we manually selected videos in which a car is present in front throughout the entire video. To evaluate brake-lights detection performance, we extracted a total of 12,000 frames from the collected videos: 4,000, 4,000, 2,000, and 2,000 frames for the day, night, cloudy, and rainy environments, respectively. Similarly, we selected frames with a car present in front in the same lane and within an appropriate distance range for reliable evaluation of brake-lights detection performance. In all environmental categories, the ratio of brake on and off conditions was equal. We divided the extracted 12,000 frames into 8,600 training frames (70%) and 3,400 test frames (30%). The training data were used when model training or optimal threshold search was required, and all algorithms, including ours, were evaluated on the test data.

We conducted a comparative evaluation with three existing methods to further explore the effectiveness of our method. The first and second methods are similar to ours in that they extract rear-lights candidate regions through thresholding in a color channel of the input image. However, the first method uses FRST to find radially symmetric blobs and performs brake-lights detection based on the number of remaining pixels [13] (indicated as FR). The second method performs rear-lights region detection through a horizontal-vertical peak value verification process and performs brake-lights detection by training an SVM [49] on six features extracted from the derived rear-lights region [10] (indicated as HS). The third method performs rear-lights and brake-lights detection simultaneously based on a deep learning object detection network called YOLOv3-tiny [43], and uses an additional output layer, a spatial pyramid pooling (SPP) layer, and focal loss to improve performance [37] (indicated as YO).

Results of rear-lights region detection

First, we evaluated the performance of the rear-lights region detection algorithm. We used the detection rate (DR) as an evaluation metric, following the standard in the literature [9, 13, 14, 23]. DR is calculated for the selected evaluation videos and measures how well the algorithm detects the rear-lights region pair present in each frame of a given video. Specifically, DR refers to the percentage of rear-lights regions detected by the algorithm out of all ground-truth rear-lights regions present in a given video. We considered each of the left and right rear-lights regions as one ground truth; hence, there were a total of two ground truths in one frame. We used DR to evaluate our rear-lights region detection algorithm on a given video for each environmental category.

The evaluation results are presented in Table 1. Our rear-lights region detection algorithm showed quite high performance in all scenarios. In particular, the total DR was as high as 93% and 92% in the day and cloudy scenarios, respectively. Despite the severe surrounding noise in the day scenario and the low visibility in the cloudy scenario, the experimental results show that our algorithm is robust to such noise and environmental changes. Additionally, our algorithm showed a total DR of 89% even in the night scenario, with strong light noise from surrounding vehicles and low visibility, indicating that it is easily applicable at nighttime and has strong noise robustness. Finally, our algorithm showed a total DR of 87% in the rainy scenario. The rainy scenario has worse visibility because of raindrops and the vehicle's wiper operation; however, our algorithm remains robust against such difficulties.

Table 1. Rear-lights region detection algorithm evaluation result.

https://doi.org/10.1371/journal.pone.0289700.t001

The main reasons for failure of rear-lights region detection are presented in Fig 12. Fig 12(a) shows a situation where rear-lights region detection fails because the vehicle in front is very far away. When the distance to the vehicle in front was very large, the algorithm found it very difficult to extract the red color features present in the given frame. Even if they were extracted, they would have been considered simple noise because they were quite small. However, the algorithm extracted the rear-lights region well at an appropriate distance, as long as the vehicle in front was not very far away. Fig 12(b) shows a situation where rear-lights region detection failed due to raindrops on the windshield in the rainy scenario. In this case, the rear-lights regions of the front vehicle were blurred, causing the algorithm to fail detection or to extract only one of the rear-lights regions. This situation became more pronounced on days with heavy rain. However, the algorithm successfully extracted the rear-lights regions again after a few frames, once the raindrops had run off or the wiper had cleared them.

Fig 12. Rear-lights region detection failure cases.

(a) Failed to detect rear-lights region due to the distance from the vehicle in front. (b) Failed to detect rear-lights region due to raindrops.

https://doi.org/10.1371/journal.pone.0289700.g012

The comparative evaluation results are presented in Table 2. Compared to our method, the FR and HS methods showed significantly lower performance. This is because both the FR and HS methods conduct vehicle detection before rear-lights region detection. In both methods, a compact image with a clear rear view of the vehicle is extracted from the raw image as a result of vehicle detection, and rear-lights region and brake-lights detection is performed on the refined image. This additional process effectively removes surrounding noise, but it requires model training for vehicle detection and increases the computational cost. Therefore, although the two methods performed well on refined images, they did not work well on noisy raw images such as the data we collected. In addition, we evaluated the FR and HS methods with the gamma correction process for noise reduction used in our algorithm added (indicated as FR+GC and HS+GC). As shown in Table 2, the FR and HS methods with gamma correction improved significantly in performance. This means that the gamma correction process can reduce noise and increase the performance of rear-lights region detection; in other words, it has the effect of refining the raw image without an additional vehicle detection process. Finally, the YO method showed high performance similar to ours, especially in the night scenario. However, our algorithm is more efficient considering that YO uses a deep learning model, which requires a large amount of training data and time and has a high computational cost.

Table 2. Comparative evaluation result of rear-lights region detection algorithm.

https://doi.org/10.1371/journal.pone.0289700.t002

Examples of rear-lights region detection results for all methods are shown in Fig 13. Compared with ours, the thresholded images of the FR and HS methods contained a lot of noise, and as a result, rear-lights region detection was not performed successfully. On the other hand, in the FR and HS methods with gamma correction, the noise in the thresholded images is significantly reduced, and the rear-lights region detection results are also improved. Nevertheless, the FR+GC and HS+GC methods failed to derive the correct rear-lights regions in the night and rainy scenarios, even with noise levels similar to ours. This seems to be due to the difference in the method of finding the optimal rear-lights pair among the remaining candidate regions. Finally, in the case of the YO method, detection was generally successful. However, as seen in the night scenario, detection often failed when the distance from the preceding vehicle increased.

Fig 13. Comparative evaluation result of rear-lights region detection algorithm.

https://doi.org/10.1371/journal.pone.0289700.g013

In summary, comparative evaluation results in Table 2 and Fig 13 show that the gamma correction we used was effective in improving the rear-lights region detection performance by reducing the noise in the input image. Additionally, the results prove that our method showed good rear-lights region detection performance in a situation similar to the real driving environment.

Results of brake-lights detection

We then evaluated the performance of the proposed brake-lights detection algorithm, which determines whether the preceding vehicle in a given frame is in the brake on or off condition. We evaluated our brake-lights detection algorithm on the selected test data. In addition, we compared the performance of our method with the three existing methods used in the evaluation of rear-lights region detection, considering only their brake-lights detection stages.

For evaluation metrics, the recall score and precision score were used, considering the two different functions of CAS. The formulas for the recall and precision scores are as follows:

Recall = TP / (TP + FN)   (7)

Precision = TP / (TP + FP)   (8)

In Eqs (7) and (8), TP denotes a true positive, meaning that the brake of the vehicle in front is actually on and the algorithm predicts correctly. In Eq (7), FN represents a false negative, meaning that the brake of the vehicle in front is actually on but the algorithm incorrectly predicts otherwise. In Eq (8), FP represents a false positive, meaning that the brake of the vehicle in front is off but the algorithm erroneously predicts that it is on. As mentioned earlier, CAS should send a warning to the driver or automatically apply the brakes without driver input in an expected collision situation [2]. If the brake-lights detection algorithm is used in a CAS that only alerts the driver in an expected collision situation, sending an erroneous warning when the brakes of the vehicle in front are not on does not pose a big problem, because the driver can acknowledge the warning and reconsider the situation by self-checking. Hence, the algorithm should focus on finding as many situations as possible in which the brakes of the car in front are on. The recall score is suitable for this case because it evaluates how many of the situations in which the vehicle in front actually applies the brakes are correctly predicted by the algorithm. However, if the detection algorithm is used in a CAS that automatically applies the brakes without driver input in an expected collision situation, a false positive when the brakes of the vehicle in front are not on can lead to an accident as dangerous as a collision [50]. Therefore, it is necessary to accurately predict the brake condition of the vehicle in front while carefully considering the degree of false positives. The precision score is suitable in this case because it evaluates how many of the situations in which the algorithm predicts that the brakes of the car in front are applied correspond to the brakes actually being applied. Therefore, we considered both the recall score and the precision score to evaluate the performance of the brake-lights detection algorithm.

We set the optimal threshold value τ to 8 as a result of experiments with several training samples from each category. Moreover, we also obtained evaluation results by adjusting the threshold value, considering the trade-off between the recall score and the precision score. The trade-off between the recall score and the precision score when the threshold value is changed is shown in Fig 14(a). This makes it possible to improve the flexibility of the system by adjusting the threshold value according to the purpose of the CAS, i.e., sending a warning to the driver or automatically applying the brakes. Additionally, Fig 14(b) presents the precision-recall curve showing the trade-off between the two scores. The black line denotes the reference line, and the red curve shows the recall score and the corresponding precision score, with each threshold value annotated on the curve. The precision score decreased slowly with increasing recall score (see the red curve in Fig 14(b)); moreover, as the threshold value passed 8 and 6, the decrease became steeper. That is, the point where the threshold value is 8 is the most concave part of the curve, meaning that the overall balance of the precision and recall scores was best at this value. Therefore, the selected threshold value of 8 is appropriate considering both the precision score and the recall score.
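The threshold sweep behind Fig 14 can be reproduced in a few lines; a sketch is given below. The decision values, labels, and the swept threshold list are placeholders for illustration, not the paper's evaluation script.

```python
import numpy as np

def pr_sweep(decision_values, labels, thresholds=range(2, 20, 2)):
    """Sweep the decision threshold tau and report (tau, recall, precision).

    decision_values: d = mu_S + mu_V per test frame (Eq (5)).
    labels: 1 for a ground-truth brake-on frame, 0 for brake-off.
    """
    decision_values = np.asarray(decision_values, dtype=float)
    labels = np.asarray(labels, dtype=int)
    points = []
    for tau in thresholds:
        pred = (decision_values >= tau).astype(int)       # Eq (6)
        tp = np.sum((pred == 1) & (labels == 1))
        fp = np.sum((pred == 1) & (labels == 0))
        fn = np.sum((pred == 0) & (labels == 1))
        recall = tp / (tp + fn) if tp + fn else 0.0       # Eq (7)
        precision = tp / (tp + fp) if tp + fp else 0.0    # Eq (8)
        points.append((tau, recall, precision))
    return points
```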

Fig 14. Relationship between recall score and precision score.

(a) Recall score and precision score were evaluated while changing the threshold value. (b) Precision-recall curve (PR curve).

https://doi.org/10.1371/journal.pone.0289700.g014

Examples of brake-lights detection are shown in Fig 15. The images on the left of Fig 15 are cases of successful prediction of the brake on condition of the vehicle in front, with the rear-lights region marked with a red rectangle. Conversely, the images on the right of Fig 15 are cases of successful prediction of the brake off condition, with the rear-lights region marked with a blue rectangle.

Fig 15. Brake-lights detection result.

(a) Brake on condition in a day environment. (b) Brake off condition in a day environment. (c) Brake on condition in a night environment. (d) Brake off condition in a night environment. (e) Brake on condition in a cloudy environment. (f) Brake off condition in a cloudy environment. (g) Brake on condition in a rainy environment. (h) Brake off condition in a rainy environment.

https://doi.org/10.1371/journal.pone.0289700.g015

Table 3 shows the performance of our proposed brake-lights detection algorithm and the comparison with other methods. In Table 3, our algorithm showed good performance, with a recall score of about 88.6% and a precision score of about 88.9% on the test data, and the precision and recall scores were well balanced. The second row of Table 3 is the result of classifying the brake on and off conditions based on the mean brightness intensity of the grayscale image, as mentioned in the previous section (indicated as Our+BI). Both the total recall score and the precision score were very poor, and the imbalance and irregularity of the two scores were more severe than ours. This indicates that it is difficult to distinguish between the brake on and off conditions with mean brightness intensity because of noise caused by ambient light and weather conditions, and it is even more difficult to apply it to various environments. In the case of the FR method, the performance was slightly better than Our+BI, but the imbalance between recall and precision was even greater. This means that the FRST and radial symmetry blobs used in the FR method are difficult to apply globally to various environments, and this low environmental applicability deteriorates the total performance of FR. The HS and YO methods perform brake-lights detection in a learning-based manner using an SVM and YOLOv3-tiny, respectively. Both methods showed high performance, and the YO method in particular scored slightly higher than ours. However, it should be noted that both methods require model training. Our method does not use any trained model, yet it outperforms the HS method and is highly comparable to the YO method. Considering the performance gap between Our+BI/FR and HS/YO, it can be seen that our method improves performance by successfully mitigating the noise problem and the applicability problem faced by previous feature-based methods.

Table 3. Brake-lights detection algorithm evaluation result.

https://doi.org/10.1371/journal.pone.0289700.t003

Table 4 compares the processing speed of our algorithm and the YO method, which uses a deep learning-based approach. The YOLOv3-tiny model used in the YO method is known as a lightweight and fast network, but it could only process about 6 frames per second. In contrast, our algorithm was able to process about 37 frames per second while showing performance similar to the YO method. This means that our algorithm is sufficient for real-time CAS and is efficient in terms of computational cost. Table 5 shows the ratio of the average time required for each process to the total time required. In Table 5, LD, RD, and BD refer to the ROI setting algorithm using lane detection, rear-lights region detection, and brake-lights detection, respectively. The ROI setting process using lane detection occupied nearly all of the total processing time. Therefore, if a more efficient lane detection algorithm than the one used in this paper is adopted, the overall computational cost will be much lower. The evaluation was performed with OpenCV 4.5.2 using the Python interface, on MS Windows 10 64-bit, an AMD Ryzen 7 5800X 8-core CPU @ 3.80 GHz, and 32 GB RAM.

Conclusion

This paper proposes a vision-based approach to implement a CAS specialized for rear-end collision situations. First, we set an ROI based on lane detection to consider only the information of the vehicle in front in the same lane, making the system more suitable for rear-end collision situations. Next, we presented a rear-lights region detection algorithm that can extract the rear-lights region of the vehicle in various environments, to solve the low applicability and noise sensitivity problems encountered by existing feature-based approaches. Finally, we presented a brake-lights detection algorithm based on color features extracted from the HSV color space to determine whether the preceding vehicle is braking in various environments. Experimental results indicate that both the rear-lights region detection algorithm and the brake-lights detection algorithm show high performance in various environments. However, our proposed methods conduct rear-lights region detection and brake-lights detection in a frame-by-frame manner, whereas the preceding vehicle's rear-lights region or braking state may remain consistent across adjacent frames. Therefore, our future work is to use a tracking algorithm (e.g., a Kalman filter) to further boost the performance of our method under these consistency assumptions. Additional considerations may also be required to improve performance in the nighttime environment, where performance is relatively low compared to other environments. As a solution, we can either use shape or edge features that are less sensitive to light noise or design a stricter verification algorithm for filtering noise blobs. Finally, we will apply a more efficient and faster lane detection algorithm to improve the processing speed of our method.

References

  1. Muhammad K, Ullah A, Lloret J, Ser JD, de Albuquerque VHC. Deep Learning for Safe Autonomous Driving: Current Challenges and Future Directions. IEEE Transactions on Intelligent Transportation Systems. 2021;22(7):4316–4336.
  2. Mukhtar A, Xia L, Tang TB. Vehicle Detection Techniques for Collision Avoidance Systems: A Review. IEEE Transactions on Intelligent Transportation Systems. 2015;16(5):2318–2338.
  3. Kusano KD, Gabler HC. Safety Benefits of Forward Collision Warning, Brake Assist, and Autonomous Braking Systems in Rear-End Collisions. IEEE Transactions on Intelligent Transportation Systems. 2012;13(4):1546–1555.
  4. Manjunath A, Liu Y, Henriques B, Engstle A. Radar Based Object Detection and Tracking for Autonomous Driving. In: 2018 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM); 2018. p. 1–4.
  5. Agarwal V, Murali NV, Chandramouli C. A Cost-Effective Ultrasonic Sensor-Based Driver-Assistance System for Congested Traffic Conditions. IEEE Transactions on Intelligent Transportation Systems. 2009;10(3):486–498.
  6. Stevenson R. A driver's sixth sense. IEEE Spectrum. 2011;48(10):50–55.
  7. Adla R, Al-Holou N, Murad M, Bazzi YA. Automotive collision avoidance methodologies: Sensor-based and ITS-based. In: 2013 ACS International Conference on Computer Systems and Applications (AICCSA); 2013. p. 1–8.
  8. Lin HY, Dai JM, Wu LT, Chen LQ. A Vision-Based Driver Assistance System with Forward Collision and Overtaking Detection. Sensors. 2020;20(18). pmid:32916970
  9. Ming Q, Jo KH. Vehicle detection using tail light segmentation. In: Proceedings of 2011 6th International Forum on Strategic Technology. vol. 2; 2011. p. 729–732.
  10. Wang X, Tang J, Niu J, Zhao X. Vision-based two-step brake detection method for vehicle collision avoidance. Neurocomputing. 2016;173:450–461.
  11. Zhong G, Tsai YH, Chen YT, Mei X, Prokhorov D, James M, et al. Learning to tell brake lights with convolutional features. In: 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC); 2016. p. 1558–1563.
  12. Komodakis N, Gidaris S. Unsupervised representation learning by predicting image rotations. In: International Conference on Learning Representations (ICLR); 2018.
  13. Chen HT, Wu YC, Hsu CC. Daytime Preceding Vehicle Brake Light Detection Using Monocular Vision. IEEE Sensors Journal. 2016;16(1):120–131.
  14. Chen DY, Lin YH, Peng YJ. Nighttime Brake-Light Detection by Nakagami Imaging. IEEE Transactions on Intelligent Transportation Systems. 2012;13(4):1627–1637.
  15. Almagambetov A, Casares M, Velipasalar S. Autonomous tracking of vehicle rear lights and detection of brakes and turn signals. In: 2012 IEEE Symposium on Computational Intelligence for Security and Defence Applications; 2012. p. 1–7.
  16. Chen DY, Peng YJ. Frequency-Tuned Taillight-Based Nighttime Vehicle Braking Warning System. IEEE Sensors Journal. 2012;12(11):3285–3292.
  17. Thammakaroon P, Tangamchit P. Predictive brake warning at night using taillight characteristic. In: 2009 IEEE International Symposium on Industrial Electronics; 2009. p. 217–221.
  18. Satzoda RK, Trivedi MM. Looking at Vehicles in the Night: Detection and Dynamics of Rear Lights. IEEE Transactions on Intelligent Transportation Systems. 2019;20(12):4297–4307.
  19. Almagambetov A, Velipasalar S, Casares M. Robust and Computationally Lightweight Autonomous Tracking of Vehicle Taillights and Signal Detection by Embedded Smart Cameras. IEEE Transactions on Industrial Electronics. 2015;62(6):3732–3741.
  20. Chen YL, Chen YH, Chen CJ, Wu BF. Nighttime Vehicle Detection for Driver Assistance and Autonomous Vehicles. In: 18th International Conference on Pattern Recognition (ICPR'06). vol. 1; 2006. p. 687–690.
  21. 21. O’Malley R, Glavin M, Jones E. Vehicle Detection at Night Based on TailLight Detection. 2008.
  22. 22. Skodras E, Siogkas G, Dermatas E, Fakotakis N. Rear lights vehicle detection for collision avoidance. In: 2012 19th International Conference on Systems, Signals and Image Processing (IWSSIP); 2012. p. 134–137.
  23. 23. O’Malley R, Jones E, Glavin M. Rear-Lamp Vehicle Detection and Tracking in Low-Exposure Color Video for Night Conditions. IEEE Transactions on Intelligent Transportation Systems. 2010;11(2):453–462.
  24. 24. Guo Y, Xu Q, Su Y, Jiang S. Visibility detection based on the recognition of the preceding vehicle’s taillight signals. IEEE Access. 2020;8:206105–206117.
  25. 25. Lee KH, Tagawa T, Pan JEM, Gaidon A, Douillard B. An attention-based recurrent convolutional network for vehicle taillight recognition. In: 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE; 2019. p. 2365–2370.
  26. 26. Jeon HJ, Nguyen VD, Duong TT, Jeon JW. A Deep Learning Framework for Robust and Real-Time Taillight Detection Under Various Road Conditions. IEEE Transactions on Intelligent Transportation Systems. 2022;23(11):20061–20072.
  27. 27. Zhang X, Story B, Rajan D. Night time vehicle detection and tracking by fusing vehicle parts from multiple cameras. IEEE Transactions on Intelligent Transportation Systems. 2021;23(7):8136–8156.
  28. 28. Parvin S, Rozario LJ, Islam ME, et al. Vision-based on-road nighttime vehicle detection and tracking using taillight and headlight features. Journal of Computer and Communications. 2021;9(03):29.
  29. 29. Cui Z, Yang SW, Wang C, Tsai HM. On addressing driving inattentiveness: Robust rear light status classification using Hierarchical Matching Pursuit. In: 17th International IEEE Conference on Intelligent Transportation Systems (ITSC); 2014. p. 2243–2244.
  30. 30. Pham TA, Yoo M. Nighttime vehicle detection and tracking with occlusion handling by pairing headlights and taillights. Applied Sciences. 2020;10(11):3986.
  31. 31. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2014. p. 580–587.
  32. 32. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015. p. 3431–3440.
  33. 33. Kim TY, Cho SB. Predicting residential energy consumption using CNN-LSTM neural networks. Energy. 2019;182:72–81.
  34. 34. Wang J, Yu LC, Lai KR, Zhang X. Dimensional sentiment analysis using a regional CNN-LSTM model. In: Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers); 2016. p. 225–230.
  35. 35. Nava D, Panzani G, Savaresi SM. A collision warning oriented brake lights detection and classification algorithm based on a mono camera sensor. In: 2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE; 2019. p. 319–324.
  36. 36. Wang Z, Huo W, Yu P, Qi L, Geng S, Cao N. Performance evaluation of region-based convolutional neural networks toward improved vehicle taillight detection. Applied Sciences. 2019;9(18):3753.
  37. 37. Li Q, Garg S, Nie J, Li X, Liu RW, Cao Z, et al. A highly efficient vehicle taillight detection approach based on deep learning. IEEE transactions on intelligent transportation systems. 2020;22(7):4716–4726.
  38. 38. Pirhonen J, Ojala R, Kivekäs K, Vepsäläinen J, Tammi K. Brake light detection algorithm for predictive braking. Applied Sciences. 2022;12(6):2804.
  39. 39. Lacatan LL, Santos RS, Pinkihan JW, Vicente RY, Tamargo RS. Brake-Vision: A Machine Vision-Based Inference Approach of Vehicle Braking Detection for Collision Warning Oriented System. In: 2021 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). IEEE; 2021. p. 485–488.
  40. 40. Du L, Chen W, Li C, Tong B, Zhang D, Liu B. A Vision-based Taillight Intention Detection Method for Intelligent and Connected Vehicles. In: 2022 China Automation Congress (CAC). IEEE; 2022. p. 3695–3698.
  41. 41. Tong B, Chen W, Li C, Du L, Xiao Z, Zhang D. An Improved Approach for Real-Time Taillight Intention Detection by Intelligent Vehicles. Machines. 2022;10(8):626.
  42. 42. Rampavan M, Ijjina EP. Genetic brake-net: Deep learning based brake light detection for collision avoidance using genetic algorithm. Knowledge-Based Systems. 2023;264:110338.
  43. 43. Redmon J, Farhadi A. Yolov3: An incremental improvement. arXiv preprint arXiv:180402767. 2018;.
  44. 44. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 779–788.
  45. 45. He K, Gkioxari G, Dollár P, Girshick R. Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision; 2017. p. 2961–2969.
  46. 46. Andrade DC, Bueno F, Franco FR, Silva RA, Neme JHZ, Margraf E, et al. A Novel Strategy for Road Lane Detection and Tracking Based on a Vehicle’s Forward Monocular Camera. IEEE Transactions on Intelligent Transportation Systems. 2019;20(4):1497–1507.
  47. 47. Otsu N. A Threshold Selection Method from Gray-Level Histograms. IEEE Transactions on Systems, Man, and Cybernetics. 1979;9(1):62–66.
  48. 48. Chang Y, Jung C, Ke P, Song H, Hwang J. Automatic Contrast-Limited Adaptive Histogram Equalization With Dual Gamma Correction. IEEE Access. 2018;6:11782–11792.
  49. 49. Sain SR. The nature of statistical learning theory; 1996.
  50. 50. Sun Z, Bebis G, Miller R. On-road vehicle detection: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28(5):694–711. pmid:16640257