The intelligent engine start-stop trigger system based on the actual road running status

  • Xinhuan Zhang ,

    Contributed equally to this work with: Xinhuan Zhang, Hongjie Liu

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Writing – original draft, Writing – review & editing

    zxh@zjnu.cn (XZ); hj_popel@stu.xjtu.edu (HL)

    ‡ CM also contributed equally to this work. XZ and HL are joint senior authors on this work.

    Affiliation The Institute of Road and Traffic Engineering, Zhejiang Normal University, Jinhua, Zhejiang Province, China

  • Hongjie Liu ,

    Contributed equally to this work with: Xinhuan Zhang, Hongjie Liu

    Roles Conceptualization, Data curation

    zxh@zjnu.cn (XZ); hj_popel@stu.xjtu.edu (HL)

    ‡ CM also contributed equally to this work. XZ and HL are joint senior authors on this work.

    Affiliation School of Electronic and Information Engineering, Xi’an Jiao Tong University, Xi’an, Shanxi Province, China

  • Chengyuan Mao ,

    Roles Investigation, Supervision

    ‡ CM also contributed equally to this work. XZ and HL are joint senior authors on this work.

    Affiliation The Institute of Road and Traffic Engineering, Zhejiang Normal University, Jinhua, Zhejiang Province, China

  • Junqing Shi,

    Roles Data curation, Methodology

    Affiliation The Institute of Road and Traffic Engineering, Zhejiang Normal University, Jinhua, Zhejiang Province, China

  • Guolian Meng,

    Roles Data curation, Formal analysis

    Affiliation The Institute of Road and Traffic Engineering, Zhejiang Normal University, Jinhua, Zhejiang Province, China

  • Jinhong Wu,

    Roles Formal analysis, Investigation, Methodology, Project administration

    Affiliation The Institute of Road and Traffic Engineering, Zhejiang Normal University, Jinhua, Zhejiang Province, China

  • Yuran Pan

    Roles Data curation, Investigation

    Affiliation The Institute of Road and Traffic Engineering, Zhejiang Normal University, Jinhua, Zhejiang Province, China

Abstract

With the rapid development of urbanization and the popularization of the vehicle, frequent traffic jams result in idling fuel waste, environmental pollution, and other issues. To alleviate these problems, engine start-stop technology has been widely applied to different types of vehicles in recent years. However, current start-stop trigger technology has many deficiencies, such as mistaken triggering and frequent engine start-stop, which greatly degrade the driving experience and lead many users to deactivate the system. To solve this problem, the intelligent engine start-stop trigger (IEST) system based on the actual road running status was established by building an image recognition model and a digital traffic analysis model. A system test shows that IEST can avoid frequent engine starting and stopping. The results show that IEST can effectively improve the driving experience and reduce engine fuel consumption, and it promotes conventional engine start-stop technology.

I. Introduction

With the rapid growth of automobile ownership, traffic congestion occurs frequently, which reduces the capacity of roads and intersections and increases the idling time of automobiles; this not only increases fuel consumption but also aggravates environmental pollution. Therefore, technical methods that allow cars to pass through roads and intersections efficiently and safely have always been a concern.

The working principle of an engine’s start-stop function is: when the preset stop condition is met, the engine automatically stops while idling; when an intent to start driving or another engine demand is detected, the engine quickly restarts and returns to normal working status [1, 2]. The engine start-stop function can effectively reduce the fuel consumption of a vehicle in idle status and reduce harmful gas emissions [3–5]. Therefore, research on engine start-stop control systems is of great significance.

The working principle of traditional engine start-stop technology is that the engine stops once the brake pedal has been depressed for 2 seconds and runs again when the brake pedal is released, which helps save energy. However, this trigger technology has two important disadvantages:

  1. When a vehicle stops at a red light for less than 5 seconds, the fuel consumed by activating the engine start-stop technology is more than if the engine had simply idled through the red light.
  2. It only considers the vehicle status, stopping or running, but neglects the road status, especially road congestion, which leads to frequent start-stop activation, further affecting both vehicle stability and driving comfort.

The main reason for the above disadvantages is that the engine start-stop trigger is not intelligent. To solve this problem, this paper combines the traditional engine start-stop system with intelligent identification of the road’s actual state by constructing an image recognition module and a digital traffic analysis module. This paper proposes a new trigger mode for engine start-stop systems by judging whether the vehicle is in congested traffic or stopping at a red light. These two modules control the engine’s stopping time, thereby avoiding unnecessary engine start-stop. System testing showed that IEST can effectively improve the driving experience, reduce engine fuel consumption, and help promote traditional engine start-stop technology.

II. Literature review

The general idea of an engine start-stop system is to save energy and allow the driver to concentrate his or her attention on driving. In recent years, engine start-stop technology has been widely used in different types of vehicles [6–8].

In recent years, researchers taking advantage of Artificial Intelligence (AI) techniques have proposed a lightweight, real-time traffic light detector for autonomous vehicle platforms [9]. The model consists of a heuristic candidate region selection module to identify all possible traffic lights and a lightweight Convolutional Neural Network (CNN) classifier to classify the candidates. As the number of cars increases, coordinating the traffic light controllers of multiple intersections becomes a key challenge for multi-agent reinforcement learning (MARL). Most existing MARL studies are based on traditional Q-learning, whose unstable environment leads to poor learning in complicated and dynamic traffic scenarios. Wu et al. propose a novel multi-agent recurrent deep deterministic policy gradient (MARDDPG) algorithm based on the Deterministic Policy Gradient algorithm for traffic light control (TLC) in vehicular networks [10]. Liang et al. propose a deep reinforcement learning model to control the traffic light cycle [11]. The model quantifies the complex traffic scenario as states by collecting traffic data and dividing the whole intersection into small grids. The duration changes of a traffic light are the actions, which are modeled as a high-dimensional Markov decision process. The reward is the cumulative waiting time difference between two cycles.

Many researchers have already investigated engine start-stop systems and the identification of traffic signals. For example, David Ibarra researched the noise emissions of this system [12], and A. de la Escalera researched traffic sign recognition and analysis for vehicles [13]. Hirabayashi proposed a method to recognize the state of traffic lights in images [14]. Lucas C. proposed integrating the power of deep-learning-based detection with maps previously used to recognize relevant traffic lights along predefined routes [15]. Jiankang Deng proposed a deep learning method to recognize faces [16]. Hao Yang proposed efficient asymmetric one-directional 3D convolutions to approximate traditional 3D convolution and achieve high performance in action recognition [17]. Cheng proposed a two-layer Convolutional Neural Network (CNN) to learn high-level features that can effectively recognize faces [18]. Haike developed an algorithm that can recognize traffic lights and dangerous driving events [19].

Due to digitization, a huge volume of data is being generated across several sectors, such as intelligent roadside equipment and IoT devices. Machine learning algorithms are used to uncover patterns among the attributes of these data. Not all the attributes in the generated datasets are important for training machine learning algorithms: some attributes may be irrelevant, and some may not affect the outcome of the prediction. Thippa Reddy Gadekallu et al. examine two prominent dimensionality reduction techniques, Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), which help ignore or remove these irrelevant or less important attributes and thus reduce the burden on machine learning algorithms [20]. Praveen Kumar Reddy Maddikunta et al. propose a machine-learning-based model implementing a random forest regression algorithm to predict the battery life of IoT devices; several pre-processing techniques, such as transformation and dimensionality reduction, are used in this model [21].

Because braking, starting, and stopping involve vehicle safety, the sensor-generated data of Connected and Automated Vehicles (CAVs) are vulnerable to anomalies caused by faults, errors, and/or cyberattacks, which may cause accidents resulting in fatal casualties. To help avoid such situations by detecting anomalies in time, Abdul Rehman Javed et al. propose an anomaly detection method that combines a multi-stage attention mechanism with a Long Short-Term Memory (LSTM)-based Convolutional Neural Network (CNN); the MSALSTM-CNN method effectively improves the anomaly detection rate for both low- and high-magnitude anomalous instances in the dataset [22]. Abdul Rehman Javed et al. also propose a novel approach named CANintelliIDS for vehicle intrusion attack detection on the CAN bus. CANintelliIDS is based on a combination of a CNN and an attention-based GRU model to detect single intrusion attacks as well as mixed intrusion attacks on a CAN bus [23].

At present, engine start-stop control systems judge the stop conditions too simply. Generally, the engine is stopped whenever it would otherwise idle, without considering the road conditions and environment. This easily causes frequent start-stop when start-stop is not needed, and no start-stop when it is needed, thereby increasing fuel consumption. For example, when the signal light is about to change from red to green and the wait at the red light is shorter than a certain threshold, it is better for fuel saving and pollutant reduction not to use the start-stop function (test data show that the fuel consumed by one engine start is equivalent to about 5 seconds of idling for a gasoline engine and about 20 seconds of idling for a diesel engine [24]). The frequent starting and stopping of the engine on lightly congested road sections, and the loss of air conditioning after engine shutdown in hot weather, give drivers and users a bad experience; some users even turn the engine start-stop function off permanently. Furthermore, because braking, starting, and stopping involve vehicle safety, high-precision dedicated sensors must be set up for monitoring and protection, and a variety of strengthened components must be used, so the development cycle is long, the production cost is high, and the usability is poor, which is not suitable for general use. Therefore, it is very necessary to design and develop a control system and control method suitable for universal application based on the mature starter, battery, and other components of existing engine start-stop systems.

Given that the hardware of engine start-stop systems has already reached a certain level of maturity, but defects such as start delay, jitter, and inaccurate shutdown judgment remain, focusing development on a control system and control method suitable for popular application can achieve twice the result with half the effort. At the same time, to address the problem that current engine start-stop control systems cannot sense the actual running environment of the road, an intelligent engine start-stop trigger system based on the actual road running state is proposed on top of the existing engine start-stop control logic, making engine start-stop control more accurate, so as to improve the fuel-saving effect of the automatic start-stop function and the user driving experience. It can be predicted that engine start-stop control systems and control logic with remarkable energy-saving and emission-reduction effects and excellent user experience will become a current and future research and development hotspot.

III. Methodology

Fig 1 represents the working principle of IEST (the intelligent engine start-stop trigger system) as a flow chart. When encountering a traffic light or a traffic jam, an IEST-equipped vehicle first checks the state of all sensors and then determines whether the vehicle engine meets the traditional shutdown conditions. The traditional shutdown conditions are: 1) the wheel speed sensors of the antilock system read zero; 2) the transmission is not in reverse gear; 3) the electronic battery sensors detect that the battery has enough energy for the next ignition [25]. With these conditions satisfied, IEST enables the image recognition module, and the camera starts to take pictures of the road ahead to obtain traffic light information. If the red light’s remaining time is more than 5 seconds, the engine stops. If the image recognition module does not detect a red light in the image, the digital traffic analysis module starts and judges whether the road is congested. By judging the road condition and the driver’s intention, IEST issues commands and instructions to control the engine.
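The decision flow above can be summarized in a short sketch. Only the decision order, the three traditional shutdown conditions, and the 5-second red-light threshold come from the text; the function names, the sensor interface, and the reading that a congested road suppresses shutdown (one consistent interpretation of Sections V and VI) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the IEST trigger logic described above.
# The arguments and helper names are hypothetical stand-ins for the sensor
# and module interfaces; only the decision order and the 5-second threshold
# come from the text.

RED_LIGHT_MIN_REMAINING_S = 5  # stop the engine only if the red phase lasts longer


def traditional_shutdown_ok(wheel_speed, gear, battery_charge_ok):
    """The three traditional shutdown conditions cited in the text."""
    return wheel_speed == 0 and gear != "R" and battery_charge_ok


def iest_should_stop_engine(wheel_speed, gear, battery_charge_ok,
                            red_light_remaining_s, road_congested):
    """Return True if IEST should shut the engine down at this moment.

    red_light_remaining_s: seconds of red phase left, or None if the image
    recognition module found no red light.
    road_congested: result of the digital traffic analysis module.
    """
    if not traditional_shutdown_ok(wheel_speed, gear, battery_charge_ok):
        return False
    if red_light_remaining_s is not None:
        # Image recognition module found a red light.
        return red_light_remaining_s > RED_LIGHT_MIN_REMAINING_S
    # No red light detected: fall back to the digital traffic analysis module.
    # On a congested road the engine is kept running to avoid frequent restarts.
    return not road_congested


if __name__ == "__main__":
    print(iest_should_stop_engine(0, "D", True, 12, False))   # True: long red light
    print(iest_should_stop_engine(0, "D", True, 3, False))    # False: red light too short
    print(iest_should_stop_engine(0, "D", True, None, True))  # False: congested road
```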

After the traditional shutdown conditions are satisfied, IEST triggers the engine start-stop system with greater accuracy by identifying the road condition, which not only maximizes fuel economy and reduces CO2 emissions but also avoids the frequent vehicle start-stop common on congested roads, in turn improving drivers' willingness to use the engine start-stop system.

IV. The recognition of traffic lights

A large amount of image-based data has recently become available in many disciplines, including medicine [10], agricultural production [26], geology [27], ecology [28], and also in the traffic domain. In this project, image processing technology was applied to identify the actual road status.

The traffic signal identification process is shown in Fig 2. The details of how the system decodes and recognizes the image are presented in the following section.

Fig 2. The flow chart of image recognition and image process.

https://doi.org/10.1371/journal.pone.0253201.g002

A. Extracting the characteristic region

1) The extraction of the target region in the RGB space.

The RGB model has been widely used in the image recognition area [11, 29–31] but is rarely used for characteristic region division because the RGB components are easily affected by light. However, the differences between the three color components remain within a certain range and are hardly affected by light, so these differences can be applied to segment the characteristic region in RGB space in real time, without color-space conversion.

In this module, the key part of the image that needs to be identified is the red area, and the R−G and R−B differences, denoted ΔRg and ΔRb, can be used as the basis for extracting the characteristic region. The distributions of ΔRg and ΔRb in the red area and in the other color areas of the traffic light are shown in Figs 3 and 4.

Comparing Fig 3 with Fig 4, the values of ΔRg and ΔRb in red areas are larger than 60, while in the other color areas they are smaller than 20, so the differences ΔRg and ΔRb can be used as identification parameters to recognize the traffic lights. The relation is expressed by Eq 1: (1)

Rij is the R value of the pixel in row i and column j in the original image. Gij is the G value of the pixel in row i and column j in the original image. Bij is the B value of the pixel in row i and column j in the original image.

P is the binary image. Pij is the grey value of the binary image.
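A minimal sketch of this binarization step is given below, assuming 8-bit RGB input. The threshold of 40 is an assumed value placed between the reported bounds (above 60 in red areas, below 20 elsewhere) and is not prescribed by the paper.

```python
import numpy as np

def extract_red_region(rgb, thresh=40):
    """Binarize an H x W x 3 uint8 image by the R-G and R-B differences.

    The paper reports that both differences exceed 60 inside the red lamp
    area and stay below 20 elsewhere; the threshold of 40 used here is an
    assumed value placed between those two bounds.
    """
    rgb = rgb.astype(np.int16)               # avoid uint8 wrap-around
    delta_rg = rgb[..., 0] - rgb[..., 1]     # R - G
    delta_rb = rgb[..., 0] - rgb[..., 2]     # R - B
    mask = (delta_rg > thresh) & (delta_rb > thresh)
    return mask.astype(np.uint8) * 255       # binary image P

# Example: a 2x2 image with two red-ish pixels.
img = np.array([[[220, 30, 40], [90, 90, 90]],
                [[10, 200, 30], [250, 60, 50]]], dtype=np.uint8)
print(extract_red_region(img))
```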

The images after the color recognition process are shown in Fig 5.

Fig 5. The comparison images before and after traffic lights color recognition.

https://doi.org/10.1371/journal.pone.0253201.g005

The pictures in Fig 5 reveal that the RGB model can be applied to extract the common target region. However, traffic lights with strong light intensity cause a halo effect, as shown in Fig 5(C). In this picture, the outline of the light is red while the lamp itself is white, which produces a characteristic region with a jagged white edge and a black inner part, as in Fig 5(D). The survey found that this phenomenon is very common, and the halo effect increases the difficulty of extracting the target region. Therefore, to solve this problem, a method that uses the halo effect itself to extract the target region was studied in this paper.

2) The extraction of the target region using the halo effect.

The halo effect causes two different colors (the lamp color and the tube periphery color) to mix together and makes the target region fuzzy: the lamp is the inner layer of the target region, in white, and the tube periphery is the outer layer, in red. In this research, extracting the lamp color and the digits is the most important task of this module, so the inner layer is the part that must be extracted. Based on the empirical pixel-difference threshold method, the outer layer can be extracted in the first step; the next step is to extract the inner layer.

By analyzing the color recognition results, this module uses the character of the halo surrounding the inner layer: it first extracts the edge of the outer layer and then negates the inner layer so as to extract the target region (a compact sketch of an equivalent operation is given after the steps). The algorithm is expressed as follows:

  1. Step 1: The whole image is divided into n image matrices by the traffic light color recognition, and the size of the k-th image matrix is Mk × Nk; set k = 1.
  2. Step 2: Scan column j in the k-th matrix, named P; if P(i−1)j < 0 and Pij > 0, then x1 = i, y1 = j, otherwise repeat step 2; when i = Mk, set j = j + 1; if j = Nk, go to step 5.
  3. Step 3: Set i = i + 1; if P(i−1)j > 0 and Pij < 0, then x2 = i, y2 = j, otherwise repeat step 3 until i = Mk, and then set x2 = Mk, y2 = j.
  4. Step 4: Set Pij = −(Pij − 1) for i ∈ (x1, x2), j ∈ (y1, y2), with i, j integers; set j = j + 1 and return to step 2.
  5. Step 5: The matrix P is divided again as in step 1; set k = k + 1 and return to step 2; if k = n, the algorithm ends.
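As referenced above, the inner-layer recovery can be sketched compactly with a standard hole-filling operation. This is not the paper's column-scanning algorithm; it is an assumed alternative that, provided the outer halo forms a closed ring in the binary mask, produces the described result.

```python
import numpy as np
from scipy import ndimage

def recover_inner_layer(ring_mask):
    """Given the binary outer-layer (halo ring) mask, return the inner lamp region.

    Not the paper's step-by-step column scan: scipy's hole filling is used as a
    stand-in. Pixels enclosed by the ring but not on it are taken as the lamp
    interior to be turned white.
    """
    ring = ring_mask > 0
    filled = ndimage.binary_fill_holes(ring)   # ring plus its interior
    inner = filled & ~ring                     # interior only
    return inner.astype(np.uint8) * 255

# Example: a 5x5 hollow square ring inside a 7x7 image.
ring = np.zeros((7, 7), dtype=np.uint8)
ring[1:6, 1:6] = 1
ring[2:5, 2:5] = 0
print(recover_inner_layer(ring))
```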

Fig 6 shows the image with the halo effect (Fig 5(D)) after processing by the above algorithm. The result shows that the image with the halo effect has been successfully transformed into a normal image (as shown in Fig 5(B)).

Fig 6. The extraction of the target region with the use of the halo effect.

https://doi.org/10.1371/journal.pone.0253201.g006

B. The noise-suppressed processing based on the area threshold

Because there are some noise points which are similar to the target region, the noise points should be eliminated from the target region to make pattern matching easier. The noise points are easily removed because the noise-connected area is generally smaller than the target region [32]. As shown in Fig 7, most of the noise points have been removed and the large noise area (shown in the lower left area in Fig 7(C)) can also be removed during pattern recognition.
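A small sketch of this area-threshold filtering is given below. The connected-component labelling via scipy and the minimum area of 30 pixels are assumptions; the paper only states that noise regions are generally smaller than the target region.

```python
import numpy as np
from scipy import ndimage

def remove_small_components(binary, min_area=30):
    """Drop connected components whose pixel count is below min_area.

    min_area is an assumed value; the paper only notes that noise-connected
    areas are generally smaller than the target region.
    """
    labels, n = ndimage.label(binary > 0)
    if n == 0:
        return np.zeros_like(binary)
    sizes = np.bincount(labels.ravel())   # pixel count per label (0 = background)
    keep = sizes >= min_area
    keep[0] = False                       # never keep the background
    return keep[labels].astype(np.uint8) * 255

# Example: a 4-pixel blob is kept, a single-pixel speck is removed.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:3, 1:3] = 255
img[4, 4] = 255
print(remove_small_components(img, min_area=2))
```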

C. The characteristic region segmentation

Before conducting the pattern recognition, the target region must be divided with the projection method to improve the matching speed. The projection segmentation process consists of the following steps:

  1. Step 1: Complete the vertical projection of the image.
  2. Step 2: Record the number of white points in each column.
  3. Step 3: Retain the columns in which the number of white points exceeds 3 and remove the columns in which it is less than 3.

Given that the size of the picture is M × N, the projection method is expressed by the following formula: (2)

The value J can be divided into k sets, and the length of the k-th set, denoted C, is nk. The elements Jkj′ in Jk should obey the following formula: (3)
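A compact sketch of this projection-based segmentation follows. The numpy-based implementation is an assumption; the projection, the threshold of 3 white points per column, and the splitting into contiguous column runs follow the steps above.

```python
import numpy as np

def split_by_vertical_projection(binary, min_white=3):
    """Split a binary image into digit sub-images by vertical projection.

    Columns with more than min_white white pixels are retained (steps 1-3),
    and contiguous runs of retained columns form the k segments described
    in the text.
    """
    white_per_col = (binary > 0).sum(axis=0)               # vertical projection
    retained = np.flatnonzero(white_per_col > min_white)   # columns to keep
    if retained.size == 0:
        return []
    # Break the retained column indices into contiguous runs.
    breaks = np.where(np.diff(retained) > 1)[0] + 1
    runs = np.split(retained, breaks)
    return [binary[:, run[0]:run[-1] + 1] for run in runs]
```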

Fig 8 shows the image divided by the projection method.

Fig 8. The image segmentation based on the projection method.

https://doi.org/10.1371/journal.pone.0253201.g008

D. The pattern recognition based on characteristic vector and neural network

1) The extraction and selection of characteristic.

Different objects appear on images and these objects should be classified, recognized, or identified. Object classification can be effectively accomplished through the extraction and selection of characteristic quantities [27]. The characteristic quantity should not be affected by object movement that includes image translation, rotation, and scaling [33].

Hu moments are, by definition, invariant to scale, translation, and rotation [27, 34, 35]. The binarized image can be expressed as a two-dimensional density function, so the invariant moments can be applied to analyze the image characteristics. For a two-dimensional image, the seven invariant moments φ1, φ2, …, φ7 are used as the characteristic values.

To recognize the traffic light countdown time, the characteristic values of the templates for the digits 0 to 9 play an important role in the pattern matching process. Because of the influence of lighting and shooting angle, the seven characteristic values of the binary image of a digit a (a = 0, 1, 2, …, 9) vary. To improve the identification accuracy, in this research thirty characteristic values (ηi,1 to ηi,30, i = 1, …, 7) were obtained for each digit a (a = 0, 1, 2, …, 9); first the false values are eliminated according to the error processing criterion, and then the average of the remaining η1,j is calculated as the template characteristic value η1. Similarly, η2, η3, η4, η5, η6, and η7 are obtained as template characteristic values.
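A minimal sketch of computing the seven Hu moments and building a template value is given below. The use of OpenCV, the logarithmic scaling, and the 2-sigma outlier rule (standing in for the paper's unspecified "error processing criterion") are assumptions.

```python
import cv2
import numpy as np

def hu_feature_vector(binary_digit):
    """Return the seven Hu invariant moments of a binarized digit image.

    Log-scaling (a common practice, not prescribed by the paper) compresses
    the large dynamic range of the raw moments before template matching.
    """
    m = cv2.moments(binary_digit.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(m).flatten()                    # phi_1 ... phi_7
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def template_value(samples):
    """Average per-dimension feature values after discarding outliers.

    A simple 2-sigma rule stands in for the paper's error processing
    criterion, which is not specified in detail.
    """
    s = np.asarray(samples, dtype=float)        # shape: (n_samples, 7)
    mu, sigma = s.mean(axis=0), s.std(axis=0)
    s = np.where(np.abs(s - mu) <= 2 * sigma, s, np.nan)
    return np.nanmean(s, axis=0)
```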

2) Pattern matching based on neural networks.

Neural Networks (NNs) have been widely used in pattern recognition, function approximation, and many other application areas in recent years and have shown their strength in solving hard problems.

This module applies neural network technology to improve the accuracy of the pattern matching process. Formally, a neural network consists of Q layers, where the first layer is the input layer, the Q-th layer is the output layer, and the intermediate layers are hidden layers. Since the seven invariant moments are used as the image characteristic values, the number of input nodes m is 7; the total number of categories a is 11, so the number of output nodes n is 11. The number of nodes in the hidden layer can be determined by Eq 4: (4)

Here n1 is the number of nodes in the hidden layer; n1 = 15 in this paper. This module uses the TRAINSCG scaled conjugate gradient method as the weight learning algorithm, and 147 samples were used for neural network training. The gradient normalization errors of these samples after 49 iterations are shown in Fig 9.

Fig 9. The gradient normalization errors of these samples.

https://doi.org/10.1371/journal.pone.0253201.g009

From Fig 9, the neural network test errors are concentrated around −0.0158, which indicates that the neural network can be used for traffic light template matching. Using this method, the recognition accuracy over the 90 traffic-intersection pictures taken in this project is more than 90%.
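As a rough counterpart to this setup, the sketch below configures a network with 7 inputs, one hidden layer of 15 nodes, and 11 output classes, as stated in the text. scikit-learn offers no scaled-conjugate-gradient (TRAINSCG) solver, so L-BFGS is used as a stand-in, and the training data are random placeholders rather than the paper's 147 Hu-moment samples.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# 7 Hu-moment inputs, one hidden layer of 15 nodes, 11 output classes
# (digits 0-9 plus one extra category), as described in the text.
rng = np.random.default_rng(0)
X = rng.normal(size=(147, 7))        # 147 training samples, as in the paper
y = rng.integers(0, 11, size=147)    # placeholder labels, illustration only

clf = MLPClassifier(hidden_layer_sizes=(15,), solver="lbfgs", max_iter=500)
clf.fit(X, y)
print(clf.predict(X[:5]))
```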

V. The analysis of traffic flow

A. The theoretical analysis of road congestion judgment

This module applies statistical principles to analyze and judge the road conditions in order to reduce the frequency of engine starts and stops during rush hour.

According to queuing theory in operations research, for vehicles arriving singly at random, the average arrival rate is λ and the average arrival interval between two cars is 1/λ over a certain distance. The departure rate in a single lane is μ, so the average service time is 1/μ. The ratio ρ = λ/μ is called the traffic intensity factor and is used to determine the various road statuses. When ρ < 1 (λ < μ) and enough time passes, each status of this road (jammed, semi-jammed, smooth flow) recurs. When ρ > 1, each status is unstable and the queue length grows without an upper limit. Thus, the condition for maintaining a stable state that ensures single-channel queue evacuation is ρ < 1. Studies show that when the traffic intensity ρ exceeds 1.1, the queue length increases rapidly and the service level declines rapidly [36].
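A tiny worked example of this criterion follows. The ρ > 1.1 congestion threshold comes from the text; the vehicle counts and the 60-second observation interval below are illustrative assumptions.

```python
def traffic_intensity(arrivals, departures, interval_s):
    """Estimate rho = lambda / mu from counts observed over one interval."""
    lam = arrivals / interval_s      # average arrival rate (veh/s)
    mu = departures / interval_s     # average departure (service) rate (veh/s)
    return lam / mu

rho = traffic_intensity(arrivals=22, departures=18, interval_s=60)   # rho ~ 1.22
print(rho, "congested" if rho > 1.1 else "not congested")
```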

This study observed many city roads while considering the principle just described. One representative road with A and B intersections was chosen for presentation here. Table 1 shows statistics for the same period of 10 working days:

Fig 10 shows a scatter diagram made from twenty groups of measured data (from Table 1) using the error processing criterion to rule out accidental values.

Fig 10. The relation between ρ and the number of the car stops.

https://doi.org/10.1371/journal.pone.0253201.g010

Fig 10 shows that ρ is directly correlated with the number of car stops, so ρ is regarded as the reference for judging road conditions. Moreover, the number of car stops per unit time is another measure of road congestion.

B. The basic principles of traffic judgment

When ρ exceeds 1.1, this system considers the road to be congested. From Table 1, the stop frequency f is 3 stops per 58.536 seconds. Hence, if the number of stops exceeds 3 during t0 seconds (t0 = 58.36 s), the system judges the road to be congested.

When this module works, it records the times of the latest three stops and starts of the vehicle, then calculates and analyzes the total time spanned by these three stops. If the total time is less than t0 and the two stop durations are each less than 5 s, the engine stops. If the total time is over t0, the system is turned off by default according to the basic principles of traffic judgment.
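A minimal sketch of the stop-history bookkeeping described above is given below: it records the start times of the latest three stops and flags congestion when they fall within the t0 window. The class interface is illustrative, and the engine-control decision itself is omitted because the stated stop/turn-off rule depends on details (such as how the two 5-second durations are measured) that are not fully specified.

```python
from collections import deque

T0_S = 58.36  # window from the text; the paper also quotes 58.536 s elsewhere

class StopHistory:
    """Track the start times of the latest three vehicle stops.

    Congestion is flagged when three stops occur within the t0 window, per
    the basic principle above. The interface is an illustrative assumption.
    """

    def __init__(self, window_s=T0_S):
        self.window_s = window_s
        self.stop_times = deque(maxlen=3)   # keep only the latest three stops

    def record_stop(self, t):
        self.stop_times.append(t)

    def road_congested(self):
        if len(self.stop_times) < 3:
            return False
        # Three stops packed into less than t0 seconds -> congestion.
        return (self.stop_times[-1] - self.stop_times[0]) < self.window_s

hist = StopHistory()
for t in (0.0, 15.0, 32.0):   # three stops within ~32 s
    hist.record_stop(t)
print(hist.road_congested())  # True
```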

Since, on city roads, a car mainly stops in traffic jams or at traffic lights, this system can save fuel and improve the driving experience.

VI. System test

The intelligent engine start-stop trigger system software, based on MATLAB, was designed according to the image recognition module and the digital traffic analysis module described in this paper. Fig 11 shows the software interface.

The software includes simulations of road status and vehicle status. When the trigger system is activated, each step of the image processing is shown on the interface, and the final result determining whether or not the engine keeps working appears on the interface. This study tested many kinds of road status; Fig 11 shows the test processing for typical road statuses.

  1. ✓. When the remaining red-light time is over 5 s, the engine stops; otherwise it does not stop.
  2. ✓. If a road has no traffic lights but is congested, the digital traffic analysis module activates to detect the road status and determine whether the engine must stop.

These results are consistent with the principle of the IEST system, and from system testing conducted in this study, the IEST with the described software platform has been determined to work correctly under these road statuses. It also shows that the theory underlying IEST is correct and feasible.

VII. Conclusion

This paper proposed a novel IEST system that reduces unnecessary stops by adopting image recognition technology and numerical statistical techniques that can judge the road condition. This paper also studied the system's ability to judge road conditions and established a reference for congestion evaluation. Overall, this system is of great significance for further promoting the traditional engine start-stop system so as to achieve energy savings and lower CO2 emissions.

Acknowledgments

We would like to acknowledge Mr. Hongjie Liu for providing the data for the study. Additionally, special gratitude goes to Guolian Meng for her collaboration on the investigation and analysis. Thanks to the anonymous reviewers for their very helpful comments and suggestions regarding an earlier version of this paper.

References

  1. 1. Sheridan Thomas B. Telerobotics, Automation, and Human Supervisory Control[M]. Cambridge MA: The MIT Press,8,1992: 301–319.
  2. 2. Chen Kebin, Zhiqiang Yu, Shawn Deng, Wu Qiang, Zou Jianxin, Zeng Xiaoqin. Evaluation of the low temperature performance of lithium manganese oxide/lithium titanate lithium-ion batteries for start/stop applications[J]. Journal of Power Sources. 2012, 278: 411–419.
  3. 3. J. Zhen. The breakthrough needed to be made for the start-stop system to "standard". http://www.cinn.cn/qc/309223.shtml [OL]. 1/24/2014: 33–40.
  4. 4. Fonseca Natalia., Jesús Casanova, Manuel Valdés. Influence of the stop/start system on CO2 emissions of a diesel vehicle in urban traffic[J]. Transportation Research Part D,2011(16): 194–200.
  5. 5. Ibarra David, Ricardo A Ramirez-Mendoza, Edgar López Rogelio Bustamante. Influence of the automotive Start/Stop system on noise emission[C]: Experimental study Applied Acoustics. 2015(12), 100:55–62.
  6. 6. De La Escalera A., Ma Armingol. J., Mata M. Traffic sign recognition and analysis for intelligent vehicles[J]. Image and Vision Computing, 2003(21): 247–258.
  7. 7. M. L, Z. X, Z. L., The study of the intelligent start-stop system[J]. Automobile Electric. 2015 (1):15–18.
  8. 8. Buluswar S., Draper B. Color recognition in outdoor images[C]. Sixth International Conference on Computer Vision, IEEE, 1998(1): 47–49.
  9. 9. Ouyang Zhenchao, Niu Jianwei, Yu Liu, Mohsen Guizani. Deep CNN-based real-time traffic light detector for self-driving vehicles, IEEE transactions on Mobile Computing., 19 (2), 2019:300–313.
  10. 10. Wu Tong, Pan Zhou, Kai Liu, Yali Yuan, Xiumin Wang, Huawei Huang, et al. Multi-agent deep reinforcement learning for urban traffic light control in vehicular networks, IEEE Transactions on Vehicular Technology. 28/May/2020: 8243–8256
  11. 11. Liang Xiaoyuan, Xunsheng Du, Guiling Wang, Zhu Han. A deep reinforcement learning network for traffic light cycle control, IEEE Transactions on Vehicular Technology. 68 (2),2019:1243–1253.
  12. 12. Ibarra David., Ricardo A., Ramirez-Mendoza. , Edgar López. Influence of the automotive Start/Stop system on noise emission[C]: Experimental study Applied Acoustics., 100,15/December/2015:55–62.
  13. 13. De La Escalera A., Ma Armingol. J., Mata. M. Traffic sign recognition and analysis for intelligent vehicles[J]. Image and Vision Computing, 2003(21): 247–258.
  14. 14. Hirabayashi Manato, Sujiwo Adi, Monrroy Abraham, Kato Shinpei, Edahiro Masato. Traffic light recognition using high-definition map features, Robotics and Autonomous Systems. 2019 (111): 62–72.
  15. 15. Possatti Lucas C., Rânik Guidolini, Vinicius B. Cardoso, Rodrigo F. Berriel, Thiago M. Paixão, Claudine Badue, et al. Traffic light recognition using deep learning and prior maps for autonomous cars. in: 2019 International Joint Conference on Neural Networks (IJCNN), IEEE. 2019:1–8. https://doi.org/10.1109/IJCNN.2019.8851927
  16. 16. Jiankang Deng, Jia Guo, Niannan Xue, Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019:4690–4699. https://doi.org/10.1109/CVPR.2019.00482
  17. 17. Yang H., Yuan C., Li B., Du Y., Xing J., Hu W., Maybank S. J., Asymmetric 3D convolutional neural networks for action recognition, Pattern Recognition. 2019 (85): 1–12.
  18. 18. Cheng E.-J., Chou K.-P., Rajora S., Jin B.-H., Tanveer M., Lin C.-T., et al., Deep sparse representation classifier for facial recognition and detection system, Pattern Recognition Letters. 2019 (125):71–77.
  19. 19. Guan H., Kasahara R., Yano T., Traffic light recognition and dangerous driving events detection from surveillance video of vehicle camera, Electronic Imaging. 2017:3–10. 4.SRV-349.
  20. 20. Thippa Reddy Gadekallu, Praveen Kumar Reddy, Lakshman Kuruva, Kaluri Rajesh, Dharmendra Singh Rajput, Gautam Srivastava, et al. Analysis of dimensionality reduction techniques on big data, IEEE Access, IEEE. March16, 2020 (Volume: 8): 54776 – 54788, https://doi.org/10.1109/ACCESS.2020.2980942
  21. 21. Praveen Kumar Reddy Maddikunta, Gautam Srivastava, Thippa Reddy Gadekallu, Natarajan Deepa, Prabadevi Boopathy, Predictive model for battery life in IoT networks, IET Intelligent Transport Systems, November 2020, 14(11): 1388–1395.
  22. 22. Abdul Rehman Javed, Muhammad Usman, Saif Ur Rehman, Mohib Ullah Khan, Mohammad Sayad Haghighi. Anomaly Detection in Automated Vehicles Using Multistage Attention-Based Convolutional Neural Network. IEEE Transactions on Intelligent Transportation Systems, IEEE, October 01, 2020. https://doi.org/10.1109/TITS.2020.3025875
  23. 23. Rehman Abdul, Saif Ur Rehman,Mohibullah Khan, Mamoun Alazab, Thippa Reddy G. CANintelliIDS: Detecting In-Vehicle Intrusion Attacks on a Controller Area Network using CNN and Attention-based GRU. IEEE Transactions on Network Science and Engineering, IEEE, Feb, 19,2021. https://doi.org/10.1109/tnse.2020.3032117 pmid:33997094
  24. 24. Xiao Xiao. Mechanism and Method of Engine Start and Stop Control for Hybrid Electric Vehicle [D]. Qinhuangdao: Yanshan University, 2018:108–115.
  25. 25. de la Escalera A., a Armingol J., Mata M., Traffic sign recognition and analysis for intelligent vehicles, Image and Vision Computing. 21 (3), 2003: 247–258.
  26. 26. Thippa Reddy Gadekallu, Dharmendra Singh Rajput, M. Praveen Kumar Reddy, Kuruva Lakshmanna, Sweta Bhattacharya , Saurabh Singh, et al. A novel PCA–whale optimization-based deep neural network model for classification of tomato plant diseases using GPU. Journal of Real-Time Image Processing, Springer, June 12, 2020. https://doi.org/10.1007/s11554-020-00987-8
  27. 27. De A., la Escalera L. E. Moreno , Salichs M. A, Armingol J. M., Road traffic sign detection and classification, IEEE Transaction on Industrial Electronics. 44 (6), 1997: 848–859.
  28. 28. Kim S.-K., A new approach for road sign detection and recognition algorithm, in: Proc. Int Symp. Automot Technology Autom, Vol. 30, Motion and Machine Vision in the Automotive Industries. 1997:171–178.
  29. 29. Zadeh Mahmoud M., T. Kasvand, Ching Y. Suen M., Localization and recognition of traffic signs for automated vehicle control systems, in: Intelligent Transportation Systems. 27 January 1998.
  30. 30. Overton K., Weymouth T., A noise reducing preprocessing algorithm, in: Proc. IEEE Computer Science Conf. on Pattern Recognition and Image Processing.1979:498–507.
  31. 31. S. Ambellouis, F. Cabestaing, Motion analysis with a time delayed neural network, in: Symposium on robotics and cybernetics (Lille, July 9–12, 1996).1996:328–332.
  32. 32. Grisan Enrico, Marco Foracchia, Alfredo Ruggeri, A novel method for the automatic grading of retinal vessel tortuosity, Medical Imaging IEEE Transactions on. 2008 Mar;27(3):309–310. pmid:18334427
  33. 33. Lekshmi S., Revathy K., Nayar S. P., Galaxy classification using fractal signature, Astronomy & Astrophysics. 405 (3) 2003:1163–1167.
  34. 34. Bowman E. T., Soga K., Drummond W., Particle shape characterisation using fourier descriptor analysis, G´eotechnique. 51 (6): 2001:545–554.
  35. 35. Russell J. C., Hasler N., Klette R., Rosenhahn B., Automatic track recognition of footprints for identifying cryptic species, Ecology. 90 (7), 2009: 2007–2013. pmid:19694147
  36. 36. Hu M. K., Visual pattern recognition by moment invariants, Information Theory Ire Transactions.8 (2) (19), 1962: 179–187.