
Application of an ensemble CatBoost model over complex dataset for vehicle classification

Abstract

The classification of vehicles presents notable challenges within the domain of image processing. Traditional models suffer from inefficiency, prolonged training times, intricate feature extraction, and complex variable assignment for classification. Conventional methods applied to categorize vehicles from extensive datasets often lead to errors, misclassifications, and unproductive outcomes. Consequently, machine learning techniques emerge as a promising solution to these challenges. This study adopts a machine learning approach to reduce image misclassification and manage large quantities of vehicle images effectively. Specifically, a contrast enhancement technique is employed in the pre-processing stage to highlight pixel values in vehicle images. In the feature segmentation stage, Mask R-CNN is utilized to categorize pixels into predefined classes. VGG16 is then employed to extract features from vehicle images, while an autoencoder aids feature selection by learning non-linear input features and compressing the feature representation. Finally, the CatBoost (CB) algorithm is implemented for vehicle classification (VC) in diverse critical environments, such as inclement weather, twilight, and instances of vehicle occlusion. Extensive experiments are conducted on different large-scale datasets with various machine learning platforms. The findings indicate that CB attains the highest performance on the large-scale UFPR-ALPR dataset, with an accuracy of 98.89%.

1. Introduction

In contemporary urban environments, surveillance cameras have become ubiquitous. The primary purpose of deploying these surveillance systems revolves around two key objectives: real-time monitoring and event retrieval [1, 2]. This paper focuses exclusively on the latter, emphasizing the importance of event retrieval for law enforcement agencies. For instance, it aids police officers in searching for specific vehicles [3, 4]. To accomplish this task effectively, officers require detailed information about a vehicle’s characteristics, including its color and type, which serve as essential clues for vehicle identification [5, 6]. Unfortunately, officers often invest a substantial amount of time manually monitoring recorded videos. Typically, the time spent searching exceeds the duration of the video itself, necessitating multiple repetitive search attempts. Furthermore, fatigue can set in after prolonged searching, potentially leading to misclassification and errors in the process [7, 8]. The existing system suffers from inaccuracies and errors due to manual operations conducted by individuals, and these inaccuracies result in financial losses for those employed by the supplier. This paper sparks innovative solutions for addressing these existing problems [9]. The paper aims to develop an efficient real-time VC system by leveraging ten predominant large-scale vehicle datasets [10]. To facilitate VC based on distinctive features, the paper employs the CB algorithm, which provides a gradient-boosting framework [11, 12]. This algorithm introduces a unique approach for handling categorical features through a permutation-driven method, deviating from conventional algorithms. The computer vision methodology for VC predominantly focuses on preprocessing, feature selection, feature extraction, and the classification process. Image processing is used for recognizing patterns, whereas machine learning is used to train the system to identify pattern changes [13].
Thus, image processing using machine learning techniques plays a vital role in the modern era for VC. The initial image data is sourced from various large-scale datasets, including the Stanford Car, Vehicle-Rear, MVVTR, Indian Vehicle, TRANCOS, Thai Vehicle Classification, 2023 Car Model, UFPR-ALPR, VehicleX, and CompCars datasets [14, 15]. After acquisition of the vehicle datasets, the extraction and curation of vehicle features are performed. These selected features are instrumental in enhancing the training process for the classifier, resulting in more precise vehicle categorization [16, 17]. The features encompass both global and local descriptors. The initial stage in the VC workflow is preprocessing [18, 19]. Directly employing raw data or images for classification is unfeasible due to the potential for inaccuracies arising from image noise, geometric distortions, variations in image size, and color inconsistencies [20, 21]. Thus, preprocessing techniques are employed to reduce noise and improve pixel quality; the background is eliminated, and specific vehicle locations are identified [22]. The result of the pre-processing technique is fed as input to feature segmentation, in which the vehicle images are masked so that pixels are classified into pre-defined classes [23, 24]. After feature segmentation, the feature extraction process is carried out; in this step, the large training data is handled so that it can be fed into the next stage, namely feature selection. In the feature selection process, accurate features are defined. Once accurate features are defined, the vehicle images are classified by class, type, and model using the classification algorithm in the classifier model. The classifier algorithm plays a vital role in the classification of vehicles.
Considering mathematical and statistical parameters, the vehicle images are classified from the large-scale dataset [25, 26]. There are many classifier algorithms: support vector machine (SVM), decision trees, Naïve Bayes (NB), K-nearest neighbor (KNN), k-means clustering, logistic regression (LR), linear discriminant analysis, random forest (RF), cluster analysis, and quadratic classifiers [27, 28]. Among these, decision trees furnish a strong method for decision making, since they enumerate the possible outcomes; however, overfitting, bias error, and variance error are drawbacks of decision trees [29]. These are overcome by gradient boosting algorithms, which support both classification and regression. There are four main types of boosting algorithms in machine learning: Gradient Boosting, extreme gradient boosting (XGBoost), light gradient boosting (LightGBM), and CB [30, 31]. Of these, CatBoost has notable advantages: it is more accurate than other classifiers, trains faster on large-scale datasets, handles missing values, supports categorical features, trains on multiple GPUs, performs well under assumed parameters, predicts quickly, and handles both classification and regression problems [32]. VC from large-scale datasets is a great challenge in computer vision. In this paper, different large-scale datasets, namely the Stanford Car, Vehicle-Rear, MVVTR, Indian Vehicle, TRANCOS, Thai VC, 2023 Car Model, UFPR-ALPR, VehicleX, and CompCars datasets, are utilized for VC [33, 34].

The primary contributions of this paper can be succinctly outlined as follows:

  • A contrast enhancement technique is used in pre-processing to differentiate image features, outlining the pixel values and improving image quality.
  • Mask R-CNN is used for feature segmentation, generating a segmentation mask of the vehicle and classifying pixels into pre-defined categories.
  • An autoencoder is used as the feature selection method to retain the most informative features.
  • The CatBoost algorithm is used as the classification technique to classify vehicles from large-scale datasets.

The subsequent sections of this paper are structured as follows: Section 2 discusses related works and outlines the problem description. Section 3 elucidates the research methodology of the proposed approach, including a detailed presentation of the VC method employing the CatBoost algorithm. Section 4 presents comprehensive experiments and in-depth analysis. Finally, the last section provides a summary of the research endeavors.

2. Related works

This section reviews the most recent work related to VC over large-scale datasets using classifier techniques. Poor camera installation, bad weather conditions, and vehicle occlusion lead to inaccuracies in vehicle classification. VC is mainly concerned with feature extraction and classification using a classifier: the classifiers are trained with the extracted features, and the resulting model is used to classify vehicles from the large-scale dataset [35]. Since vehicles are classified from raw data and video frames, there is a chance of leaking confidential vehicle-owner information; hence, the sizes of buses, cars, and motorcycles measured using seismic waves are considered instead. This approach is compared with LR, SVM, and NB; in this scenario, NB gains an F1 score of up to 97%. However, traffic flow, speed, direction, and driver safety are not considered, bicycles are excluded, and only a single dataset is used [36, 37]. For vehicle re-identification, vehicle databases are built by integrating global and local features; a channel attention module is used for feature extraction, and weighted features eliminate noise and background. The drawback is that only three datasets are considered for vehicle re-identification, and light motor vehicles are excluded [38, 39]. Using SVM and OCR, vehicles are classified with an accuracy of 98.3%, but only a single dataset is used [40, 41]. Using IR and ultrasonic sensors, vehicles are detected as sedans, pick-ups, SUVs, and two-wheelers with an accuracy of 99%, but the classifier-based classification accuracy is not reported precisely [42]. 100% accurate VC is achieved using the SSD method, but the camera is mounted on top of another vehicle, and a short, single dataset is considered. The DAWN, CDNet 2014, and LISA 2010 datasets are considered for vehicle detection using an improved version of YOLO, but it attains an accuracy of only 95.67% [43].
An edge-empowered cooperative multi-camera sensing system is proposed for vehicle tracking over two vehicle datasets with a maximum accuracy of 92.43%, but its limitation is that it does not work under severe climatic conditions [44, 45]. Using classifiers such as Naïve Bayes, vehicle faults [46] are identified from features such as the temperature, noise, and vibration of the vehicles, and parking management systems [47] are also implemented. Similarly, using a KNN classifier across four types of scenarios, vehicles are classified with the help of forward-scattering radars, gaining an accuracy of 99% [48], although only car signatures are taken into consideration [49]. For fast classification of vehicles on the road, LightGBM, KNN, and SVM are used; among these trials, LightGBM classifies vehicles at a faster rate of 0.015 s [50], while decision trees classify vehicles with an accuracy of 99.38% [51]. To boost the accuracy of vehicle detection, an XGBoost classifier is used, but it attains an accuracy of 97.07% on a single dataset; XGBoost is also used to classify two-wheelers, including e-bikes, across two large-scale datasets with an accuracy of 99% [52]. For prediction analysis, random forests and deep neural networks are used, reaching a maximum accuracy of 96.6% [53]. Using WiFi, vehicles are classified with an accuracy of up to 100%, but this method is limited to extracting the peak value of the signal; if the signal is weak, there is a risk of misclassification [54, 55]. Concerning vehicle features such as color and shape, vehicles are classified from vehicle datasets using the CatBoost algorithm, with light motor vehicles over two datasets considered for VC. In some cases, hybrid models are used for detection purposes: in [56], the authors propose a hybrid CNN-CatBoost model to classify test samples and make predictions with an accuracy of 96.15%.
A hardware accelerator, an FPGA, is used for faster processing at low power for VC [57], as well as lane detection, traffic signal recognition, and obstacle detection [58]; in this work, more features are considered for vehicle classification. Table 1 summarizes the performance of these classifiers.

Thus, the literature survey concludes that traditional datasets, namely Cars-Conventional Engine and EVs, have limitations such as range extenders and insufficient mass-produced models. Even the Bangladeshi vehicle dataset faces significant challenges in classifying vehicle images when deployed in models such as VGG19 and ResNet-152. Existing vehicle datasets have issues with intra-class variation, scalability, and angle variation, which leads to poor accuracy when classifying vehicles from large-scale vehicle datasets. Similarly, vehicles of similar size and color are hard to distinguish by category, and noise and unwanted features also arise in the feature extraction process. Therefore, the proposed technique is implemented to overcome all these challenges by employing an ensemble algorithm, namely CatBoost. This classifier is used to categorize vehicle images into one or more classes from a large-scale dataset, reducing high-dimensional features and model training time with a simple model that improves VC accuracy. This approach achieves a remarkable accuracy of 98.89% in VC, particularly using the CatBoost algorithm, on a large-scale dataset.

3. Material and methods

A vehicle dataset is a collection of vehicle images of different makes and models, captured with various devices such as digital single-lens reflex cameras, camcorders, and mobile phones, across various times, locations, and weather conditions. From a large-scale vehicle dataset, the vehicle classes and label information are extracted with advanced technologies to classify vehicles by size, shape, color, make, and model. Various vehicle factors and parameters can be predicted with the development of machine learning techniques. To achieve precise classification of visual content from a large-scale dataset, an ensemble algorithm is utilized. This encompasses various processes such as preprocessing, feature extraction, segmentation, selection, and the use of a classifier. Fig 1 illustrates the block diagram of the proposed VC using CB.

Fig 1. An outline of the envisaged framework for vehicle classification.

https://doi.org/10.1371/journal.pone.0304619.g001

Fig 1 provides an outline of the envisaged framework for vehicle classification in adverse weather conditions. Vehicle features are derived using pre-trained VGG16, denoted by the pink dotted line. The subsequent vehicle categorization is executed by the CB algorithm, represented by the red dashed line. The blue dashed line illustrates the process of classifying vehicles into various forms.

3.1. Acquisition of vehicle datasets

Vehicle datasets are built from vehicle data obtained through observation, calibration, and analysis. The information takes the form of numbers, figures, labels, or basic descriptions. The vehicle images are fetched from Google Open Images and Kaggle competitions. The vehicle datasets used for the proposed VC are listed as follows. Fig 2 portrays the collection of various large-scale vehicle datasets for VC.

Fig 2. Sample images from various vehicle datasets.

(A) Stanford car dataset (B) Vehicle–Rear dataset (C) MVVTR dataset (D) Indian Vehicle dataset (E) TRANCOS dataset (F) Thai Vehicle Classification Dataset (G) 2023 Car Model dataset (H) UFPR-ALPR dataset (I) VehicleX dataset (J) CompCars dataset.

https://doi.org/10.1371/journal.pone.0304619.g002

3.1.1 Stanford car dataset.

The dataset contains nearly 16K images of cars of different categories, organized by make, model, and year. It also contains 3D orientations for multi-view object class identification. (https://www.kaggle.com/datasets/jessicali9530/stanford-cars-dataset).

3.1.2 Vehicle–Rear dataset.

The Vehicle-Rear dataset is used for vehicle identification and contains HD videos with precise information on the make, model, color, and year of the vehicles. It is a novel dataset containing 3K vehicles and is further used for the identification and localization of vehicle license plates. (https://paperswithcode.com/dataset/vehicle-rear).

3.1.3 MVVTR dataset.

The multi-view VTR dataset contains 7K real vehicle images of different types, with 1K images per type. Images are taken from different angles, with the license plates hidden for security purposes. The images are collected from internet search engines, and the vehicle images are labelled. (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8004504).

3.1.4 Indian vehicle dataset.

The Indian Vehicle dataset consists of images containing vehicles, used for VC and object identification. The images contain various types of vehicles and have been taken under different climatic conditions, with varying illumination, distances, and viewpoints; the dataset is also used for image recognition and object detection in autonomous driving. (https://www.kaggle.com/datasets/dataclusterlabs/indian-vehicle-dataset).

3.1.5 TRANCOS dataset.

The Traffic and Congestion (TRANCOS) dataset handles overlapping vehicles in traffic situations and is used for vehicle counting. It consists of 1244 images with 46796 annotated vehicles. The images are taken from CCTV footage provided by the Spanish government. (https://gram.web.uah.es/data/datasets/trancos/index.html).

3.1.6 Thai vehicle classification dataset.

The dataset was prepared by a road maintenance unit under the Department of Rural Roads of Thailand. It contains 6.3 TB of videos from 23 cameras over 3 days in 2020. The sample classes are car, bus, taxi, bike, pick-up, truck, and trailer, with a total of 29474 samples. (https://www.linkedin.com/pulse/thai-vehicle-classification-dataset-bipul-neupane/).

3.1.7 2023 car model dataset.

The 2023 Car Model dataset includes a comprehensive collection of information about cars. The details include horsepower, torque, transmission type, number of doors, price, model, and body of the vehicles. (https://www.kaggle.com/datasets/anoopjohny/2023-cars-dataset).

3.1.8 UFPR-ALPR dataset.

The UFPR-ALPR dataset has 4K vehicle images from different angles, containing 30K characters captured under real conditions. Both the vehicle and the camera are in motion. The vehicle images are 1920x1080 pixels, captured with GoPro Hero4 Silver, Huawei P9 Lite, and iPhone 7 Plus cameras. (https://web.inf.ufpr.br/vri/databases/ufpr-alpr/).

3.1.9 VehicleX dataset.

VehicleX is a complex, scalable vehicle dataset. It contains 1362 vehicle images rendered from various 3D models with fully editable attributes. (https://paperswithcode.com/dataset/vehiclex).

3.1.10 CompCars dataset.

The CompCars dataset contains both web and surveillance images, enabling cross-modality analyses of cars. The dataset contains nearly 136727 car images and 27K images of car parts. Its hierarchy of attributes and viewpoints of the cars are its unique features. (https://mmlab.ie.cuhk.edu.hk/datasets/comp_cars/).

Fig 2 shows the great diversity in terms of climatic conditions, image quality, vehicle type, shape, color and model, occlusion, reflection, and resolution.

3.2 Pre-processing

Once the dataset is collected, it is sent to the pre-processing stage, since raw images fed directly into the classifier model lead to poor image-classification results. Thus, for noise removal, pre-processing techniques are applied to boost the vehicle image features. An efficient method, contrast enhancement, is used to differentiate the various vehicle image features in the input images. This method increases the contrast of a digital image by mapping the pixel values onto a wider range, improving the visibility of details and features in the vehicle images. The technique is a form of image magnification in which the quality of an image is improved by expanding its intensity values; the minimum and maximum intensities are stretched to the maximum extent to avoid poor-quality pre-processing. The contrast stretching formula is given by

(1) v' = ((v − vmin) / (vmax − vmin)) × 255

Fig 3 shows the original vehicle dataset, from which the intensity variation map is computed. From this output, a sigmoid mapping function is evaluated for each contextual region, and the intensities are transformed back into the real vehicle image by linear interpolation. The vehicle images undergo enhancement through contrast enhancement techniques, denoted by the pink highlighting.

In Eq (1), v is the existing pixel intensity value, and vmin and vmax denote the minimum and maximum intensity values within the entire image, respectively. The output of the proposed pre-processing technique, namely the contrast enhancement method, is shown in Fig 3.
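As a concrete illustration, the stretching in Eq (1) can be sketched in a few lines of plain Python. This is an illustrative sketch with made-up pixel values, not the authors' implementation:

```python
# Minimal sketch of the contrast-stretching step in Eq (1): each pixel
# intensity v is rescaled linearly from [vmin, vmax] to the full 8-bit
# range [0, 255]. The tiny "image" below is illustrative data, not taken
# from any of the vehicle datasets.

def contrast_stretch(image, out_max=255):
    """Linearly stretch pixel intensities to [0, out_max]."""
    flat = [v for row in image for v in row]
    v_min, v_max = min(flat), max(flat)
    if v_max == v_min:                      # flat image: nothing to stretch
        return [[0 for _ in row] for row in image]
    scale = out_max / (v_max - v_min)
    return [[round((v - v_min) * scale) for v in row] for row in image]

low_contrast = [[100, 110], [120, 130]]     # intensities bunched in [100, 130]
print(contrast_stretch(low_contrast))       # -> [[0, 85], [170, 255]]
```

In practice this mapping would be applied per channel to full-resolution images; the sketch only demonstrates the arithmetic of Eq (1).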

3.3 Feature segmentation

Feature segmentation classifies pixels into classes. In this paper, Mask R-CNN is used for vehicle feature segmentation. Image segmentation generates a pixel-wise mask for each vehicle present in the image, along with a class label and bounding-box coordinates for masking. Initially, features are extracted from the fetched image using a ResNet architecture. These features are fed to the next layer to obtain the feature map, and the mapped features are passed to the region proposal stage, which predicts vehicle locations in the image. The proposed regions are of various shapes; a pooling layer converts all regions to the same shape. After each region passes through a fully connected network, its class label and bounding box are predicted, and Mask R-CNN then generates the segmentation mask. RoI pooling is performed for quick computation. The IoU with the ground-truth box is calculated using (2)

(2) IoU = AoI / AoU

where AoI is the area of intersection and AoU is the area of union. The IoU must be greater than or equal to 0.5 for an RoI to be considered. Based on the region of interest (RoI) and intersection over union (IoU), the mask is included in the current framework. The returned segmentation mask for each region contains vehicle images that are scaled up for inference; thus, vehicle images are predicted with the mask. The Mask R-CNN output is shown in Fig 4.

Fig 4. Prediction of each segmentation mask over region of interest.

https://doi.org/10.1371/journal.pone.0304619.g004

Fig 4 shows the prediction of each segmentation mask over the region of interest of the enhanced vehicle images, preserving the spatial information to form a fixed-size map.
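The IoU acceptance test described above can be sketched as follows; the boxes are axis-aligned (x1, y1, x2, y2) tuples with invented coordinates, and this is not the paper's implementation:

```python
# Minimal sketch of the IoU check of Eq (2): a region proposal is kept
# only when its IoU with the ground-truth box is at least 0.5.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)        # AoI
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                      # AoU
    return inter / union if union else 0.0

gt   = (0, 0, 10, 10)          # hypothetical ground-truth box
pred = (5, 0, 15, 10)          # hypothetical proposal, half overlapping
score = iou(gt, pred)          # intersection 50, union 150 -> 1/3
print(score >= 0.5)            # -> False: this proposal would be rejected
```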

3.4 Feature extraction

Feature extraction is a dimensionality reduction method. After mask prediction of the vehicle images, the feature extraction process is executed. In this process, the best features are chosen from the large-scale vehicle dataset by selecting and fusing vehicle variables into features, which reduces the quantity of data and avoids long computation times while making it feasible to preserve the accuracy of the original form. To identify exact feature extraction and conversion, several methods are used, such as linear discriminant analysis, principal component analysis, and kernel principal component analysis. The VGG16 feature extraction model provides good accuracy on large data. In recent trends, one of the most popular image feature extraction techniques is VGG16, which performs better than the VGG19 and AlexNet deep learning (DL) models in classification tasks. VGG19 requires more network weights compared to VGG16. Similarly, for large-scale datasets, a deeper network with smaller filters gives rise to improved system performance compared to AlexNet [59–61].

Thus, in this paper the VGG16 DL model is implemented for feature extraction in the proposed work, applied to the collection of vehicle datasets. The extracted features are mainly shapes, colors, textures, and text; internal vehicle characteristics, such as the number of seats and internal load capacity, are excluded, as are low-level features. The network is able to learn the features when the classes are uniformly distributed; an unbalanced dataset is handled with a bias function, and the uneven distribution is useful for performing better on the larger vehicle classes. The proposed model utilizes diverse layers for feature extraction: convolutional, fully connected, and pooling layers form the major part of the architecture, with 16 convolutional and fully connected layers in total. The network input size is 224*224 pixels with a 3*3 filter size, and the activation function provides class probabilities at the output layer. The first step of feature extraction is to provide 224*224-pixel images to the first layer of the VGG16 network. The images are then fed through the various convolutional layers, with padding and stride fixed at one, so the architecture maintains the full spatial resolution and the activation dimensions match the base images. The activations are then processed by a pooling layer with a stride of 2 pixels and a 2x2 window, which reduces the activation size by half.

Additionally, in the second step the output is fed into further stacked convolutional layers, yielding 56*56*128-pixel feature maps. The last layer produces results for up to 1000 classes. The performance compared with GhostNet, ResNet10, ResNet50, and VGG16 is explained in Section 4. Vehicle feature extraction using VGG16 is demonstrated in Fig 5.

Fig 5. The segmented mask images, whose pixel values are scaled up and fed to the model as arrays.

https://doi.org/10.1371/journal.pone.0304619.g005
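The spatial-size arithmetic described above (stride-1, padding-1 3x3 convolutions preserve resolution; each 2x2 max-pool halves it) can be checked with a short sketch. The block structure below follows the standard VGG16 configuration; this is an illustrative size computation, not the authors' network code:

```python
# Sketch of the spatial-size arithmetic of the VGG16 backbone: 3x3
# convolutions with stride 1 and padding 1 leave the spatial resolution
# unchanged, and each 2x2 max-pool with stride 2 halves it.

def conv3x3_out(size, stride=1, pad=1, k=3):
    """Output spatial size of a square convolution."""
    return (size + 2 * pad - k) // stride + 1

size = 224                                   # input images are 224x224
blocks = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]
for n_convs, channels in blocks:
    for _ in range(n_convs):
        size = conv3x3_out(size)             # 3x3, stride 1, pad 1: unchanged
    size //= 2                               # 2x2 max-pool, stride 2
    print(f"after block: {size}x{size}x{channels}")
# The final 7x7x512 maps feed the fully connected layers, whose softmax
# output covers up to 1000 classes in the stock network.
```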

3.5 Feature selection

Feature selection is the process of nominating the most essential features in the image dataset that provide optimal model performance in machine learning. This technique is used to avoid confusion between models and to boost model performance and interpretability. There are various feature selection methods, such as filter, wrapper, and embedded methods, but recently autoencoders have shown a remarkable ability to compress and reconstruct images. The filter method lacks a deeper understanding of the data, the wrapper method makes the model computationally intensive, and the embedded method struggles with high-dimensional data. To overcome these issues, the autoencoder is implemented.

Initially, the dataset consists of vehicle measurements such as the dimensions, color, type, and speed of various vehicles. Each vehicle type is labelled properly, making it suitable for supervised learning. The data is loaded and pre-processed to make it ready to feed into the model, and the dataset is standardized to ensure that the features have a mean of 0 and a standard deviation of 1. This standardization helps training converge faster, since the features must be on an identical scale [62–64]. After standardization, the data is separated into training and testing sets. An autoencoder is constructed, in which a neural network learns to compress and reconstruct the input data. The architecture is defined by input, encoding, and decoding layers: the input layer matches the number of features in the dataset, the encoding layer reduces the data dimension, and the decoding layer reconstructs the input from the encoded representation. The data is then fed into the autoencoder, which is trained on the vehicle dataset; the network compresses the data and learns its features. This technique requires dependent and independent variables, a number of epochs, a batch size, and other parameters, and the model trains for the specified number of epochs. After training the autoencoder, the important features are extracted. Finally, the selected features are integrated into a predictive model using a machine learning algorithm to evaluate the performance of feature selection with autoencoders. The autoencoder model separates the encoder and decoder parts [65, 66]. The encoder is used as the front end of the classification model: the vehicle images are fed to the encoder, which compresses them into an information vector that serves as input to the classifier. In the case of the deep learning models, the deep features were reduced from 1000 to 100 by removing redundancy and keeping the most related features for each model. Accordingly, the 100 features from each individual model are considered.
Finally, the features are merged to form a strong feature set, and a subcategory of optimal features is nominated from the provided dataset. For machine learning models, these tasks are important for image classification with the CB algorithm; the overall accuracy of the learning model rises with the help of the optimized features. Initially, each classifier was processed independently and 9 major features were calculated from each algorithm; among the 4 feature subsets, only the 6 most common features were selected [67, 68]. In this way, hyperparameter tuning is performed, and the hyperparameters are tested using CB. The learning algorithm thus achieves greater performance on the evaluation metrics.
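The standardization step described above (each feature scaled to mean 0 and standard deviation 1 before it enters the autoencoder) can be sketched in plain Python; the feature values below are invented for illustration:

```python
# Minimal sketch of per-feature standardization: each column is shifted
# to zero mean and scaled to unit (population) standard deviation.
# The three "vehicles" and two features are made-up numbers.

def standardize(columns):
    """Scale each feature column to zero mean and unit standard deviation."""
    result = []
    for col in columns:
        mean = sum(col) / len(col)
        var = sum((v - mean) ** 2 for v in col) / len(col)
        std = var ** 0.5 or 1.0              # guard against constant columns
        result.append([(v - mean) / std for v in col])
    return result

lengths = [3.6, 4.2, 12.0]                   # metres (illustrative)
speeds  = [160, 180, 100]                    # km/h (illustrative)
z_lengths, z_speeds = standardize([lengths, speeds])
print(abs(round(sum(z_speeds), 6)))          # -> 0.0 (zero mean after scaling)
```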

3.6 VC using CatBoost algorithm

In the final step, the selected vehicle features are used to classify the vehicles based on class and model. A number of classifiers, including SVM, NB, RF, and CB, are trained and their performance is evaluated on test data. In this paper, the CB algorithm is implemented for VC with inter-class and intra-class variability in complex environments. CB is a supervised machine learning technique used for classification and regression. As its name suggests, it works with categorical data and uses boosting: a gradient boosting process in which decision trees are built iteratively. This technique has the advantage of consuming both categorical and non-categorical variables without pre-processing. Initially, the dataset of vehicle images is denoted by A, given in Eq (3)

(3) A = {(b_d, c_d)}, d = 1, …, m

The random vector of selected vehicle features is given in (4)

(4) b_d = (b_d^1, b_d^2, …, b_d^e) ∈ R^e

The target vehicle variable c_d ∈ R may be binary or numerical.

The pairs (b_d, c_d) are independent and identically distributed according to some unknown distribution m(·,·). The goal is to learn a function G: R^e → R that minimizes the expected loss, shown in (5)

(5) L(G) = E[K(c, G(b))]

Here K(·,·) is a smooth loss function and (b, c) is a test example drawn independently of the m training examples. The gradient boosting procedure builds a sequence of approximations G_n greedily and additively; the sequence approximation is given in (6)

(6) G_n = G_{n−1} + α O_n

where G_{n−1} is obtained from the previous iteration and the increment O_n minimizes the loss, as given in (7)

(7) O_n = argmin_{O ∈ O} L(G_{n−1} + O)

Here O_n: R^e → R is the fundamental (base) predictor,

taken from the family of base predictors O in order to minimize the expected loss, as expressed in (8)

(8) O_n = argmin_{O ∈ O} E[K(c, G_{n−1}(b) + O(b))]

This minimization problem over O is usually approached with a gradient step. The pseudo-gradient is given by (9)

(9) p_n(b, c) = −∂K(c, s)/∂s |_{s = G_{n−1}(b)}

The gradient step at iteration n chooses O_n so that O_n(b) approximates p_n(b, c), as denoted in (10)

(10) O_n(b) ≈ p_n(b, c)

In practice, a least-squares approximation is used, as in (11)

(11) O_n = argmin_{O ∈ O} E[(p_n(b, c) − O(b))^2]
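Under squared loss, the recursion in Eqs (6)–(11) reduces to repeatedly fitting a base predictor to the residuals and adding it with step size α. The following toy sketch, with an invented one-dimensional dataset and depth-1 stumps as base predictors, illustrates the idea; it is not the CatBoost implementation:

```python
# Toy gradient boosting with squared loss: each round fits a stump to
# the negative gradient (here simply the residuals c - G(b)) and adds
# it with step size alpha, i.e. G_n = G_{n-1} + alpha * O_n.

def fit_stump(xs, residuals):
    """Best single-threshold split minimizing squared error on residuals."""
    best = None
    for t in xs:
        left  = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        err = sum((r - (lv if x <= t else rv)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

xs = [1.0, 2.0, 3.0, 4.0]                    # invented feature values
cs = [0.0, 0.0, 1.0, 1.0]                    # invented targets
alpha, G = 0.5, [0.0] * len(xs)              # G_0 = 0
for _ in range(20):
    O = fit_stump(xs, [c - g for c, g in zip(cs, G)])   # fit residuals
    G = [g + alpha * O(x) for g, x in zip(G, xs)]       # additive update
print([round(g, 3) for g in G])              # -> [0.0, 0.0, 1.0, 1.0]
```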

The base predictor is a decision tree, which splits the feature space R^e into several disjoint nodes (regions) according to the values of some splitting attributes, denoted r. The attributes are binary variables used to identify whether some feature b_d^i exceeds a threshold S, that is, r = 1 when b_d^i > n, where b_d^i is a numerical or binary feature (with n = 0.5 used for binary features). In the regression task, each terminal region is represented as a leaf of the tree and assigned a value that estimates the response variable c. This approach effectively addresses classification challenges through decision trees. Hence, the decision tree O is represented in (12)

(12) O(b) = Σ_{u=1}^{U} r_u · 1{b ∈ H_u}

Hu is disjoint region about to leave the tress with effective and efficient way to deal that tree V is to subcategory of the dth are the sample training dataset having one feature is associated with target spaces. The features are in the form of numbers. An intermediate experiment target y conditioned by category . Commonly, it estimates experiment target y conditioned by category and it given in (13). The vehicle classification task is formulated in (14) (13) (14)

To see why such a greedy statistic leaks the target, suppose the i-th feature is categorical and all of its values in the training vehicle dataset are unique. Then each training sample receives the statistic given in (15):

b̂_d^i = (c_d + W χ) / (1 + W) (15)

so one split based on the threshold in (16) perfectly classifies all training examples:

S = (0.5 + W χ) / (1 + W) (16)

For testing, however, the value of the greedy target statistic for an unseen category is the prior χ, and the predicted model assigns 0 for χ < S and 1 otherwise, attaining an accuracy of only 0.5 in both the 0 and 1 cases. The desired property for target statistics that avoids this shift is formulated in (17):

E(b̂^i | c = v) = E(b̂_d^i | c_d = v) (17)

where (b_d, c_d) is the d-th training sample; for the greedy statistic, E(b̂_d^i | c_d) and E(b̂^i | c) are different. There are many ways to restrict this conditional shift. One of them is to calculate the target statistic for b_d on a subset A_d ⊂ A \ {b_d} which excludes b_d, as in (18):

b̂_d^i = (Σ_{b_j ∈ A_d} 1{b_j^i = b_d^i} · c_j + W χ) / (Σ_{b_j ∈ A_d} 1{b_j^i = b_d^i} + W) (18)

In Eq (18), the prior χ is commonly taken as the average target value over the dataset, with weight W. Based on the target statistics of the features, the vehicles are classified.
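A minimal numerical sketch of the two statistics, Eq (14) computed over all samples and Eq (18) computed on a subset excluding the current sample, using illustrative category and target values (not the paper's data):

```python
# Target statistics for a categorical feature: greedy (Eq 14) vs.
# hold-out (Eq 18), with prior value chi and prior weight W.
import numpy as np

categories = np.array(["suv", "sedan", "suv", "truck", "sedan", "suv"])
target = np.array([1, 0, 1, 0, 1, 0], dtype=float)
W, chi = 1.0, target.mean()       # prior weight and prior value

def greedy_ts(d):
    """Eq (14): smoothed target mean over ALL samples of the category."""
    mask = categories == categories[d]
    return (target[mask].sum() + W * chi) / (mask.sum() + W)

def holdout_ts(d):
    """Eq (18): the same statistic on A_d = A \\ {b_d} (sample d excluded)."""
    mask = categories == categories[d]
    mask[d] = False
    return (target[mask].sum() + W * chi) / (mask.sum() + W)

print([round(greedy_ts(d), 3) for d in range(len(target))])
print([round(holdout_ts(d), 3) for d in range(len(target))])
```

Excluding the current sample keeps its own target value out of its statistic, which is how the conditional shift of the greedy variant is restricted.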

4. Experimental results and discussion

This section presents the implementation results and analysis of the proposed approach. The VGG16 DL model is implemented, which consists of 16 weight layers (convolutional and fully connected), takes 224×224-pixel input images, and uses 3×3 filter sizes. Class probabilities at the output layer are produced with a softmax activation function. For the proposed VC system, 80% of the data is used for training and 20% for testing; the data is split sequentially into these subsets. From the given vehicle dataset, the input images and extracted images are shown in Fig 6. Fig 7 shows the output results of VC under inclement weather and for the types, colors and models of the vehicles. In Table 2, the proposed approach is compared with existing methods across different vehicle datasets with various classifiers.

Fig 6. Detection of vehicle with bounding box and class label output.

https://doi.org/10.1371/journal.pone.0304619.g006

Table 2. Comparison of various vehicle datasets with different classifier models.

https://doi.org/10.1371/journal.pone.0304619.t002

Fig 7(A)–7(C) show the results of VC under adverse conditions. Fig 7(D)–7(F) show the results for different vehicle types. Fig 7(G)–7(I) show the results for various colors and models of the vehicles. The sample images are displayed with bounding boxes and labels.

To assess the classification performance of each classifier, confusion matrices are computed. Fig 8 shows the confusion matrix of each classifier on the UFPR-ALPR dataset. From the figure, the confusion matrices for CB and RF are more or less identical; however, CB yields the best performance for the VC task and is less prone to overfitting. Similarly, the VC accuracies of different classifiers, namely SVM, KNN, RF and CB, are compared in Fig 9. RF and KNN attain classification accuracies greater than 85%, while SVM remains below 70%. The proposed method achieves the optimal accuracy of 98.89% using the CB algorithm on the UFPR-ALPR dataset.
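As an illustration of how such a confusion matrix is computed (synthetic data and a single RF model here, since the UFPR-ALPR dataset and the trained models are not reproduced):

```python
# Confusion matrix for a classifier on held-out data: rows are true
# classes, columns are predicted classes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=15, n_informative=8,
                           n_classes=3, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
cm = confusion_matrix(y_te, clf.predict(X_te), labels=[0, 1, 2])
print(cm)
```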

Fig 8. Confusion matrices from UFPR-ALPR dataset showing the relative performance.

(A) CB (B) RF (C) KNN (D) SVM.

https://doi.org/10.1371/journal.pone.0304619.g008

Fig 9. Accuracy comparison of proposed model over different dataset for VC.

https://doi.org/10.1371/journal.pone.0304619.g009

The sensitivity of the proposed machine learning model is shown in Fig 10. The proposed method attains a sensitivity of about 98%, while SVM, KNN and RF attain values of nearly 86%, 77% and 92%, respectively; KNN and RF thus have lower sensitivity than the proposed approach. Specificity is compared in Fig 11: the values are 0.75, 0.83, 0.86 and 0.98 for KNN, RF, SVM and CB, respectively. CB attains higher specificity than the rest of the given classifiers.
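For reference, sensitivity (true positive rate) and specificity (true negative rate) follow directly from the binary confusion-matrix counts; the counts below are hypothetical, not the paper's:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn, tn, fp = 98, 2, 95, 5     # hypothetical counts
print(sensitivity(tp, fn), specificity(tn, fp))
```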

Fig 10. Sensitivity of proposed model over different dataset for VC.

https://doi.org/10.1371/journal.pone.0304619.g010

Fig 11. Specificity of proposed model over different dataset for VC.

https://doi.org/10.1371/journal.pone.0304619.g011

The area under the curve for the VGG16-based approach is shown in Fig 12. The proposed method obtains an area under the curve of 99.56, whereas traditional approaches such as SqueezeNet, ResNeXt50, ResNet10 and GhostNet obtain values of 70.23, 71.78, 84.65 and 88.01, respectively. Hence the proposed approach yields a better result than the existing methods: SqueezeNet and ResNeXt50 attain an area under the curve below 80, whereas ResNet10 and GhostNet attain values above 80.

Fig 12. ROC curves of the test results of the different networks.

https://doi.org/10.1371/journal.pone.0304619.g012

The precision of the proposed approach compared with existing methods such as SqueezeNet, ResNeXt50, ResNet10, GhostNet and VGG16 is shown in Fig 13. The proposed algorithm attains a precision value of 99.98. SqueezeNet and ResNeXt50 lie in the 70s range, whereas ResNet10 and GhostNet lie in the 80s range. Among these existing approaches, VGG16 gains the highest precision value, around 88.7.

Fig 13. Precision comparison of various model over UFPR-ALPR dataset.

https://doi.org/10.1371/journal.pone.0304619.g013

The accuracy of the different networks, namely SqueezeNet, ResNeXt50, ResNet10 and GhostNet, is also compared in the presence of an error rate. The error is applied to test the robustness of the models: if the error rate is high, the performance of a model is weak, and boosting the error up to 2 causes a drop in accuracy performance. Existing networks such as SqueezeNet and ResNet10 lose performance with the inclusion of error, whereas the remaining networks retain a performance of 80%. The performance of the proposed technique is unaffected by the error input. Figs 14 and 15 show the recall and F1 score of the proposed method with different classifiers. SVM and KNN gain less recall than RF: RF obtains a recall of 91.23%, and the proposed method about 93%. The F1 score of SVM is high, at about 95%, but the proposed CB gains 98.89%.
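Precision, recall and F1 score can be computed with scikit-learn as below (illustrative labels only, not the paper's predictions):

```python
# Precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = harmonic mean of both.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
print(p, r, f1)
```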

Fig 14. Recall graph on the complex dataset for VC using VGG16 with different classifiers.

https://doi.org/10.1371/journal.pone.0304619.g014

Fig 15. F1-score graph on various datasets for VC using the ensemble classifier.

https://doi.org/10.1371/journal.pone.0304619.g015

The proposed VC system uses the CB algorithm, which classifies vehicles based on color, inclement weather, type and model. If more parameters are included, the model needs to be re-tested with the specified approach. Feature extraction with equivalent representations and proper handling of variable input sizes improve accuracy, since the input layer used for image classification must have a fixed size. Finally, classification is invoked with CB, which supports categorical features; this lowers information loss and helps avoid overfitting the data.

5. Limitation

Using the ensemble CB model on a complex dataset helps to handle complex image and text data. The stumbling block is that these methods need more computational power because of the various stages of processing, which might delay execution on large-scale datasets. There is also the risk of an intricate model paying too much attention to minor particulars in the training data, which leads to poor performance on varied data when sufficient training data is unavailable.

6. Challenges and future scope

Though there are dominant mechanisms for vehicle classification using CNNs, there are some bottlenecks in employing these methods. The major hindrance to overcome in vehicle classification is the need for intensive training of neural network methods: accurate training is mandatory for the processor to perform precise investigation and interpretation, and this consumes a great deal of time. To overcome these issues, the hardware must be robust and potent; robust hardware and a GUI are the main prerequisites for vehicle detection and classification using deep models. Another obstacle is the high processing time and power. Variance in vehicle dimensions in a complex environment is a common provocation to be faced: image details smaller than a millimeter are generally difficult to analyze and lead to errors in the early stage of detection, and it can be very tedious to differentiate shape, color and texture in many cases. It is highly challenging to categorize and analyze vehicle images due to inter-class variation. Some datasets may also be uneven: many vehicle images are plentiful, but some are uncommon, so recognizing vehicles from visual features alone is greatly challenging. Image segmentation under crucial environments is the greatest task; therefore, specialized algorithms are used for the segmentation processes. Nowadays, emerging research on vehicle detection and classification is ever more pivotal, though current research is mostly focused on specific problems of image classification. To address traffic-police and toll-collection concerns, future research on vehicle classification may focus on amalgamating entire images of vehicles; fetching uninterrupted shots will accelerate the image-acquisition process. The recently organized concept of pre-programmed mechanization highlights unsupervised learning to recognize features and to inquire into comparisons between individual images and the dataset. More research and studies are being carried out on deep models.

Future research could focus on numerous approaches to enrich the classification of vehicle images using advanced CNNs with optimization algorithms: expanding the complex vehicle datasets, fusing additional modalities such as local transportation data, developing interpretable decision-making models, analyzing transfer learning and domain versatility, incorporating virtual and real learning processes into applications, amplifying the algorithms for real-time use on wireless handsets, and working on validation processes. Using transfer learning, pre-trained models are adapted to the specific parameters of a dataset; depending on the application, the model is then trained on a compact dataset. These methods are used to learn mobility-visualization features from source to destination, which further reduces the quantity of features in major areas. Few-shot learning is useful for learning from small numbers of samples, whereas processing large quantities of small data samples makes the system costlier due to sampling fewer classes. For tackling real-time image classification and the scarcity of labeled data, a combination of few-shot and transfer learning is applicable. The major objective of this future work is to optimize the efficiency, applicability and power consumption of models for detecting and classifying vehicles under various factors.

7. Conclusion

In this paper, the CB algorithm is proposed for VC over different complex datasets. Initially, the vehicle images are pre-processed with a contrast enhancement technique. The external and internal features are segmented using Mask R-CNN, and the VGG16 DL model is used for extracting the nominal features; from the segmented vehicle images, the most prominent features are extracted. An autoencoder is used for feature selection, which results in compressed data. Finally, CB is implemented for classifying the vehicles by type, color and model under adverse weather. The experiments are carried out over different large-scale vehicle datasets with various classifiers. The performance of the proposed method is evaluated using metrics such as accuracy, recall, specificity, precision, sensitivity and F1-score, and a comparative analysis is conducted with existing approaches. By using the explicit operational relationships of vehicle images, an accuracy of 98.89% is achieved for VC. The experimental results on complex large-scale vehicle datasets demonstrate that the CB algorithm surpasses existing classifiers. As a further extension, the experiment can be carried out in hardware implementations.

References

  1. Li S. P., Yu K. M., Yeung Y. C., and Keung K. L. "Patent review and novel design of vehicle classification system with TRIZ." World Patent Information 71 (2022): 102155.
  2. Li S., Chen J., Peng W., Shi X., & Bu W. (2023). A vehicle detection method based on disparity segmentation. Multimedia Tools and Applications, 82(13), 19643–19655.
  3. Zhao X., Fang Y., Min H., Wu X., Wang W., … Teixeira R. (2024). Potential sources of sensor data anomalies for autonomous vehicles: An overview from road vehicle safety perspective. Expert Systems with Applications, 236, 121358. https://doi.org/10.1016/j.eswa.2023.121358
  4. Jiang H., Chen S., Xiao Z., Hu J., Liu J., … Dustdar S. (2023). Pa-Count: Passenger Counting in Vehicles Using Wi-Fi Signals. IEEE Transactions on Mobile Computing.
  5. Xiao Z., Li H., Jiang H., Li Y., Alazab M., Zhu Y., et al. (2023). Predicting Urban Region Heat via Learning Arrive-Stay-Leave Behaviors of Private Cars. IEEE Transactions on Intelligent Transportation Systems, 24(10), 10843–10856.
  6. Xiao Z., Shu J., Jiang H., Min G., Chen H., … Han Z. (2023). Overcoming Occlusions: Perception Task-Oriented Information Sharing in Connected and Autonomous Vehicles. IEEE Network, 37(4), 224–229.
  7. Xiao Z., Shu J., Jiang H., Min G., Liang J., … Iyengar A. (2024). Toward Collaborative Occlusion-Free Perception in Connected Autonomous Vehicles. IEEE Transactions on Mobile Computing, 23(5), 4918–4929.
  8. Deng Z. W., Zhao Y. Q., Wang B. H., Gao W., & Kong X. (2022). A preview driver model based on sliding-mode and fuzzy control for articulated heavy vehicle. Meccanica, 57(8), 1853–1878.
  9. Deng Z., Jin Y., Gao W., & Wang B. (2022). A closed-loop directional dynamics control with LQR active trailer steering for articulated heavy vehicle. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 237(12), 2741–2758.
  10. Sheng H., Wang S., Chen H., Yang D., Huang Y., Shen J., et al. (2023). Discriminative Feature Learning with Co-occurrence Attention Network for Vehicle ReID. IEEE Transactions on Circuits and Systems for Video Technology.
  11. Feng J., Wang Y., & Liu Z. (2024). Joint impact of service efficiency and salvage value on the manufacturer’s shared vehicle-type strategies. RAIRO-Operations Research. https://doi.org/10.1051/ro/2024082
  12. Cao B., Li Z., Liu X., Lv Z., & He H. (2023). Mobility-Aware Multiobjective Task Offloading for Vehicular Edge Computing in Digital Twin Environment. IEEE Journal on Selected Areas in Communications, 41(10), 3046–3055.
  13. Kussl Sebastian, Omberg Kristian, and Lekang Odd-Ivar. "Advancing Vehicle Classification: A Novel Framework for Type, Model, and Fuel Identification Using Non-Visual Sensor Systems for Seamless Data Sharing." IEEE Sensors Journal (2023).
  14. Fang Z., Liang J., Tan C., Tian Q., Pi D., … Yin G. (2024). Enhancing Robust Driver Assistance Control in Distributed Drive Electric Vehicles through Integrated AFS and DYC Technology. IEEE Transactions on Intelligent Vehicles.
  15. Zhao J., Song D., Zhu B., Sun Z., Han J., … Sun Y. (2023). A Human-Like Trajectory Planning Method on a Curve Based on the Driver Preview Mechanism. IEEE Transactions on Intelligent Transportation Systems, 24(11), 11682–11698.
  16. Jiang Y., Yang Y., Xu Y., & Wang E. (2023). Spatial-Temporal Interval Aware Individual Future Trajectory Prediction. IEEE Transactions on Knowledge and Data Engineering.
  17. Chen B., Hu J., Zhao Y., & Ghosh B. K. (2022). Finite-time observer based tracking control of uncertain heterogeneous underwater vehicles using adaptive sliding mode approach. Neurocomputing, 481, 322–332. https://doi.org/10.1016/j.neucom.2022.01.038
  18. Guo C., Hu J., Hao J., Čelikovský S., & Hu X. (2023). Fixed-time safe tracking control of uncertain high-order nonlinear pure-feedback systems via unified transformation functions. Kybernetika, 59(3), 342–364.
  19. Mohammadzadeh A., Taghavifar H., Zhang C., Alattas K. A., Liu J., … Vu M. T. (2024). A non-linear fractional-order type-3 fuzzy control for enhanced path-tracking performance of autonomous cars. IET Control Theory & Applications, 18(1), 40–54. https://doi.org/10.1049/cth2.12538
  20. Mou J., Gao K., Duan P., Li J., Garg A., … Sharma R. (2023). A Machine Learning Approach for Energy-Efficient Intelligent Transportation Scheduling Problem in a Real-World Dynamic Circumstances. IEEE Transactions on Intelligent Transportation Systems, 24(12), 15527–15539.
  21. Qu Z., Liu X., & Zheng M. (2022). Temporal-Spatial Quantum Graph Convolutional Neural Network Based on Schrödinger Approach for Traffic Congestion Prediction. IEEE Transactions on Intelligent Transportation Systems.
  22. Xiao Z., Fang H., Jiang H., Bai J., Havyarimana V., Chen H., et al. (2023). Understanding Private Car Aggregation Effect via Spatio-Temporal Analysis of Trajectory Data. IEEE Transactions on Cybernetics, 53(4), 2346–2357. pmid:34653012
  23. Sun R., Dai Y., & Cheng Q. (2023). An Adaptive Weighting Strategy for Multisensor Integrated Navigation in Urban Areas. IEEE Internet of Things Journal, 10(14), 12777–12786.
  24. Kumar T. Vinoth, Ajay Reddy Yeruva, Sumeet Kumar, Durgaprasad Gangodkar, A. L. N. Rao, and Prateek Chaturvedi. "A New Vehicle Tracking System with R-CNN and Random Forest Classifier for Disaster Management Platform to Improve Performance." In 2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS), pp. 797–804. IEEE, 2022.
  25. Mohine Shailesh, Babankumar S. Bansod, Bhalla Rakesh, and Basra Anshul. "Acoustic modality based hybrid deep 1D CNN-BiLSTM algorithm for moving vehicle classification." IEEE Transactions on Intelligent Transportation Systems 23, no. 9 (2022): 16206–16216.
  26. Wu W., Zhu H., Yu S., & Shi J. (2019). Stereo Matching With Fusing Adaptive Support Weights. IEEE Access, 7, 61960–61974.
  27. Wu Z., Zhu H., He L., Zhao Q., Shi J., … Wu W. (2023). Real-time stereo matching with high accuracy via Spatial Attention-Guided Upsampling. Applied Intelligence, 53(20), 24253–24274. https://doi.org/10.1007/s10489-023-04646-w
  28. Fu Z., Hu M., Guo Q., Jiang Z., Guo D., … Liao Z. (2023). Research on anti-rollover warning control of heavy dump truck lifting based on sliding mode-robust control. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering.
  29. Zhao L., Qu S., Xu H., Wei Z., & Zhang C. (2024). Energy-efficient trajectory design for secure SWIPT systems assisted by UAV-IRS. Vehicular Communications, 45, 100725. https://doi.org/10.1016/j.vehcom.2023.100725
  30. Zhao L., Xu H., Qu S., Wei Z., & Liu Y. (2024). Joint Trajectory and Communication Design for UAV-Assisted Symbiotic Radio Networks. IEEE Transactions on Vehicular Technology.
  31. Guo C., Hu J., Wu Y., & Čelikovský S. (2023). Non-Singular Fixed-Time Tracking Control of Uncertain Nonlinear Pure-Feedback Systems With Practical State Constraints. IEEE Transactions on Circuits and Systems I: Regular Papers, 70(9), 3746–3758.
  32. Liu W., Bai X., Yang H., Bao R., & Liu J. (2024). Tendon driven bistable origami flexible gripper for high-speed adaptive grasping. IEEE Robotics and Automation Letters.
  33. Cai Z., Zhu X., Gergondet P., Chen X., & Yu Z. (2023). A Friction-Driven Strategy for Agile Steering Wheel Manipulation by Humanoid Robots. Cyborg and Bionic Systems, 4, 64. pmid:38435676
  34. Ren Y., Lan Z., Liu L., & Yu H. (2024). EMSIN: Enhanced Multi-Stream Interaction Network for Vehicle Trajectory Prediction. IEEE Transactions on Fuzzy Systems.
  35. Wang Zhangu, Zhan Jun, Duan Chunguang, Guan Xin, Lu Pingping, and Yang Kai. "A review of vehicle detection techniques for intelligent vehicles." IEEE Transactions on Neural Networks and Learning Systems (2022).
  36. Chen J., Xu M., Xu W., Li D., Peng W., … Xu H. (2023). A Flow Feedback Traffic Prediction Based on Visual Quantified Features. IEEE Transactions on Intelligent Transportation Systems, 24(9), 10067–10075.
  37. Zhang Peng, Zheng Jun, Lin Hailun, Liu Chen, Zhao Zhuofeng, and Li Chao. "Vehicle Trajectory Data Mining for Artificial Intelligence and Real-Time Traffic Information Extraction." IEEE Transactions on Intelligent Transportation Systems (2023).
  38. Chen J., Wang Q., Peng W., Xu H., Li X., … Xu W. (2022). Disparity-Based Multiscale Fusion Network for Transportation Detection. IEEE Transactions on Intelligent Transportation Systems, 23(10), 18855–18863.
  39. Roy Debashri, Li Yuanyuan, Jian Tong, Tian Peng, Kaushik Roy Chowdhury, and Stratis Ioannidis. "Multi-modality sensing and data fusion for multi-vehicle detection." IEEE Transactions on Multimedia (2022).
  40. Sun G., Song L., Yu H., Chang V., Du X., … Guizani M. (2019). V2V Routing in a VANET Based on the Autoregressive Integrated Moving Average Model. IEEE Transactions on Vehicular Technology, 68(1), 908–922.
  41. Chetouane Ameni, Mabrouk Sabra, Jemili Imen, and Mosbah Mohamed. "Vision-based vehicle detection for road traffic congestion classification." Concurrency and Computation: Practice and Experience 34, no. 7 (2022): e5983.
  42. Sun G., Zhang Y., Liao D., Yu H., Du X., … Guizani M. (2018). Bus-Trajectory-Based Street-Centric Routing for Message Delivery in Urban Vehicular Ad Hoc Networks. IEEE Transactions on Vehicular Technology, 67(8), 7550–7563.
  43. Fang Z., Wang J., Liang J., Yan Y., Pi D., Zhang H., et al. (2024). Authority Allocation Strategy for Shared Steering Control Considering Human-Machine Mutual Trust Level. IEEE Transactions on Intelligent Vehicles, 9(1), 2002–2015.
  44. Sun G., Sheng L., Luo L., & Yu H. (2022). Game Theoretic Approach for Multipriority Data Transmission in 5G Vehicular Networks. IEEE Transactions on Intelligent Transportation Systems, 23(12), 24672–24685.
  45. Ahmad Ahmad Bahaa, Saibi Hakim, Abdelkader Nasreddine Belkacem, and Takeshi Tsuji. "Vehicle Auto-Classification Using Machine Learning Algorithms Based on Seismic Fingerprinting." Computers 11, no. 10 (2022): 148.
  46. Sun G., Zhang Y., Yu H., Du X., & Guizani M. (2020). Intersection Fog-Based Distributed Routing for V2V Communication in Urban Vehicular Ad Hoc Networks. IEEE Transactions on Intelligent Transportation Systems, 21(6), 2409–2426.
  47. Rong Leilei, Xu Yan, Zhou Xiaolei, Han Lisu, Li Linghui, and Pan Xuguang. "A vehicle re-identification framework based on the improved multi-branch feature fusion network." Scientific Reports 11, no. 1 (2021): 20210. pmid:34642439
  48. Joshua Ishola Oluwaseun, Michael Olaolu Arowolo, Marion O. Adebiyi, Ogundokun Roseline Oluwaseun, and Kazeem Alagbe Gbolagade. "Development of an Image Processing Techniques for Vehicle Classification Using OCR and SVM." In 2023 International Conference on Science, Engineering and Business for Sustainable Development Goals (SEB-SDG), vol. 1, pp. 1–9. IEEE, 2023.
  49. Won Myounggyu. "Intelligent traffic monitoring systems for vehicle classification: A survey." IEEE Access 8 (2020): 73340–73358.
  50. Mary Leena, and Koshy Bino I. "Detection and classification of vehicles using audio visual cues." Multimedia Tools and Applications (2023): 1–20.
  51. Ghosh Rajib. "An Improved You Only Look Once Based Intelligent System for Moving Vehicle Detection." International Journal of Intelligent Transportation Systems Research (2023): 1–9.
  52. Yang Hao Frank, Cai Jiarui, Liu Chenxi, Ke Ruimin, and Wang Yinhai. "Cooperative multi-camera vehicle tracking and traffic surveillance with edge artificial intelligence and representation learning." Transportation Research Part C: Emerging Technologies 148 (2023): 103982.
  53. Vinothini K., K. S. Harshavardhan, J. Amerthan, and M. Harish. "Fault Detection of Electric Vehicle Using Machine Learning Algorithm." In 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC), pp. 878–881. IEEE, 2022.
  54. Sharma Kritika Raj, Tripti Sharma, and Nitin Mittal. "Naïve-Bayes: Classification algorithm for smart parking management system." In 2023 10th International Conference on Computing for Sustainable Global Development (INDIACom), pp. 1070–1075. IEEE, 2023.
  55. Kanona Mohammed EA, Mohamad Y. Alias, Mohamed Khalafalla Hassan, Khalid S. Mohamed, Khairi Mutaz HH, Mosab Hamdan, et al. "A Machine Learning Based Vehicle Classification in Forward Scattering Radar." IEEE Access 10 (2022): 64688–64700.
  56. Kakade Pravin V., and Lakshi Prosad Roy. "Fast Classification for Identification of Vehicles on the Road from Audio Data of Pedestrian’s Mobile Phone." In 2022 IEEE 19th India Council International Conference (INDICON), pp. 1–7. IEEE, 2022.
  57. Bhatlawande Shripad, Shilaskar Swati, and Dhanawade Amol. "LIDAR based Detection of Small Vehicles." In 2022 3rd International Conference for Emerging Technology (INCET), pp. 1–5. IEEE, 2022.
  58. Rani Preeti, and Sharma Rohit. "Intelligent transportation system for internet of vehicles based vehicular networks for smart cities." Computers and Electrical Engineering 105 (2023): 108543.
  59. Pemila M., R. K. Pongiannan, and V. Megala. "Implementation of Vehicles Classification using Extreme Gradient Boost Algorithm." In 2022 Second International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT), pp. 1–6. IEEE, 2022.
  60. Kanschat Raoul, Gupta Shivam, and Degbelo Auriol. "Wireless-Signal-Based Vehicle Counting and Classification in Different Road Environments." IEEE Open Journal of Intelligent Transportation Systems 3 (2022): 236–250.
  61. Ong Ardvin Kester S., Lara Nicole Z. Cordova, Franscine Althea B. Longanilla, Neallo L. Caprecho, Rocksel Andry V. Javier, Riañina D. Borres, et al. "Purchasing Intentions Analysis of Hybrid Cars Using Random Forest Classifier and Deep Learning." World Electric Vehicle Journal 14, no. 8 (2023): 227.
  62. Pemila M., R. K. Pongiannan, Venkatesh Pandey, Prasun Mondal, and Saumyarup Bhaumik. "An Efficient Classification for Light Motor Vehicles using CatBoost Algorithm." In 2023 Fifth International Conference on Electrical, Computer and Communication Technologies (ICECCT), pp. 01–07. IEEE, 2023.
  63. Bhaskar Navaneeth, Bairagi Vinayak, Munot Mousami V., Gaikwad Kaustubh M., and Jadhav Sharad T. "Automated COVID-19 Detection From Exhaled Human Breath Using CNN-CatBoost Ensemble Model." IEEE Sensors Letters (2023).
  64. Sadhu Vidyasagar, Anjum Khizar, and Pompili Dario. "On-Board Deep-Learning-Based Unmanned Aerial Vehicle Fault Cause Detection and Classification via FPGAs." IEEE Transactions on Robotics (2023).
  65. Hasegawa Kento, Takasaki Kazunari, Nishizawa Makoto, Ishikawa Ryota, Kawamura Kazushi, and Togawa Nozomu. "Implementation of a ROS-based autonomous vehicle on an FPGA board." In 2019 International Conference on Field-Programmable Technology (ICFPT), pp. 457–460. IEEE, 2019.
  66. Khanna M., Singh L.K., Thawkar S. and Goyal M. Deep learning based computer-aided automatic prediction and grading system for diabetic retinopathy. Multimedia Tools and Applications, 82(25), pp. 39255–39302. (2023).
  67. Khanna M., Agarwal A., Singh L.K., Thawkar S., Khanna A. and Gupta D. Radiologist-level two novel and robust automated computer-aided prediction models for early detection of COVID-19 infection from chest X-ray images. Arabian Journal for Science and Engineering, 48(8), pp. 11051–11083. (2023).
  68. Khanna M., Singh L.K., Thawkar S. and Goyal M. "PlaNet: a robust deep convolutional neural network model for plant leaves disease recognition." Multimedia Tools and Applications, 83(2), pp. 4465–4517. (2024).