Abstract
The automatic sorting of construction waste (CW) is an essential procedure in CW recycling due to its remarkable efficiency and safety. The classification of CW is the primary task that guides automatic and precise sorting. In our work, a new method of CW classification based on two-level fusion is proposed to improve classification performance. First, statistical histograms are used to obtain global hue information and local oriented gradients, called the hue histogram (HH) and the histogram of oriented gradients (HOG), respectively. To fuse these visual features, a bag-of-visual-words (BoVW) method is applied to code the HOG descriptors of a CW image as a vector, a process we name B-HOG. Then, based on feature-level fusion, we define a new feature that combines HH and B-HOG, which represent the global and local visual characteristics of an object in a CW image. Furthermore, two base classifiers are used to learn the information from the color feature space and the new feature space. Based on decision-level fusion, we propose a joint decision-making model that combines the decisions from the two base classifiers into the final classification result. Finally, to verify the performance of the proposed method, we collect five types of CW images as the experimental data set and use these images to conduct experiments with three different base classifiers. Moreover, we compare this method with other extant methods. The results demonstrate that our method is effective and feasible.
Citation: Song L, Zhao H, Ma Z, Song Q (2022) A new method of construction waste classification based on two-level fusion. PLoS ONE 17(12): e0279472. https://doi.org/10.1371/journal.pone.0279472
Editor: Vijayalakshmi G. V. Mahesh, BMS Institute of Technology and Management, INDIA
Received: June 25, 2022; Accepted: December 7, 2022; Published: December 27, 2022
Copyright: © 2022 Song et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This work was supported in part by Key Research and Development Project of Shaanxi Province (2020SF-367, 2020GY-186), Science and Technology Fund of Xi’an University of Architecture and Technology (ZR21034), Key Research and Development Project of Shaanxi Construction Engineering Holding Group (20211177-ZKT05). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
As a global issue, construction waste (CW) obstructs the sustainable development of the construction industry. For instance, in the European Union, the construction sector generates over 500 million metric tons of CW per year, accounting for 50% of the waste produced in the EU [1]. China, as a rapidly developing country, is suffering from the issue of increasing CW. In China, researchers have found that CW accounts for 30%-40% of total urban waste [2]. As urbanization accelerates and the population increases, the amount of CW will continue to grow. Formerly, CW was collected and stacked in landfills, a practice that occupies land, pollutes groundwater, and contaminates the air. In Nigeria, Modu et al. [3] considered recycling methods an effective and sustainable strategy for solid waste management. Recycling waste materials not only provides economic benefits but also minimizes environmental issues. In China, according to the composition of CW, experts estimate that 95% of CW can be reused. However, due to the limited effect of CW classification, the utilization rate is less than 5% in complex real-life scenarios [4]. In harsh operating environments, manual sorting is accompanied by a number of issues regarding health, safety, efficiency, and expense. These problems reduce the propensity to recycle CW from landfills. As a significant procedure in CW recycling practices, CW classification methods are rapidly being adjusted due to legislative and economic drivers [5].
With the evolution of technology, classification approaches based on deep learning have been used in various fields. Talo et al. [6] used ResNet-50 to classify histopathology images and obtained good results. To improve the performance of image denoising, Zheng et al. [7] proposed a denoising CNN, which consists of a dilated block, a RepVGG block, a feature refinement block, and a single convolution. In addition, a pure transformer applied directly to sequences of image patches has performed very well on image classification tasks [8]. Davis et al. [9] proposed a deep convolutional neural network to identify typical CW. However, the superior adaptability of deep learning models relies on large datasets. On small or medium-sized datasets, methods with handcrafted features may outperform deep learning methods. For example, Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) have been integrated for image retrieval [10], since SIFT is robust to changes in scale and rotation, while SURF is robust to changes in illumination. Furthermore, features play an important role in improving classification accuracy [11]. Therefore, researchers have combined BoVW with feature descriptors to improve performance. Abouzahir et al. [12] used BoVW to represent HOG descriptors in weed detection; the results demonstrated that using BoVW can improve the performance of HOG. Rao et al. [13] used the BoVW technique to label SIFT features extracted from X-ray images as fractured or non-fractured. Local binary patterns (LBP) and the BoVW model were combined for detecting soybean diseases [14]. Aslan et al. [15] used the BoVW technique to reinforce SURF features in human activity recognition.
Recently, with the development of sensing technology, many researchers have been working on the automatic classification of CW through crushing, magnetic separation, computer vision, and other processes. Among these methods, computer vision stands out for its rapidity, low cost, and applicability in different scenes. For example, Setiawan et al. [16] designed a method to identify organic garbage. Lee et al. [17] proposed a method based on machine learning to separate reinforcing-bar areas from the background. Using image processing technology, Zhao et al. [18] built hardware and software systems to identify steel bars. However, each of these methods can identify only one specific material, so their applicability in industry is limited because various CW materials are collected in landfills. To automatically detect CW, Rashidi et al. [19] extracted the color histogram and the dominant edge histogram of three building materials and evaluated the performance of the proposed method with three different machine learning techniques (the multilayer perceptron, radial basis function, and support vector machine (SVM) methods). Xiao et al. [20, 21] used near-infrared hyperspectral technology to capture CW images and extract the characteristic reflectivity. In this study, according to the requirements of the management department, CW is divided into concrete, brick, plastic, foam, and wood. We use an industrial camera to capture the CW images; compared with near-infrared hyperspectral cameras, industrial cameras reduce costs for users. To obtain complementary information, features are extracted from the global and local views of the CW images, and an efficient classification model based on feature-level fusion and decision-level fusion is proposed to improve the performance of CW classification.
A preliminary version of this work was presented at CCC2022. In this paper, we substantially revise and extend the original paper. The main extensions include using color features to divide CW into salient objects (i.e., bricks and wood) and other objects, and building a joint decision model based on a fusion mechanism. These extensions substantially improve the classification performance. The remainder of this paper is organized as follows. The capturing system and the CW classification framework based on two-level fusion are described in section 2. In section 3, we evaluate the performance of the proposed method and compare it with others. Finally, we summarize our work and give some remarks on future research directions in section 4. With our method, the automatic sorting of CW can be realized.
2. Materials and methods
2.1 Capturing system
For image acquisition, we use an industrial camera fixed on a metal frame located directly above the belt. To ensure that the objects on the belt can be captured, the height of the metal frame is adjustable, which avoids capturing extraneous features that would influence the experimental results. The data acquisition system is shown in Fig 1. The speed of the conveyor belt varies with the needs of users and is controlled by a governor that regulates the motor. In this experiment, the conveyor belt runs at 100 mm per second, and the industrial camera frame rate is 10 fps. The screen is used to display the CW images and experimental results.
(a) System structure. (b) Real system.
2.2 Experimental materials
In this study, we classify five materials provided by a construction management department: concrete, brick, plastic, foam, and wood. We capture 125 pictures of each class using the capturing system, and 100 samples of each type are used as training data. These samples were picked from a waste collection site and were not cleaned, so each sample contains pollution. The images are labeled as arrays of 0 and 1 values. Each label array has five elements: if the image belongs to class i, the i-th element is 1 and the others are 0. The sample images and corresponding labels of the five types of CW materials are shown in Fig 2. The appearance of the concrete is rough and white. Plastic, a common CW, has a smooth surface. Foam can appear in different colors because other materials often adhere to its surface. Wood has a rectangular appearance. The color of bricks depends on their constituents; they are often orange or red. These materials are used in different construction stages.
The first row is the label, and the others are images.
2.3 Extracting visual features
(1) The global hue features.
Color is a comparatively robust visual feature that is invariant to object scale and position [22]. Similarly, color information is a main feature that allows humans to recognize objects. Since the hue-saturation-value (HSV) model is consistent with the characteristics of the human visual system, the HSV feature is one of the main features for pattern recognition [23, 24]. The HSV color space can be described as a conical geometry with three parameters, i.e., hue, saturation, and value [25]. In this method, we use the information from the hue channel to classify CW. We convert the image captured by the industrial camera from RGB space to HSV space. Next, we extract the hue information from the global perspective of the image. Finally, HH is used to describe the color information of the image. The algorithm flow for extracting HH is shown in Algorithm 1.
Algorithm 1: Extract HH
Input: Digital images
Output: Hue histogram (HH) of the images
Step 1: Convert the image from RGB color space to HSV color space.
Step 2: For each image, extract the hue information in the HSV color space.
Step 3: Use the histograms to represent the hue information of the CW images.
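As a concrete illustration of Algorithm 1, the sketch below computes a normalized hue histogram with plain NumPy. The RGB-to-hue conversion follows the standard HSV formula; the 64-bin resolution is our choice for illustration, not a value reported in the paper.

```python
import numpy as np

def rgb_to_hue(img):
    """Hue channel in [0, 1), from an RGB image scaled to [0, 1] (standard HSV formula)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    diff = np.where(mx > mn, mx - mn, 1.0)          # avoid division by zero on gray pixels
    h = np.zeros_like(mx)
    h = np.where(mx == r, ((g - b) / diff) % 6, h)
    h = np.where(mx == g, (b - r) / diff + 2, h)
    h = np.where(mx == b, (r - g) / diff + 4, h)
    return np.where(mx > mn, h / 6.0, 0.0)

def extract_hh(img, bins=64):
    """Steps 1-3 of Algorithm 1: hue channel -> normalized histogram (HH)."""
    hue = rgb_to_hue(img.astype(float) / 255.0)
    hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

red = np.zeros((8, 8, 3), dtype=np.uint8)
red[..., 0] = 255                  # a pure-red image has hue 0
print(extract_hh(red)[0])          # 1.0: all mass falls in the first bin
```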
(2) The locally oriented gradient features.
The histogram of oriented gradients (HOG) is a good feature for describing the shape and texture information of an object [26]. The process of computing HOG starts by dividing an image into cells and grouping the cells into blocks [23]. In each block, we calculate the gradient magnitude and gradient orientation, as shown in Eqs (1) and (2):

m(x, y) = √(m_x² + m_y²)  (1)

θ(x, y) = arctan(m_y / m_x)  (2)

where m_y and m_x are the vertical and horizontal gradients computed by the 1-D filter, and m(x, y) and θ(x, y) represent the gradient magnitude and gradient orientation of pixel (x, y), respectively. The detailed processing is described in [27], and we name it EXTRACTHOG(·).
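Eqs (1) and (2) can be sketched numerically as follows. The centred 1-D difference filter [-1, 0, 1] is the standard HOG choice, and `arctan2` is used here for a full-range orientation; both are our assumptions about the exact implementation.

```python
import numpy as np

def gradients(gray):
    """Per-pixel gradient magnitude (Eq 1) and orientation (Eq 2) of a grayscale image."""
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]    # horizontal gradient m_x, filter [-1, 0, 1]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]    # vertical gradient m_y
    magnitude = np.sqrt(gx**2 + gy**2)          # Eq (1)
    orientation = np.arctan2(gy, gx)            # Eq (2), full-range arctangent
    return magnitude, orientation

gray = np.tile(np.arange(8, dtype=float), (8, 1))   # horizontal ramp image
mag, ori = gradients(gray)
print(mag[4, 4], ori[4, 4])                         # 2.0 0.0: constant horizontal slope
```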
The traditional HOG model of an image is constructed from the HOG descriptors of its blocks. It usually does not provide good classification performance due to redundant information. To reduce the dimensionality while maintaining the discriminative power of HOG, a BoVW method is used to code the HOG models of all blocks in an image. We name the new feature B-HOG. Assume there are N classes of CW images and each class has M images for training, denoted as I_ij (i = 1,…,N; j = 1,…,M). The procedure for extracting B-HOG is shown in Algorithms 2 and 3.
Algorithm 2: Build a bag on training images
Input: Training images {I_ij}, parameters K, L
Output: Bag of visual words B
Initial: the bag of the i-th class B(i) = [ ], the whole bag B = [ ]
For i = 1:N
  For j = 1:M
    Divide I_ij into L blocks b_jl, l = 1,…,L
    For l ← 1 to L do:
      B(i) ← [B(i); EXTRACTHOG(b_jl)]
    End
  End
  Use K-means on B(i) to obtain K centers and append them to B
End
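A minimal sketch of Algorithm 2, assuming scikit-learn's `KMeans`. The random vectors stand in for the EXTRACTHOG descriptors of the L blocks of each training image, and all sizes (N, M, L, K, descriptor dimension D) are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N, M, L, K, D = 5, 10, 16, 8, 36     # classes, images/class, blocks/image, words/class, HOG dim

bag = []                              # the whole bag B
for i in range(N):
    # stack the HOG descriptors of all blocks of all images of class i (stand-in data here)
    class_descriptors = rng.random((M * L, D))
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(class_descriptors)
    bag.append(km.cluster_centers_)   # K visual words per class
bag = np.vstack(bag)                  # shape (K*N, D): the K*N-word vocabulary
print(bag.shape)                      # (40, 36)
```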
Algorithm 3: Coding on the whole HOG descriptors
Input: Bag of visual words B, parameters K, L, the target image It
Output: Coding vector vt
Initial: vt ← zeros(1, K×N)
Step 1: Divide It into L blocks bl, l = 1,…,L.
Step 2: For l ← 1 to L do:
  bl ← EXTRACTHOG(bl)
 End for
Step 3: For each descriptor bl, find its nearest visual word in B and add 1 to the corresponding element of vt.
Step 4: Return vt
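Algorithm 3 can be sketched as a hard-assignment coding step: each block descriptor votes for its nearest visual word, which is the usual BoVW choice and our assumption here.

```python
import numpy as np

def code_image(block_descriptors, bag):
    """block_descriptors: (L, D) HOG descriptors of one image; bag: (K*N, D) visual words."""
    v = np.zeros(bag.shape[0])
    # pairwise squared distances between block descriptors and visual words
    d2 = ((block_descriptors[:, None, :] - bag[None, :, :]) ** 2).sum(-1)
    for w in d2.argmin(axis=1):       # nearest visual word for each block
        v[w] += 1                     # one vote per block (Step 3)
    return v

rng = np.random.default_rng(1)
bag = rng.random((40, 36))            # stand-in vocabulary (K*N = 40 words)
v = code_image(rng.random((16, 36)), bag)   # stand-in image with L = 16 blocks
print(v.sum())                        # 16.0: one vote per block
```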
2.4 Feature-level fusion
As mentioned before, the color feature and the HOG descriptors are extracted from multiple views, so they provide complementary information for CW classification. To combine this information, we construct a new feature based on feature-level fusion and name it the Color-HOG feature. The overall flow is illustrated in Fig 3. For one sample image, we convert it to HSV color space and extract the hue information from the HSV model. Then, the HOG descriptors are extracted from the local perspective of the image and coded by the BoVW method. Finally, the new Color-HOG feature F can be obtained through Eq (3):

F = [α·HH, β·B-HOG]  (3)

where HH and B-HOG indicate the color feature and the coded HOG descriptors of the image, respectively, and α and β are the weights that balance them. In this experiment, both α and β are set to 1.
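Under this notation, feature-level fusion reduces to a weighted concatenation, sketched below with illustrative dimensions (both weights set to 1, as in the paper's experiments).

```python
import numpy as np

def color_hog(hh, b_hog, alpha=1.0, beta=1.0):
    """Eq (3): concatenate the weighted hue histogram and B-HOG vector."""
    return np.concatenate([alpha * hh, beta * b_hog])

hh = np.ones(64) / 64                 # stand-in hue histogram (64 bins)
b_hog = np.zeros(40)                  # stand-in B-HOG coding vector (K*N = 40 words)
b_hog[3] = 16
f = color_hog(hh, b_hog)
print(f.shape)                        # (104,): global color part followed by local HOG part
```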
2.5 Decision-level fusion
It is observed that bricks and wood are more color-salient than the other materials. This interesting phenomenon can help to distinguish salient materials from the others. Therefore, two base classifiers are used to learn the information from the HH feature space and the Color-HOG feature space, and we design a joint decision model to combine the decisions from the two feature spaces. The flowchart of CW classification is shown in Fig 4.
Inspired by the fusion mechanism, the new joint decision-making model is given in Eq (4). Let P_HH and P_CH be the outputs of the base classifiers learned from the HH feature space and the Color-HOG feature space, respectively. It is noteworthy that P_HH is a vector containing two probability values, which are used to determine whether the object belongs to the categories with salient color. P_CH is a vector containing five elements, each representing the probability that the sample belongs to one category of CW. ℝ is a transition matrix that maps the two salient/non-salient probabilities onto the five categories, and D is the result of the decision-making model:

D = ω1·(ℝ·P_HH) + ω2·P_CH  (4)

ω1 + ω2 = 1  (5)

where ω1 and ω2 are the fusion weights of P_HH and P_CH. When ω1 = 0, the evidence from the Color-HOG feature space is completely credible. In this case, we classify CW based on feature-level fusion alone and name the method feature-level fusion-based CW classification (FLF). When ω1 = 1, the decision model indicates that only the HH feature is used to classify CW. In this case, since the HH feature can only distinguish salient objects (i.e., bricks and wood) from the other materials, we cannot obtain an explicit label. For ω1 ∈ (0, 1), the CW classification method is based on both feature-level fusion and decision-level fusion, which we denote as the two-level fusion-based CW classification method (TLF). It can be seen that FLF is a special case of TLF. In the next subsection, we investigate the influence of tuning the parameter ω1 on the TLF algorithm.
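A minimal sketch of the joint decision rule under our reading: a transition matrix spreads the two-element salient/non-salient output of the color classifier over the five categories (bricks and wood being the salient classes) before it is mixed with the Color-HOG classifier's output. The variable names and the uniform entries of `R` are illustrative assumptions, not values from the paper.

```python
import numpy as np

CLASSES = ["concrete", "brick", "plastic", "foam", "wood"]
# Transition matrix: "salient" mass -> brick/wood, "non-salient" mass -> the rest
R = np.array([[0.0, 0.5, 0.0, 0.0, 0.5],      # salient -> brick, wood
              [1/3, 0.0, 1/3, 1/3, 0.0]]).T   # other   -> concrete, plastic, foam

def joint_decision(p_hh, p_ch, w1=0.5):
    """Fuse the 2-dim color output p_hh and 5-dim Color-HOG output p_ch."""
    d = w1 * (R @ p_hh) + (1 - w1) * p_ch     # the fusion rule, with w1 + w2 = 1
    return CLASSES[int(np.argmax(d))]

# color classifier says "salient" (0.9); Color-HOG classifier favours wood
print(joint_decision(np.array([0.9, 0.1]), np.array([0.05, 0.2, 0.05, 0.1, 0.6])))  # wood
```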
3. Results and discussions
3.1 Salient features
The envelopes of the HH, B-HOG, and Color-HOG features of the 5 CW categories are displayed in Fig 5. The HH features of the CW images are shown in the first row. We observe that the probability curve of each category has two peaks, but the information distributions differ: the first peaks of the brick and wood materials are narrow, whereas the peaks of the plastic, concrete, and foam materials are wide. Therefore, HH features can be used to distinguish salient objects (i.e., bricks and wood) from the others and serve as a basis for CW classification. The B-HOG features of the 5 CW categories are shown in the second row of Fig 5. A bag of visual words over HOG descriptors is built through the K-means algorithm, and a histogram of the visual words is used to describe the local information of the CW images. The envelopes of the histograms show that the B-HOG features can also be regarded as a basis for CW classification. Inspired by the fusion mechanism, the HH features and B-HOG features are concatenated to form the Color-HOG features, shown in the third row of Fig 5.
(1) HH features, (2) B-HOG features, and (3) Color-HOG features.
3.2 Effects of parameter tuning
The number of visual words is a significant parameter that affects the performance of the TLF method. We use the K-means clustering method to construct BoVW vocabularies of different sizes and determine the optimal parameter. The accuracies of the TLF method with different vocabulary sizes are shown in Table 1. With a small vocabulary, different significant information may be merged into one cluster. As the vocabulary size increases, more CW details can be captured, but a large vocabulary tends to overfit. The highest average accuracy of 96.32% is obtained with a vocabulary size of 250. Therefore, the number of visual words in the proposed method is set to 250.
In the joint decision model, ω1 and ω2 represent the fusion weights of the outputs of the two base classifiers and satisfy ω1 + ω2 = 1. To determine the optimal fusion weights, we conduct extensive experiments with different values of ω1. For ω1 = 1, only color features are used to classify CW, as described in section 2.5; in this instance, CW can only be divided into salient objects and other materials. For ω1 ∈ [0, 1], the average accuracy curves of the 5 CW categories are shown in Fig 6. When ω1 < 0.5, the evidence from the Color-HOG feature space plays the leading role; during this phase, the accuracy curves indicate that the classification accuracy increases as ω1 increases. For ω1 > 0.5, the evidence from the color feature space plays the dominant role, and the curves show a downward trend. This indirectly demonstrates that the features from the Color-HOG feature space and the HH feature space are equally important. In our work, we recommend taking ω1 = 0.5 as the default value.
3.3 Evaluation of classification performance
In this section, the CW images described in section 2.2 are used to assess the performance of the proposed method. Five-fold cross-validation is applied to all image sets, and confusion matrices are used to represent the performance of the proposed algorithm. The results of the proposed method with three base classifiers are shown in Tables 2–4; the base classifiers are the SVM [28], K-nearest neighbor (KNN) [29], and random forest (RF) [30] methods. The recall of plastics in the FLF and TLF methods based on the SVM classifier is 99.2%, which is higher than that of bricks, concretes, foams, and woods. Table 3 shows that the recall of plastics is the highest, at 98.4%. Table 4 indicates that in the TLF method, the recall of bricks is higher than that of plastics. Generally speaking, the recall values of most materials are higher than 90%. However, with the RF classifier, the recall of concrete and foam is 89.6% and 87.2%, respectively; if we replace the base classifier, the recall of concretes and foams can be improved. In other words, our proposed methods still show superior performance. In addition, precision is also used to evaluate the FLF and TLF methods. Table 2 shows that the precision of plastics in the FLF and TLF methods is 98.4% and 99.2%, respectively, which is higher than that of the other materials. Based on the KNN classifier, the precision of plastics is the highest, at 98.4%. Although Table 4 indicates that concretes and foams are slightly confused, the average accuracies of the FLF and TLF methods are 92.8% and 93%, respectively; this may be related to the performance of the base classifiers. Overall, our proposed CW classification method achieves good performance. Some of the results are visualized in Figs 7 and 8, where correct classification results are marked in green and incorrect results in red.
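The evaluation protocol (five-fold cross-validation summarized by a confusion matrix) can be sketched with scikit-learn on synthetic stand-in features; the per-class sample count and classifier settings here are illustrative, not the paper's exact setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# 125 synthetic samples, 25 per class, with a class-dependent offset so they are separable-ish
X = rng.random((125, 20)) + np.repeat(np.eye(5), 25, axis=0) @ rng.random((5, 20))
y = np.repeat(np.arange(5), 25)

# cross_val_predict uses stratified 5-fold splitting for classifiers with cv=5
pred = cross_val_predict(SVC(), X, y, cv=5)
cm = confusion_matrix(y, pred)       # 5x5 matrix; diagonal = per-class correct counts
print(cm.shape)                      # (5, 5)
```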
3.4 Testing in various conditions
In industry, the environment of a construction jobsite is variable: shaking of the capturing system produces noise, and because CW contains demolished materials, the sizes of the objects may vary. To verify the robustness of our proposed approaches, we test the performance of the TLF and FLF methods under different levels of noise and different scales. The results are reported in Fig 9. As the scale changes, the accuracy curves show a slight downward trend; however, when the size of the CW image changes from a scale of 0.5 to 1.5, the overall accuracy remains above 80%. As the noise increases, the accuracy curves show a downward trend, but when the noise intensity is lower than 0.3%, the classification accuracy of the proposed methods remains above 80%. Although the types of noise at a construction jobsite vary, the proposed methods generally maintain good performance.
The robustness of the proposed methods, (a) the effect of scale changes, and (b) the effect of noise intensity.
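The perturbations used in such a robustness sweep can be sketched as follows. The nearest-neighbour rescaling and salt-and-pepper noise model are our assumptions about how "scale" and "noise intensity" were varied; the paper does not specify the exact perturbation functions.

```python
import numpy as np

def rescale_nn(img, scale):
    """Nearest-neighbour rescale, sufficient for a robustness sweep."""
    h, w = img.shape[:2]
    rows = (np.arange(int(h * scale)) / scale).astype(int)
    cols = (np.arange(int(w * scale)) / scale).astype(int)
    return img[rows][:, cols]

def salt_pepper(img, rate, rng):
    """Flip roughly a `rate` fraction of pixels to 0 or 255."""
    out = img.copy()
    mask = rng.random(img.shape[:2]) < rate
    out[mask] = rng.choice([0, 255], size=mask.sum())[:, None]
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
small = rescale_nn(img, 0.5)            # scale factor from the 0.5-1.5 range in the text
noisy = salt_pepper(img, 0.003, rng)    # 0.3% noise intensity, as quoted in the text
print(small.shape, noisy.shape)         # (32, 32, 3) (64, 64, 3)
```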
3.5 Comparison of classification performance
To verify the effectiveness of the new Color-HOG feature in our proposed approach, we conduct comparison experiments with other traditional features, i.e., HOG [31], LBP [32], SURF [33], and SIFT [34]. In the comparison experiments, the traditional features are also coded by visual words. As shown in Table 5, based on the SVM classifier, the average accuracies of LBP-BOW [14], SURF-BOW [15], SIFT-BOW [13], HOG-BOW [12], and Gabor Wavelets [19] are 71.36%, 83.36%, 83.84%, 59.04%, and 94.72%, respectively. Moreover, Tables 5–7 show that the FLF and TLF methods achieve improvements of more than 10% over the other methods except Gabor Wavelets. Our proposed methods do not always give the best result on every test; for example, the Gabor Wavelets method achieves the best result with the SVM classifier on the Test2 dataset. However, if we replace the base classifier, our proposed methods can achieve the desired results, so this phenomenon may be related to the potential of the base classifiers. Generally speaking, the average accuracy of our proposed method is better than that of the Gabor Wavelets method. In other words, our proposed methods remain competitive.
In addition to the above-mentioned classic models based on handcrafted features, deep learning models have also become a trend. VGG-16 [9], ResNet-50 [6], and the Vision Transformer network (ViT) [8] are three classic deep learning models used for comparison with the TLF method. Table 8 shows that the precision of VGG-16, ResNet-50, and ViT is 94.0%, 95.5%, and 96.2%, respectively. Among these deep learning models, ViT has the highest precision, but it is still lower than that of the TLF method: the precision of the TLF method is 2.32% higher than that of VGG-16 and 0.82% higher than that of ResNet-50. Table 9 shows that the recall of VGG-16, ResNet-50, and ViT is 93.6%, 95.2%, and 96.0%, respectively. Compared with VGG-16 and ResNet-50, the recall of ViT is the highest; however, the recall of the TLF method is 0.32% higher than that of ViT. Table 10 shows that the classification accuracy of VGG-16, ResNet-50, and ViT is 93.6%, 95.2%, and 96.0%, respectively. Compared with these deep learning models, the TLF method has higher accuracy. Although these deep learning models have achieved good performance on many datasets, the TLF method achieves better performance on the CW dataset.
4. Conclusions
With the increasing focus on preserving the environment, CW recycling has become an important topic, and sorting a large amount of CW precisely and quickly is an urgent problem. This research shows that it is feasible to classify CW by computer vision. Motivated by the characteristics of the human visual system, the TLF method is proposed to classify CW materials in this work. The TLF method is based on a joint model of feature-level fusion and decision-level fusion. For the former, a statistical histogram and a BoVW method are applied to capture color features and HOG descriptors from a CW image, respectively, and a new feature named Color-HOG is constructed. For the latter, we fuse decisions from two base classifiers, which are learned from HH features and Color-HOG features. We name the model based on feature-level fusion alone FLF, which is a special case of the TLF method. Compared with other state-of-the-art methods, the FLF and TLF methods have higher accuracy: the classification accuracy of the FLF method based on the three base classifiers is 95.2%, 94.4%, and 92.96%, higher than that of the other state-of-the-art methods. Experiments demonstrate that Color-HOG is a robust feature for representing the discriminative characteristics of CW. Compared with the FLF method, the TLF method has higher accuracy: the accuracy of the TLF method based on the SVM classifier is 1.12% higher than that of the FLF method. In addition, we conduct experiments under various conditions, and the results show that the proposed method performs well under different conditions. In other words, the TLF method is an effective tool to promote the sorting and recycling of CW, which will help reduce construction and CW management costs.
References
- 1. Vieira C.S., Pereira P.M., Use of recycled construction and demolition materials in geotechnical applications: a review. Resour. Conserv. Recycl. 2015; 103:192–204. https://doi.org/10.1016/j.resconrec.2015.07.023
- 2. Xiao W., Yang J., Fang H., Zhuang J., Ku Y., Zhang X., Development of an automatic sorting robot for construction and demolition waste. Clean Technol. Environ. Policy. 2020; 22:1–13. https://doi.org/10.1007/s10098-020-01922-y
- 3. Modu B. and Umar M.M., Multiobjective Mathematical Optimization Model for Municipal Solid Waste Management with Economic Analysis of Reuse/Recycling Recovered Waste Materials. Journal of Computational and Cognitive Engineering. 2022; 1(3):122–137. https://doi.org/10.47852/bonviewJCCE149145
- 4. Duan H., Li J., Construction and demolition waste management: China’s lessons. Waste Manag. Res. 2016; 34(5):397. pmid:27178091
- 5. Ajayi S.O., Oyedele L.O., Policy imperatives for diverting construction waste from landfill: experts’ recommendations for UK policy expansion. J. Cleaner Prod. 2017; 147:57–65. https://doi.org/10.1016/j.jclepro.2017.01.075
- 6. Talo M., Automated Classification of Histopathology Images Using Transfer Learning. Artif. Intell. Med. 2019;101. pmid:31813483
- 7. Zheng M., Zhi K., Zeng J., A Hybrid CNN for Image Denoising. J. Artif. Intell. Technol. 2022; 2(3):93–99. https://doi.org/10.37965/jait.2022.0101
- 8. Dosovitskiy A., Beyer L., Kolesnikov A., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. International Conference on Learning Representations. 2021. https://doi.org/10.48550/arXiv.2010.11929
- 9. Davis P., Aziz F., Newaz M.T., The classification of construction waste material using a deep convolutional neural network. Autom. Constr. 2021;122. https://doi.org/10.1016/j.autcon.2020.103481
- 10. Ali N., Bajwa K.B., Sablatnig R., A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF. PLOS ONE. 2016; 11(6). pmid:27315101
- 11. Masood F., Masood J., Zahir H., Novel Approach to Evaluate Classification Algorithms and Feature Selection Filter Algorithms using Medical Data. Journal of Computational and Cognitive Engineering. 2022; 1(3):122–137. https://doi.org/10.47852/bonviewJCCE2202238
- 12. Abouzahir S., Sadik M., Sabir E., Bag-of-visual-words-augmented histogram of oriented gradients for efficient weed detection. Biosyst. Eng. 2021; 202(3):179–194. https://doi.org/10.1016/j.biosystemseng.2020.11.005
- 13. Rao L. J., Neelakanteswar P., Ramkumar M., Krishna A., Basha C.Z., An Effective Bone Fracture Detection using Bag-of-Visual-Words with the Features Extracted from SIFT. In: International Conference on Electronics and Sustainable Communication Systems. 2020. https://doi.org/10.1109/ICESC48915.2020.9156035
- 14. Araujo J.M.M., Peixoto Z., A new proposal for automatic identification of multiple soybean diseases. Comput. Electron. Agric. 2019; 167:105060. https://doi.org/10.1016/j.compag.2019.105060
- 15. Aslan M. F., Durdu A., Sabanci K., Human action recognition with bag of visual words using different machine learning methods and hyperparameter optimization. Neural. Comput. Appl. 2020; 32(12):8585–8597. https://doi.org/10.1007/s00521-019-04365-9
- 16. Setiawan W., Wahyudin A., Widianto G.R., The use of scale invariant feature transform (SIFT) algorithms to identification garbage images based on product label. In: International Conference on Science in Information Technology. 2017. https://doi.org/10.1109/ICSITech.2017.8257135
- 17. Lee J. H., Sang O.P., Machine learning-based automatic reinforcing bar image analysis system in the internet of things. Multimed. Tools. Appl. 2018; 78:3171–3180. https://doi.org/10.1007/s11042-018-5984-7
- 18. Zhao J., Xia X., Wang H., Kong S., Design of Real-Time Steel Bars Recognition System Based on Machine Vision. In: 2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics. IEEE. 2016; pp:505–509. https://doi.org/10.1109/IHMSC.2016.75
- 19. Rashidi A., Sigari M. H., Maghiar M., Citrin D. An analogy between various machine-learning techniques for detecting construction materials in digital images. KSCE J. Civ. Eng. 2016; 20(4):1178–1188. https://doi.org/10.1007/s12205-015-0726-0
- 20. Xiao W., Yang J, Fang H., Zhuang J, Ku Y., A robust classification algorithm for separation of construction waste using NIR hyperspectral system. Waste Manage. 2019; 90:1–9. pmid:31088664
- 21. Xiao W., Yang J., Fang H., Zhuang J., Ku Y., Zhang J., Development of online classification system for construction waste based on industrial camera and hyperspectral camera. PLOS ONE. 2019; 14(1): e0208706. pmid:30650081
- 22. Lyu W., Lu W., Ma M., No-reference quality metric for contrast-distorted image based on gradient domain and HSV space—ScienceDirect. J. Vis. Commun. Image Represent. 2020; 69:117–128. https://doi.org/10.1016/j.jvcir.2020.102797
- 23. Hamuda E., Ginley B.M., Glavin M., Jones E., Automatic crop detection under field conditions using the HSV colour space and morphological operations. Comput. Electron. Agr. 2017; 133:97–107. https://doi.org/10.1016/j.compag.2016.11.021
- 24. Hossain M., Islam M., A new approach of content-based image retrieval using color and texture features. Br. J. Appl. Sci. Technol. 2017; 21(3):1–16. https://doi.org/10.9734/BJAST/2017/33326
- 25. Nugroho H.A., Goratama R.D., Frannita E.L., Saturation channel extraction of HSV color space for segmenting plasmodium parasite. IOP Conf. Ser.: Mater. Sci. Eng. 2021; 1088(1):012073 (7pp). https://doi.org/10.1088/1757-899X/1088/1/012073
- 26. Petra B., Tom D, Grzegorz C., Analysis of morphology-based features for classification of crop and weeds in precision agriculture. IEEE Robot. Autom. Lett. 2018; 99:1–1. https://doi.org/10.1109/LRA.2018.2848305
- 27. Said Y., Atri M., Tourki R., Human detection based on integral Histograms of Oriented Gradients and SVM. In: International Conference on Communications. IEEE. 2011. https://doi.org/10.1109/CCCA.2011.6031422
- 28. Chih-Chung C., Chih-Jen L., LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011; 2(3):1–39. https://doi.org/10.1145/1961189.1961199
- 29. Fernando H., Marshall J., What lies beneath: material classification for autonomous excavators using proprioceptive force sensing and machine learning. Automat. Constr. 2020; 119:103374. https://doi.org/10.1016/j.autcon.2020.103374
- 30. Rutkowski L., Pietruczuk L., Duda P., Jaworski M., Decision trees for mining data streams based on the McDiarmid’s bound. IEEE Trans. Knowl. Data Eng. 2013; 25(6):1272–1279. https://doi.org/10.1109/TKDE.2012.66
- 31. Hu L., Liu C., Wu X., Image Segmentation of Rape Based on EXG and Lab Spatial Threshold Algorithms. In: Artificial Intelligence and Computer Science. 2019; 384–389. https://doi.org/10.1145/3349341.3349436
- 32. Wang C., Lee W. S., Zou X., Choi D, Gan H., Diamond J., Detection and counting of immature green citrus fruit based on the local binary patterns (LBP) feature using illumination-normalized images. Precis. Agric. 2018; 19:1062–1083. https://doi.org/10.1007/s11119-018-9574-5
- 33. Anna B., Eyal B.D., Automatic registration of airborne and spaceborne images by topology map matching with surf processor algorithm. Remote Sens. 2011; 3(1):65–28. https://doi.org/10.3390/rs3010065
- 34. Alamri J., Harrabi R., Ben S., Face recognition based on convolution neural network and scale invariant feature transform. IJACSA. 2021; 12(2). https://doi.org/10.14569/IJACSA.2021.0120281