
Research on maize canopy center recognition based on nonsignificant color difference segmentation

  • Xiushan Wang,

    Roles Conceptualization, Funding acquisition

    Affiliation Department of Electrical Engineering, College of Mechanical & Electrical Engineering of Henan Agricultural University, Zhengzhou, Henan, China

  • Hehu Zhang ,

    Roles Software, Writing – original draft

    henauzhh@163.com

    Affiliation Department of Electrical Engineering, College of Mechanical & Electrical Engineering of Henan Agricultural University, Zhengzhou, Henan, China

  • Ying Chen

    Roles Investigation, Project administration

    Affiliation College of Humanities & Law of Henan Agricultural University, Zhengzhou, Henan, China

Abstract

Weed control is a substantial challenge in field management, and better weed control at early growth stages is important for increasing yields. As a promising weed control technique, intelligent weeding based on machine vision can avoid the harm of chemical weeding. For machine vision, it is critical to extract and segment crops from their background; however, there is still no optimal solution for tracking occluded objects against similar-colored backgrounds. In this study, the gray distribution of the maize canopy was found to follow a gradient law. Therefore, a recognition method based on HLS-SVM (HLS color space and Support Vector Machine) segmentation and the grayscale gradient was developed. First, the HLS color space was used to detect the maize canopy. Second, the SVM method was used to segment the central region of the maize canopy. Finally, the maize canopy center was identified according to the gradient law. The results showed that the average segmentation time was 0.49 s, the average segmentation quality was 87.25%, and the standard deviation of the segmentation quality was 3.57%. The average recognition rate of the center position was 93.33%. This study provides a machine vision method for intelligent weeding equipment as well as a theoretical reference for further agricultural machine vision research.

Introduction

Maize is one of the world's three largest food crops, and approximately one-third of the world's population uses maize as a primary food source [1]. Maize is also one of the main feedstuffs for animal husbandry. Reasonable field management can ensure maize yields [2]. However, in the early stage of maize growth, serious weed damage can make maize plants grow extremely slowly, so weed control is an important field management practice for improving maize yields. Weed control methods are mainly divided into manual weeding, chemical weeding, mechanical weeding and automatic intelligent weeding. Due to population aging and rising labor prices, time-consuming and tedious manual weeding has gradually been replaced by simple and effective chemical weeding. However, chemical herbicides cause environmental pollution, reduce maize nutrition and increase weed herbicide resistance. In recent years, mechanical weeding has been used instead of chemical weeding, but mechanical weeding without a perceptual system easily damages the roots and stems of crops [3]. With the rapid development of machine vision, intelligent agricultural equipment is advancing rapidly [4]. Intelligent agricultural equipment with machine vision is more promising than traditional equipment for fruit harvesting, weed control and navigation in the field. For example, machine vision has been used to recognize seedling positions in transplanting [5], to identify excess sugar beet plants for selective thinning [6], to identify weeds for intelligent weeding machines [7, 8], to identify tomatoes for a tomato cluster picking robot [9], and to locate the positions of green peppers [10].
Although machine vision is widely used in agriculture, weather and complex backgrounds limit the recognition rate and positioning accuracy, and there is still no good solution for tracking occluded objects against similar-colored backgrounds. Furthermore, image segmentation is important and is perhaps the most difficult task, with a direct effect on the recognition results [11]. In agricultural image processing, segmentation methods based on color index thresholds and on learning are often adopted. Color index segmentation falls into two categories: green index methods based on different weights of the color components, and comprehensive methods that combine different green indices. For example, the color index of vegetation extraction (CIVE) was proposed by Kataoka in 2003 [12], the normalized green-red difference index (NGRDI) was proposed by Hunt in 2005 [13], and the vegetation index (VEG) for separating plant (grain and weed) pixels from soil pixels was proposed by Hague in 2006 [14]. Learning-based segmentation methods also fall into two categories. One is unsupervised learning; for example, an intensified fuzzy clustering method was proposed by Meyer in 2004 [15]. The other is supervised learning; for example, a supervised mean shift algorithm based on back propagation neural networks was proposed by Zheng in 2009 [16], and, to address illumination variation, a learning method based on a decision tree was proposed by Guo in 2013 [17].

All the above methods and applications focus mainly on plant segmentation and recognition, and continually improving the segmentation effect is the key to solving recognition, classification and location problems. It is easy to use machine vision to locate a target under significant color differences, but target recognition against similar-colored backgrounds has no mature, effective and universal solution. Therefore, real-time recognition of the maize canopy center position against a similar-colored background urgently needs to be solved. This work developed a maize canopy center recognition method based on HLS-SVM (HLS color space and Support Vector Machine) segmentation and the gray gradient law, and achieved accurate identification of maize center positions. The method is divided into three stages. First, an RGB image was converted to an HLS image, and the green vegetation was segmented from the background using HLS color segmentation [18]. Then, the SVM classifier was used to separate the blue central region of the maize canopy, namely, the region of interest (ROI), from the residual vegetation region, namely, the non-region of interest (N-ROI) [19-21]. Since some weed regions have a color similar to the ROI, the weeds cannot be completely removed by SVM segmentation. Fortunately, the weeds and the ROI were isolated from each other after SVM segmentation, and the ROI occupied the largest area in the image, so the remaining weeds were filtered out using the area filtration method. Because weeds and the maize canopy occlude each other, small weeds were still connected to the ROI after filtering; to eliminate them, morphological processing was used to obtain a smooth ROI [22]. Finally, the maize canopy center was identified from the ROI. Before identifying the canopy center, the gray distribution of the ROI was analyzed.

It was found that, in all the grayscale images, the gray level increases from the canopy center to the canopy periphery following a gradient law, which is a unique attribute of the chlorophyll distribution of the maize canopy. This feature can be used to segment the different central hierarchies of the canopy, which provides a favorable condition for more accurate canopy center identification. The statistics showed that the gray levels of the maize canopy center were concentrated in the 0-30 range. Therefore, the gray threshold was set to 30, and the maize canopy center was identified by threshold segmentation.

Materials and methods

Materials

Experimental field and maize plant.

The research was carried out in the experimental base of the National Maize Cooperative Innovation Center in China. Maize plants were planted according to the agronomic requirements to facilitate mechanized operation and identification by the intelligent equipment. The row spacing was 60 cm, and the plant spacing was 40 cm. The video was collected from the spring-sown maize canopy. The average height of the maize plants was 40-60 cm, and the speed of the video tracking car was 1.2 m/s.

Machine visual experimental platform parameters.

Video acquisition peripherals: a video orbiter, a 15-meter camera rocker arm, and 10 straight track cars.

Camera parameters: The Q-PRI is a portable high-speed camera made by AOS (AOS Technologies AG) in Switzerland. Its 3-megapixel resolution is suitable for image acquisition of fast-moving objects, and the frame rate at this resolution is up to 500 frames per second.

Video processing hardware: CPU: Intel(R) Core(TM) i5-6500 @ 3.3 GHz; processor architecture: x86_64; 4 cores with 1 thread per core; cache size: 6144 KB; memory: 4 GB; data hard disk: 1 TB.

Video processing software: operating system: Ubuntu 16.04; video image processing languages: C++ and Python.

Recognition method of the maize canopy center position.

This part presents the recognition method for the maize canopy center position. The three main steps are as follows.

  1. Global segmentation based on HLS mask: The RGB image is converted to an HLS image, and the color detection method is used to segment the green vegetation from the background.
  2. Local segmentation based on Linear SVM: The SVM classifier is used to extract the ROI in the green vegetation.
  3. Canopy center recognition based on the grayscale gradient law: According to the grayscale increasing phenomenon from the canopy center to the canopy periphery, the canopy center position is identified from the ROI.

Global segmentation based on HLS mask.

To identify the maize canopy center, the background first needs to be filtered out. Previous studies have used methods such as the Standard Deviation Index, Excess Green Index, Excess Red Index, Color Index of Vegetation Extraction, combined Ex-G, Ex-GR, CIVE and VEG indices, and the Hue Index. This article uses an HLS-space mask to filter the background. First, the image is converted from the RGB color space to the HLS space (Fig 1A and 1B), and then it is filtered with the HLS mask (Fig 1C). The HLS thresholds are given by expression (1):

Mask(x, y) = 1 if Hmin ≤ H(x, y) ≤ Hmax, Lmin ≤ L(x, y) ≤ Lmax and Smin ≤ S(x, y) ≤ Smax, and Mask(x, y) = 0 otherwise, (1)

where x and y are pixel coordinates in the image, H is the hue, L is the lightness, and S is the saturation component of the image.

Fig 1. Global segmentation based on HLS mask.

(A) RGB image. (B) HLS image. (C) Mask image.

https://doi.org/10.1371/journal.pone.0202366.g001
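As a rough illustration of this masking step, the HLS filtering can be sketched in Python with the standard library's colorsys module. The threshold ranges below are illustrative assumptions, since the paper does not report its exact limits:

```python
import colorsys
import numpy as np

def hls_mask(rgb, h_range=(0.2, 0.45), l_range=(0.1, 0.6), s_range=(0.2, 1.0)):
    """Binary HLS mask: 255 where H, L and S all fall inside the given ranges.

    rgb: H x W x 3 float array in [0, 1]. The threshold ranges are assumed
    values for illustration; the paper does not report its exact limits.
    """
    rows, cols, _ = rgb.shape
    mask = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            h, l, s = colorsys.rgb_to_hls(*rgb[i, j])
            if (h_range[0] <= h <= h_range[1]
                    and l_range[0] <= l <= l_range[1]
                    and s_range[0] <= s <= s_range[1]):
                mask[i, j] = 255
    return mask

# Toy 2x2 image: one mid-green (vegetation-like) pixel, then white, red, black.
img = np.array([[[0.2, 0.5, 0.2], [0.9, 0.9, 0.9]],
                [[0.5, 0.2, 0.2], [0.0, 0.0, 0.0]]])
mask = hls_mask(img)  # only the green pixel survives the mask
```

On real frames the per-pixel loop would be vectorized (e.g. with OpenCV's cvtColor and inRange), but the explicit form keeps the threshold test of expression (1) visible.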

The color distribution is mainly concentrated in two regions. The central region (the ROI), given by expression (2), is dark blue, and the surrounding region (the N-ROI) is green. The canopy center is located in the ROI, so the ROI is segmented again:

ROI = {I(x, y) | Mask(x, y) = 1 and I(x, y) lies in the dark blue color range}, (2)

where I is the HLS image and ROI is the region of interest in the HLS image, namely, the central region of the maize canopy.

Local segmentation based on linear SVM.

To segment the ROI, the SVM classifier is used to identify the ROI and N-ROI. The SVM is a machine learning method, which is a model based on the statistical learning theory and the structural risk minimization proposed by Vapnik [23, 24]. Because of the excellent learning ability of the SVM and the relatively low over fitting rate, the SVM model has good classification performance [25]. The local segmentation based on the SVM is described as follows (Fig 2).

Fig 2. Local segmentation based on the Linear SVM.

Candidate regions from the ROI are selected as positive training samples, and candidate regions from the N-ROI as negative training samples. The SVM model is then fitted, and finally the SVM classifier is used to segment the HLS image.

https://doi.org/10.1371/journal.pone.0202366.g002

The local segmentation based on the Linear SVM includes four steps: training, parameter optimization, testing, and post processing. The next section details the implementation of the SVM segmentation model.

(1) SVM kernel function selection and parameter optimization

To judge whether the ROI and the N-ROI in the HLS image are linearly separable, it is necessary to understand the distribution of the two regions' pixels in the color space. The pixels of the training and testing images are mapped into the HLS color space (Fig 3). The red and blue solid points represent the positive and negative training samples, respectively, which are distributed in two different regions. The green and yellow hollow points represent the ROI and the N-ROI of the canopy, respectively. It can be seen that the two kinds of pixels can be separated by a Linear SVM classification hyper-plane. Therefore, by using the linear SVM kernel, the ROI and N-ROI can be separated. The SVM classification function is shown in formula (3):

f(x) = sign(Σ(i = 1..N) αi yi (xi · x) + b), (3)

where x is the HLS feature vector of a pixel, xi and yi are the training sample vectors and their labels, αi and b are the learned coefficients and bias, and N is the number of samples.

The predicted results are stored in the label vector Vlabel(n), as shown in formula (4):

Vlabel(n) = f(xn) ∈ {+1, -1}, n = 1, 2, …, w × h, (4)

where xn is the feature vector of the n-th pixel and w × h is the number of pixels in the image.

Vlabel is then converted to a matrix named Masksvm, as shown in formula (5):

Masksvm(x, y) = 255 if Vlabel(y × w + x) = +1, and 0 otherwise. (5)
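A minimal sketch of this stage, assuming synthetic stand-ins for the hand-labelled HLS pixel samples: a linear SVM is fitted, every pixel is classified, and the label vector is folded back into a binary mask as in formulas (4) and (5):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic HLS pixel features standing in for the hand-labelled samples:
# positive (ROI, dark blue in the HLS rendering) vs. negative (N-ROI).
rng = np.random.default_rng(0)
roi_train = rng.normal([0.6, 0.2, 0.7], 0.05, (100, 3))
nroi_train = rng.normal([0.3, 0.5, 0.4], 0.05, (100, 3))
X = np.vstack([roi_train, nroi_train])
y = np.hstack([np.ones(100), -np.ones(100)])

clf = SVC(kernel="linear", C=0.1)  # linear kernel, C = 0.1 as in the paper
clf.fit(X, y)

# Classify every pixel of a small HLS "image", then fold the label vector
# V_label (formula (4)) back into a 2-D mask (formula (5)): +1 -> 255, -1 -> 0.
h, w = 4, 5
pixels = rng.normal([0.6, 0.2, 0.7], 0.05, (h * w, 3))
v_label = clf.predict(pixels)
mask_svm = np.where(v_label.reshape(h, w) == 1, 255, 0).astype(np.uint8)
```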
Fig 3. Linear SVM hyper-plane and green vegetation pixel distribution in the HLS color space.

https://doi.org/10.1371/journal.pone.0202366.g003

After selecting the SVM kernel function, cross validation is performed to find the best parameters, including the penalty factor C and the kernel coefficient g. In addition, the best sample size is selected according to the SVM learning curve. The training sets and 4-fold cross validation are used to build the Linear SVM model.

Through the above cross validation, it is found that the variable g does not significantly affect the performance of the Linear SVM, so it is set to auto. Therefore, the penalty factor C is analyzed as the main parameter. The red and green dashed lines represent the upper and lower variances, respectively, and the blue solid line is the average cross validation curve (Fig 4A). When C is small, the variance of the score is very large, and the score increases as C increases. After C ≥ 0.1, the variance gradually becomes small and stable. Thus, C = 0.1 is used as the best parameter.
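The parameter search described above can be sketched with scikit-learn's validation_curve on synthetic stand-in data (the feature distributions below are assumptions, not the paper's samples):

```python
import numpy as np
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

# Synthetic two-class pixel features in place of the labelled HLS samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.6, 0.1, (100, 3)), rng.normal(0.3, 0.1, (100, 3))])
y = np.hstack([np.ones(100), -np.ones(100)])

# 4-fold cross validation over a range of penalty factors C (cf. Fig 4A).
C_range = [0.001, 0.01, 0.1, 1.0, 10.0]
train_scores, val_scores = validation_curve(
    SVC(kernel="linear", gamma="auto"), X, y,
    param_name="C", param_range=C_range, cv=4)

mean_val = val_scores.mean(axis=1)          # average cross validation curve
best_C = C_range[int(np.argmax(mean_val))]  # C with the highest mean score
```

Plotting mean_val against C_range, with the per-fold spread as a shaded band, reproduces the shape of the cross validation curve in Fig 4A.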

Fig 4. SVM parameter.

(A) C cross validation curves. (B) Learning curves (linear kernel, g = 0.001).

https://doi.org/10.1371/journal.pone.0202366.g004

The characteristics of the support vectors enable the classifier to obtain very good classification results with only a few training samples, without serious over-fitting. The red curve is the training score, the green curve is the cross validation score, and the light green area is the variance (Fig 4B). As the number of samples increases, the score increases and the variance decreases. When the sample size is 100, the cross validation score is the largest and the variance is the smallest; therefore, a sample size of 100 is selected.

(2) Post processing of the SVM segmentation image

It can be seen from the HLS mask image that both the ROI and some of the surrounding weeds appear blue. After the SVM segmentation, the blue weeds cannot be completely removed and need to be removed by post processing.

After the SVM segmentation, the ROI is separated from most of the weeds and occupies most of the image. As a result, the remaining weeds can be filtered out using the area filtering method (Fig 5A and 5B). Then, morphological erosion is applied to the ROI to eliminate the residual boundary areas (Fig 5C). The image is eroded with the structuring element S, as defined in formula (6):

(Masksvm ⊖ S)(x, y) = min over (x′, y′) ∈ DS of Masksvm(x + x′, y + y′), (6)

where DS is the domain of S, (x, y) is the pixel position in Masksvm, and (x′, y′) is the pixel position in the kernel. After the area filtering and morphological processing, the residual weeds are filtered out and the ROI mask is retained (Fig 5D).
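A sketch of this post-processing chain, assuming SciPy's ndimage for connected-component labelling and erosion; the toy mask stands in for a real SVM output:

```python
import numpy as np
from scipy import ndimage

def postprocess(mask_svm, erosion_iters=1):
    """Keep the largest connected component (area filtering), then erode it
    to smooth the boundary, as a sketch of the paper's post-processing."""
    binary = mask_svm > 0
    labels, n = ndimage.label(binary)           # connected-component labelling
    if n == 0:
        return np.zeros_like(mask_svm)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1         # label of the biggest region
    roi = labels == largest                     # area filter: drop small blobs
    roi = ndimage.binary_erosion(roi, iterations=erosion_iters)  # erosion step
    return (roi * 255).astype(np.uint8)

# Toy SVM mask: a 4x4 canopy block plus one isolated "weed" pixel.
m = np.zeros((8, 8), dtype=np.uint8)
m[1:5, 1:5] = 255
m[6, 6] = 255
clean = postprocess(m)  # the weed pixel is gone; the block is eroded
```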

Fig 5. The post processing of the SVM segmentation.

(A) SVM segmentation. (B) Area filter. (C) Erosion. (D) ROI.

https://doi.org/10.1371/journal.pone.0202366.g005

Recognition of the canopy center based on grayscale gradient.

The ultimate aim is to acquire the maize canopy's center position. The results revealed that the ROI gray values follow a gradient law, which is an essential feature of the chlorophyll distribution in maize leaves (Fig 6). This feature can be used to segment the ROI and provides a favorable condition for more accurate center identification.

Fig 6. Gray gradient distribution in the ROI.

The gray level range 0-255 is divided into 17 parts, and the tab20 color set is used to color the different gradation regions of the grayscale image in the ROI. The isohypses, from low to high, are [15, 30, 45, 60, 75, 90, 105, 120, 135, 150, 165, 180, 195, 210, 225, 240, 255].

https://doi.org/10.1371/journal.pone.0202366.g006

Although the increasing gray level is not linearly related to the radius, it is positively correlated with it. Statistical analysis shows that the gray levels in the ROI are mainly concentrated in the 0-30 range; therefore, the maize canopy center can be identified accurately by gray threshold segmentation.
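A minimal sketch of this center-finding step: threshold the gray ROI at 30 and take the centroid of the retained dark pixels. The radial toy image below is an assumption standing in for a real canopy ROI:

```python
import numpy as np

def canopy_center(gray_roi, threshold=30):
    """Centre of the canopy as the centroid of the darkest ROI pixels.

    Following the gradient law, pixels with gray value below the threshold
    (30, per the statistics above) are taken as the centre region.
    """
    ys, xs = np.nonzero(gray_roi < threshold)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

# Toy grayscale ROI whose brightness increases radially from (5, 5),
# mimicking the gradient law; an assumption, not real canopy data.
yy, xx = np.mgrid[0:11, 0:11]
gray = np.clip(np.hypot(xx - 5, yy - 5) * 25, 0, 255).astype(np.uint8)
cx, cy = canopy_center(gray)  # centroid of the dark centre
```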

Results

Results of local segmentation based on the Linear SVM

Of the 60 displayed frame segmentation results, 57 ROI are correctly extracted, while 3 frames show a poor segmentation effect (Fig 7). When the size of the canopy is similar to that of the weeds, the color distribution of the ROI in the HLS space is very close to that of the weeds and the surrounding maize leaves, and the SVM classifier mistakenly identifies the ROI as a weed.

Fig 7. Linear SVM classifier segmentation results for HLS Mask images.

https://doi.org/10.1371/journal.pone.0202366.g007

Results of canopy center recognition based on the grayscale gradient law

Previously, the HLS color and Linear SVM segmentation methods were used to obtain the ROI. The ROI's RGB image is converted to a gray image (Fig 8A and 8B), and then the gray gradient distribution in the ROI is analyzed (Fig 8C). The gray threshold method with T < 30 is then used to segment the canopy center (Fig 8D).

Fig 8. The results of the canopy center recognition based on the gray gradient distribution.

(A) Canopy. (B) Gray ROI. (C) Gray gradient ROI. (D) ROI center.

https://doi.org/10.1371/journal.pone.0202366.g008

Statistics of the maize canopy center recognition

The maize canopy centers are identified by this method, and the canopy centers of each group are counted. The identification results for 4 frames are displayed (Fig 9).

Fig 9. Statistics of the recognition rate of the maize canopy center.

The x and y values: the canopy center location; the area between the two blue lines: the recognition area; and the red circle: the center of the recognition.

https://doi.org/10.1371/journal.pone.0202366.g009

The canopy centers of the 1641 images in 12 groups were identified, and the test results showed that the average recognition rate of the canopy center was 93.33% (Table 1).

Table 1. Statistics of the maize canopy center recognition rate.

https://doi.org/10.1371/journal.pone.0202366.t001

Discussion

Performance of local segmentation based on the Linear SVM

The ROC (receiver operating characteristic) curve and AUC (area under the curve) are often used to evaluate a binary classifier; AUC is a very common evaluation index in machine learning, and the larger the AUC, the better the classification performance. The training set was used in four cross validation tests to calculate the AUC of each fold and the average AUC (Fig 10). The red, green, blue, and orange curves are the four cross validation ROC curves, the thick black dotted line is the average ROC, and the light-colored dotted line is the reference. The AUC of the mean ROC is 0.89, indicating that the Linear SVM has a low error rate when classifying HLS images.
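The fold-wise AUC evaluation can be sketched as follows, with synthetic stand-in pixel features in place of the paper's training set:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Synthetic labelled pixel features in place of the paper's training set.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.6, 0.15, (100, 3)), rng.normal(0.3, 0.15, (100, 3))])
y = np.hstack([np.ones(100), np.zeros(100)])

# Four-fold cross validation: one ROC AUC per fold, then the mean (cf. Fig 10).
aucs = []
cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = SVC(kernel="linear", C=0.1).fit(X[train_idx], y[train_idx])
    scores = clf.decision_function(X[test_idx])  # signed margin per pixel
    aucs.append(roc_auc_score(y[test_idx], scores))
mean_auc = float(np.mean(aucs))
```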

Fig 10. Receiver operating characteristic curve for the SVM classifier.

https://doi.org/10.1371/journal.pone.0202366.g010

Comparison of local segmentation performance based on the different methods

The accuracy of the segmentation methods is evaluated by the following performance measure [26]:

Qseg = [Σ(i = 1..m) Σ(j = 1..n) (Aij ∧ Bij)] / [Σ(i = 1..m) Σ(j = 1..n) (Aij ∨ Bij)], (7)

where A is the set of vegetation pixels produced by the algorithm (v = 0), B is a reference set of manually segmented vegetation pixels (v = 0), m and n are the image row and column sizes, i and j are the pixel coordinate indices, and Qseg measures the consistency of both the vegetation and background parts. The mean segmentation quality factor and its standard deviation over the evaluation image set can be derived from Qseg. Table 2 compares the ROI segmentation methods for images of size 1242 × 931 at 96 dpi under the same conditions.
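Qseg from formula (7) reduces to an intersection-over-union of the two binary masks, which can be computed directly:

```python
import numpy as np

def qseg(pred, ref):
    """Segmentation quality factor Qseg (formula (7)): pixel-wise agreement
    between the algorithm's vegetation mask and a manual reference mask,
    i.e. |pred AND ref| / |pred OR ref|."""
    a, b = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(a, b).sum() / union)

# Toy masks: the prediction recovers 3 of the 4 reference vegetation pixels.
pred = np.array([[1, 1, 0], [1, 0, 0]])
ref = np.array([[1, 1, 0], [1, 1, 0]])
q = qseg(pred, ref)  # 3 common pixels / 4 in the union = 0.75
```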

The average time consumption of the HLS-SVM method is 0.49 s, which is slower than the color index-based Ex-G and COM1 methods, but the difference among them is not obvious. In addition, the time consumption of the HLS-SVM is shorter than that of the single SVM method because the HLS step filters out a large number of background pixels and reduces the number of pixels to be segmented. In terms of the mean segmentation quality and its standard deviation, the segmentation effect of the HLS-SVM is better than that of the other methods, at the cost of higher computational complexity.

Comparison of local segmentation performance under different weathers and times

The ROI segmentation performance under different weather conditions and times was analyzed. A validation set of 600 samples was collected from two contrasting experimental fields under the following conditions: the times were 8:00, 12:00, and 16:00, and the weather included sunny, cloudy, and overcast conditions. The small change in the SD of the quality indicated that the HLS-SVM method is robust under different weather conditions and times (Fig 11A and 11B). In addition, the method had a better segmentation effect on clear mornings and at midday, which suggests that soft sunlight benefits the HLS-SVM method.

Fig 11. Mean and SD of quality factor.

(A) Samples from Zhengzhou. (B) Samples from Nanyang. Red indicates sunny days, green indicates cloudy, and blue indicates overcast. Each histogram is divided into three time periods. The table below the histogram records the average segmentation quality under each condition.

https://doi.org/10.1371/journal.pone.0202366.g011

When distinguishing similar colors, the machine learning method achieves high segmentation precision but is highly sensitive to the training data source, while the color index method, with lower segmentation precision, can quickly filter out a large number of unrelated background pixels. Combining the two methods achieves complementary advantages. The new method enables the weeding robot to identify the maize canopy more accurately against a similar green background while maintaining real-time performance. The gradient method then quantitatively reveals the gray level distribution of the canopy, allowing the weeding robot to perceive plant locations more accurately and reducing the risk of damage to plants.

Conclusions

In this paper, a novel recognition method for maize canopy centers based on machine vision was developed. The method combined HLS-SVM segmentation with the gray gradient distribution rule and achieved accurate position identification of the maize canopy against similar-colored backgrounds using multi-level segmentation. The recognition effect was better than that of any single identification method and is suitable for real-time identification by intelligent weeding equipment. The research provides a complete machine vision system for intelligent weeding agricultural equipment as well as a theoretical reference for further agricultural machine vision research.

Acknowledgments

This research is supported by the following projects: the National Natural Science Foundation of China (Grant U1204524), the Henan Province Science and Technology Key Project (Grant 172102210302), the Henan College Key Scientific Research Project (Grant 18A416002), and the Development of Key Technologies for an Electrically Driven Inter-row Weeding Robot Based on Machine Vision Technology (KF1607). The project team sincerely thanks the above organizations.

References

  1. Nuss ET, Tanumihardjo SA. Maize: a paramount staple crop in the context of global nutrition. Compr Rev Food Sci Food Saf. 2010;9(4):417-436.
  2. Peña JM, Torres-Sánchez J, de Castro AI, Kelly M, López-Granados F. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images. PLOS ONE. 2013;8(10):e77151. https://doi.org/10.1371/journal.pone.0077151
  3. Eyre MD, Critchley CNR, Leifert C, Wilcockson SJ. Crop sequence, crop protection and fertility management effects on weed cover in an organic/conventional farm management trial. Eur J Agron. 2011;34(3):153-162.
  4. Zhang J, Basso B, Price RF, Putman G, Shuai G. Estimating plant distance in maize using unmanned aerial vehicle (UAV). PLOS ONE. 2018;13(4):e0195223. pmid:29677204
  5. Huang YJ, Lee FF. An automatic machine vision-guided grasping system for Phalaenopsis tissue culture plantlets. Comput Electron Agric. 2010;70(1):42-51.
  6. Moghaddam PA, Arasteh AS, Komarizadeh MH, Babazadeh S. Developing a selective thinning algorithm in sugar beet fields using machine vision system. Comput Electron Agric. 2016;122:133-138.
  7. Li N, Zhang C, Chen Z, Ma Z, Sun Z, Yun T, et al. Crop positioning for robotic intra-row weeding based on machine vision. Int J Agric Biol Eng. 2015;8(6):20-29.
  8. Cordill C, Grift TE. Design and testing of an intra-row mechanical weeding machine for corn. Biosyst Eng. 2011;110(3):247-252.
  9. Kondo N, Yamamoto K, Shimizu H, Yata K, Kurita M, Shiigi T, et al. A machine vision system for tomato cluster harvesting robot. Eng Agric Environ Food. 2009;2(2):60-65.
  10. Eizentals P, Oka K, Harada A. 2016 International Symposium on Experimental Robotics. 2017;vol. 1.
  11. Abbasgholipour M, Omid M, Keyhani A, Mohtasebi SS. Color image segmentation with genetic algorithm in a raisin sorting system based on machine vision in variable conditions. Expert Syst Appl. 2011;38(4):3671-3678.
  12. Kataoka T, Kaneko T, Okamoto H, Hata S. Crop growth estimation system using machine vision. Proc 2003 IEEE/ASME Int Conf Adv Intell Mechatronics (AIM 2003). 2003;2:1079-1083.
  13. Hunt ER, Cavigelli M, Daughtry CST, McMurtrey JE, Walthall CL. Evaluation of digital photography from model aircraft for remote sensing of crop biomass and nitrogen status. Precis Agric. 2005;6(4):359-378.
  14. Hague T, Tillett ND, Wheeler H. Automated crop and weed monitoring in widely spaced cereals. Precis Agric. 2006;7(1):21-32.
  15. Meyer GE, Neto JC, Jones DD, Hindman TW. Intensified fuzzy clusters for classifying plant, soil, and residue regions of interest from color images. Comput Electron Agric. 2004;42(3):161-180.
  16. Zheng L, Zhang J, Wang Q. Mean-shift-based color segmentation of images containing green vegetation. Comput Electron Agric. 2009;65(1):93-98.
  17. Guo W, Rage UK, Ninomiya S. Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model. Comput Electron Agric. 2013;96:58-66.
  18. Peer P, Kovac J, Solina F. Human skin colour clustering for face detection. EUROCON 2003: Computer as a Tool. 2003;IEEE Region 8:144-148.
  19. Zhao J, Wu H, Chen L. Road surface state recognition based on SVM optimization and image segmentation processing. J Adv Transp. 2017;2017.
  20. Wali SB, Hannan MA, Hussain A, Samad SA. An automatic traffic sign detection and recognition system based on colour segmentation, shape matching, and SVM. Math Probl Eng. 2015;2015.
  21. Bai X, Wang W. Principal pixel analysis and SVM for automatic image segmentation. Neural Comput Appl. 2016;27(1):45-58.
  22. Najman L, Talbot H. Introduction to mathematical morphology. Mathematical Morphology: From Theory to Applications. 2010;pp. 3-34.
  23. Chang C, Lin C. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2013;2:1-39.
  24. Wu TF, Lin CJ, Weng RC. Probability estimates for multi-class classification by pairwise coupling. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2010;2(2):2301-2311.
  25. Lapin M, Hein M, Schiele B. Learning using privileged information: SVM+ and weighted SVM. Neural Netw. 2014;53:95-108. pmid:24576747
  26. Meyer GE, Neto JC. Verification of color vegetation indices for automated crop imaging applications. Comput Electron Agric. 2008;63(2):282-293. https://doi.org/10.1016/j.compag.2008.03.009