Retraction
After this article [1] was published, PLOS received information indicating that the listed corresponding author, YW, was included in the author list of [1] without their knowledge or consent, that YW was not involved in the research reported in [1], and that the email address listed in [1] for YW does not belong to them.
YW requested the article’s retraction.
PLOS have been unable to verify the contact emails provided for authors YZ or GZ and have not received responses regarding this matter from YZ, GZ, or the institution where the research was conducted.
In light of the above concerns, and since PLOS has not received the information needed to verify the article’s reliability, the PLOS One Editors retract this article.
At the time of retraction, [1] was republished to remove YW from the author byline and to update CX’s affiliation.
CX did not agree with the retraction. YZ and GZ either did not respond directly or could not be reached.
1 Dec 2025: The PLOS One Editors (2025) Retraction: Machine vision model for drip leakage detection of pipeline. PLOS ONE 20(12): e0337862. https://doi.org/10.1371/journal.pone.0337862
Abstract
The prevailing trend in industrial equipment development is integration, with pipelines serving as the lifeline connecting system components. Given the often harsh conditions of these industrial equipment pipelines, leakage is a common occurrence that can disrupt normal operations and, in severe cases, lead to safety accidents. Early detection of even minor drips at the onset of leakage can enable timely maintenance measures, preventing more significant leaks and halting the escalation of pipeline failures. In light of this, our study investigates a method for monitoring pipe drips in industrial equipment using machine vision technology. We propose a machine vision model specifically designed for pipe drip detection, aiming to facilitate monitoring of pipe system drips. The system is designed to collect images of the droplet side cross-section with a charge-coupled device (CCD) industrial camera, aided by a computer image processing system that analyzes and processes the collected images. Image enhancement technology is applied to improve the visibility of the image, and image filtering technology is applied to remove noise. With the help of image segmentation technology, the target droplet is identified and segmented. Morphological reconstruction and region-filling techniques are used to remove shooting artifacts in the side cross-section image, such as hollows, reflections, and irregular droplet edges, to improve the quality of the extracted droplet edge. A mathematical model is established for the boundary position points extracted from the droplet side cross-section image, and the fitted droplet image is then drawn. The droplet volume is obtained by calculating the volume of the corresponding body of revolution. The two-dimensional image of the target droplet is obtained dynamically through camera capture technology. 
The droplet boundary extraction algorithm is proposed, and the three-dimensional model of the target droplet is established, so the volume calculation problem of the droplet is solved, which provides a way of thinking for drip leakage detection of the pipeline.
Citation: Xiao C, Zhou Y, Zhang G (2025) RETRACTED: Machine vision model for drip leakage detection of pipeline. PLoS One 20(1): e0316951. https://doi.org/10.1371/journal.pone.0316951
Editor: Sameer Sheshrao Gajghate, GH Raisoni College of Engineering and Management Pune, INDIA
Received: August 8, 2024; Accepted: December 18, 2024; Published: January 16, 2025
Copyright: © 2025 Xiao et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The code and test images are available on GitHub (https://github.com/wyw-design/drip-leakage-detection-code.git).
Funding: This research was funded by the Natural Science Foundation of Hubei Province of China under Grant No. 2022CFB435, and was also supported by the Scientific Research Plan Project of the Education Department of Hubei Province of China under Grant No. B2022452.
Competing interests: The authors have declared that no competing interests exist.
Introduction
In the wake of the burgeoning global economy, industrial equipment has witnessed significant growth. As nations vie to advance their manufacturing capabilities, high-end equipment has emerged as a testament to human scientific and technological progress. Pipelines, analogous to blood vessels in industrial setups, play a crucial role by connecting primary and auxiliary machinery. The unpredictability of pipeline "leakage" necessitates prompt and precise detection, pinpointing the leak’s location and estimating its volume. Currently, both domestically and internationally, there are no effective leak detection strategies; reliance remains on manual maintenance post-leak incidents. Due to the precision limitations of traditional instruments, minor leaks, such as drips, can be challenging to identify, potentially leading to substantial damage. These challenges have underscored the potential of machine vision technology in leak detection.
In recent years, the advent of machine vision technology has led to its gradual integration into industrial equipment applications, particularly in structural defect detection, target identification and tracking, and security monitoring. Dong et al. [1] investigated the segmentation and detection of equipment weld defects using machine vision, revealing an impressive coverage rate exceeding 80%. In the realm of target recognition and detection, Huang et al. [2] utilized computer vision techniques for equipment target tracking, showcasing reduced error rates and superior practicality compared to traditional methods. Zhou et al. [3] integrated neural networks with genetic algorithms to accurately identify pipeline leaks. This approach combines the strengths of both neural networks and genetic algorithms, demonstrating excellent learning performance, robustness, and strong global search capabilities. Li et al. [4] employed convolutional neural networks for the recognition of pipeline leak acoustic signals. The study revealed that this method significantly surpassed AlexNet and support vector machines in terms of recognition effects, effectively addressing the challenge of recognizing small leak acoustic signals in water supply pipes and exhibiting promising application potential in the field of pipeline leak detection. Pérez-Pérez et al. [5] proposed a method for detecting and locating pipeline leaks using artificial neural network (ANN) technology, along with online measurements of pressure and flow rate. This method takes into account various leak scenarios to characterize the pressure loss and its variations in different parts of the pipeline.
Currently, the application of machine vision technology in leak detection and diagnosis remains experimental, presenting numerous challenges. Furthermore, given the challenging living conditions and intricate operational environments of industrial equipment pipelines, there are limitations in both the collection of extensive samples and the acquisition of expert prior knowledge.
This paper primarily investigates a method of ensuring accurate calculation of the volume of each drop of solution, utilizing the highly precise technique of liquid drop measurement. For the measurement of droplet volume, this paper analyzes the boundary data of the droplet side section graph according to its characteristics. Through the combination of machine vision and image processing, a mathematical model is established to analyze and measure the droplet volume of each drop of solution, ultimately improving measurement quality and accuracy.
Method
In the pipeline drip leakage detection system, a machine learning method is employed to capture the droplet target map for research purposes. To draw an ideal approximate droplet image and obtain a more accurate droplet volume, it is necessary to conduct a mathematical analysis of the approximate droplet side section image and then establish a corresponding mathematical analysis model [6]. Through the analysis of the boundary position points found in the approximate droplet image, these discrete points are first divided into sections, which requires finding the boundary points between sections. After the segmented regions are determined, the patterns of these boundary position points must be found, and the best mathematical expression for each region can be determined using the least squares method and multiple linear regression. Once the mathematical expression of the boundary is available, the best approximate image of the droplet can be drawn, and the volume of the droplet can then be obtained more accurately.
Least square method
The least squares method is the most common mathematical principle in mathematical modeling [7]; it essentially minimizes the error vector in the sense of the norm (i.e., vector length). Let

e = (e1, e2, …, en)ᵀ

be the error vector, and find the parameter θ that minimizes the sum of squares of the errors, that is,

min ‖e‖² = Σ ei².

Given the empirical formula y = f(x, θ) (here, x and θ are vectors), a series of observations with errors

(xi, yi), i = 1, 2, …, n

are required to determine the parameters θ. Combined with the principle of least squares, this reduces to the formula

min ‖Y − f(X, θ)‖².

Here, Y represents the column vector formed by the data of variable y, X represents the matrix formed by the data of variable x (possibly multivariable), and f(X, θ) is the vector-valued function obtained by substituting each group of data of X into the function f.
The least squares method can be employed to analyze the boundary points on a droplet contour, to identify the most suitable function model for describing the profile of the droplet.
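As an illustration of the principle above (this is a sketch, not the authors' released code; the sample points and coefficients are hypothetical), a least squares fit of boundary points to a polynomial model can be written with NumPy:

```python
import numpy as np

def least_squares_fit(x, y, degree):
    """Fit y ~ c0 + c1*x + ... + cd*x^d by minimizing ||y - A c||^2."""
    # Design matrix A: one column per monomial term of the model.
    A = np.vander(x, degree + 1, increasing=True)
    # Least squares solution of the overdetermined system A c = y.
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Hypothetical boundary points sampled from a known curve plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 - 1.5 * x + 0.8 * x**2 + rng.normal(scale=0.01, size=x.size)

c = least_squares_fit(x, y, degree=2)
residual = y - np.vander(x, 3, increasing=True) @ c
print(c, float(np.sqrt(np.mean(residual**2))))
```

The recovered coefficients approximate the generating curve, and the residual norm is on the order of the injected noise.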
Multiple linear regression
The multiple linear regression model is given as

y = β0 + β1x1 + β2x2 + … + βpxp + ε,

where ε ~ N(0, σ²). Let β = (β0, β1, …, βp)ᵀ; the model can then be expressed in matrix form as

Y = Xβ + ε.

Suppose n sets of observations of y and x1, x2, …, xp have been obtained (n is generally required to be much larger than p; when the constant term is included in the regression model, this is equivalent to x0 taking the constant 1), and an estimate of β is required. Let

Y = (y1, y2, …, yn)ᵀ and X = [1 x11 … x1p; 1 x21 … x2p; … ; 1 xn1 … xnp]

be the n×1 and n×(p+1) matrices of the corresponding n sets of observations. The least squares estimates of β and σ² are

β̂ = (XᵀX)⁻¹XᵀY, σ̂² = ‖Y − Xβ̂‖² / (n − p − 1).

For the n groups of observed values of y and x1, x2, …, xp, the estimated regression coefficients can be obtained through the above equation [8]. To judge the reliability of the model, an analysis of the residuals ε̂ = Y − Xβ̂ should be carried out to test whether the model is valid.
The contour of the droplet can be mathematically represented by employing the multivariate linear regression analysis method, which is based on the principle of least squares curve fitting. This representation is derived from the contour points of the droplet.
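The normal-equation estimate β̂ = (XᵀX)⁻¹XᵀY can be sketched directly in NumPy (the data here are synthetic and the function name is our own, not from the paper):

```python
import numpy as np

def ols_estimate(X_raw, y):
    """OLS estimate beta_hat = (X^T X)^{-1} X^T Y, with an intercept column."""
    n = X_raw.shape[0]
    X = np.hstack([np.ones((n, 1)), X_raw])       # x0 = 1 for the constant term
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # solve the normal equations
    resid = y - X @ beta_hat
    p = X_raw.shape[1]
    sigma2_hat = float(resid @ resid) / (n - p - 1)  # unbiased variance estimate
    return beta_hat, sigma2_hat

# Synthetic data: y = 1 + 2*x1 - 0.5*x2 + small noise.
rng = np.random.default_rng(1)
X_raw = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X_raw[:, 0] - 0.5 * X_raw[:, 1] + rng.normal(scale=0.05, size=200)

beta_hat, s2 = ols_estimate(X_raw, y)
print(beta_hat, s2)
```

With enough observations the estimates converge to the true coefficients, and σ̂² approaches the noise variance.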
Machine learning method
This study utilizes established machine learning methodologies for the identification of moving objects, with a particular focus on employing YOLOv5 for the recognition of target droplet images. YOLOv5 is a widely acknowledged and efficient target detection technique that boasts extensive applications in the realm of image recognition. The structural principle of this algorithm is illustrated in Fig 1.
The YOLOv5 network architecture is roughly divided into four parts, namely Input, Backbone, Neck, and Output. The specific components are shown as follows:
- Input: Load the image set to be detected, and conduct a series of pre-processing operations on the input image such as Mosaic data enhancement, scaling, color transformation, etc., to calculate the most appropriate anchor frame value of the image.
- Backbone: Slices images using the Focus module, concentrates image feature information into the channel space, and expands the channel dimension. At the same time, CSPNet is used to fuse the gradient change into the feature map, and the image features under different layers are obtained.
- Neck: PANet is used to combine image features and transfer strong low-level positioning features to the high-level feature map to improve the propagation of low-level features.
- Output: Makes predictions about image features, and generates bounding boxes and prediction categories.
Experiment
This paper introduces a visual model for drip leakage detection of pipelines, as depicted in Fig 2, designed to facilitate the detection and analysis of pipeline leakage. Given that water is the predominant liquid utilized in pipelines, this study employs it as the test medium to elucidate the experimental methodology. The approach can be extrapolated to other liquids by referencing the method delineated herein. The model serves two primary functions: firstly, it employs a video acquisition terminal to detect pipeline drip and capture images of the droplet target; secondly, it extracts drip characteristics such as the geometric feature parameter information of these droplets. This facilitates the tracking of drip status, supplying reference data for evaluating drip volume, and ultimately, enabling the estimation of said volume.
Test bench composition
The schematic representation of the experimental platform’s principle is depicted in Fig 3. This device simulates the cooling water circulation found on industrial equipment, mirroring the characteristics of a real-world industrial cooling system. Elements such as flowmeters, pressure sensors, temperature sensors, and standard valves installed along the line are not included in Fig 2 for clarity. Once all valves are closed, water begins to circulate in the system. A CCD camera captures images of droplets emerging from the leak valve, with subsequent data processing conducted by a computer. Droplets falling are collected by a droplet collector for statistical analysis.
Selection of experimental instruments
The primary objective of this experiment is to gather images and videos about pipeline leaks, employing visual methodologies for leak detection. The camera serves as a pivotal instrument for image acquisition, playing an instrumental role throughout the experiment. It directly impacts the quality of the images and profoundly influences the outcomes of subsequent research endeavors. By the experimental specifications, the main parameters of the CCD camera are delineated in Table 1.
Experiment setup
The fastest frame rate of the CCD industrial camera used in this system is 30 frames per second at a photo resolution of 640×480. Even so, this acquisition speed cannot capture a clear and complete image of the droplet at the instant of dripping. Consequently, the YOLOv5 algorithm is employed to autonomously capture critical images of the droplet fall. This image is subsequently subtracted from the background image to yield a difference map, which serves as an approximation of the droplet image and facilitates the analysis of the droplet’s volume [9]. In this paper, the experimental results for a CCD camera placed 10 cm from the leakage valve port are presented. The critical diagram and background diagram of the droplet drop are shown in Fig 4.
Image processing and object extraction
The photos taken by the CCD camera are all RGB true-color images. First, it is necessary to convert the RGB color images into grayscale images [10]. Since the images captured by the CCD camera are objectively affected by various factors, and the resulting noise hinders the interpretation of the received information, these images must be preprocessed. This system adopts an image denoising method based on the wavelet transform [11, 12]. After gray-level enhancement, the system uses a threshold selection algorithm based on the maximum between-class variance (Otsu's method) for image segmentation. When the CCD camera photographs a liquid drop, light spots appear in the photo because of reflected light. To remove spots below a certain area threshold, eight-connected binary open operations [13–16] are applied. The morphological processing consists of dilation and erosion of the target binary image.
Since the target object processed by this system is the droplet image, the difference between the critical graph and the background graph is required for the approximate analysis. The approximate target droplet binary graph is obtained as the difference between the solid critical binary graph and the solid background binary graph; therefore, both the critical and background binary graphs must first be made solid, again using the open operation. The algorithm for critical graph processing is shown in Fig 5, and the algorithm for background graph processing is shown in Fig 6.
Because residual disconnected spots may remain after subtraction, the approximate droplet binary filling diagram obtained by the difference method can adversely affect image analysis, and it is necessary to further remove these image spots. The approximate droplet side section binary filling diagram is shown in Fig 7.
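The opening, hole-filling, difference, and small-spot removal steps described above can be sketched with SciPy's morphology routines. This is a minimal illustration on synthetic binary masks, not the authors' pipeline; the function name, image layout, and `min_area` threshold are our own assumptions:

```python
import numpy as np
from scipy import ndimage

def droplet_difference(critical, background, min_area=20):
    """Approximate droplet mask = solid critical mask minus solid background
    mask, with small residual subtraction spots removed."""
    # Make both masks solid: opening removes glare specks, hole filling
    # closes voids caused by reflections.
    crit = ndimage.binary_fill_holes(ndimage.binary_opening(critical))
    back = ndimage.binary_fill_holes(ndimage.binary_opening(background))
    diff = crit & ~back
    # Drop connected components smaller than min_area pixels.
    labels, n = ndimage.label(diff)
    sizes = ndimage.sum(diff, labels, range(1, n + 1))
    keep_ids = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return np.isin(labels, keep_ids)

# Hypothetical 64x64 scene: a background strip plus a hanging droplet region.
bg = np.zeros((64, 64), bool)
bg[0:10, :] = True                     # static background structure
crit = bg.copy()
crit[10:40, 28:36] = True              # droplet region in the critical frame
crit[20, 31] = False                   # a hole caused by a reflection
mask = droplet_difference(crit, bg)
print(int(mask.sum()))
```

The returned mask keeps only the solid droplet region; the reflection hole is filled and the background strip is subtracted away.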
Analysis
As can be seen from the approximate droplet binary filling diagram obtained by the difference method, the overall effect of the approximate image is not ideal: the boundary contour on the right side is distorted. Under the combined effect of internal tension and gravity, the left and right sides of the droplet cross-section boundary contour are symmetrical, so the better-formed side of the droplet can be analyzed (the left side is selected in this paper). After drawing the symmetric diagram, the approximate droplet cross-section image can be trimmed [17, 18].
Algorithm for image contour fitting
By fitting polynomials of degree one through six to the curve, preliminary fitting results are obtained, and an error analysis is then carried out on the corresponding fitted line segments. Ultimately, the best scheme is determined. The fitting results for the polynomial expressions of degree one to six are shown in Fig 8.
The steps to obtain the curve expression for the left side of the droplet image contour by polynomial fitting are to carry out the polynomial fit first and then perform residual analysis. Taking the overall situation into account, the best scheme is chosen according to the size of the error. It is worth noting that here the residual ε is defined as the difference between the true value y and the fitted estimate [19]. To judge the fitted image, it is also necessary to compute the mean value and the mean square error of the residual ε.
Finally, the residuals, their mean value, and the mean square error are drawn together. The residual analysis results for the function expression on the left side of the concave contour at the top of the droplet image are shown with the fitting results in Fig 9.
Here, the red circles represent the residuals, the green line represents the mean value of the residuals, and the blue line represents the mean square error.
Observation and analysis show that the mean residual values of all six residual graphs are approximately 0, indicating that each fitted curve occupies, on average, the same position as the original contour. According to the mean square error of each residual analysis graph, the residuals of the first- and second-degree fits deviate greatly from 0, and their errors are clearly too large, so those results are not desirable. The errors of the third- to sixth-degree fits are relatively small, with values all below 1; however, the mean square error of the residuals of the third-degree fit is larger than that of the fourth. Although the fifth- and sixth-degree fits improve slightly on the fourth, their evaluation is considerably more complicated, so based on the comprehensive residual analysis the fourth-degree (quartic) fit is chosen as the best scheme [20–22].
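The degree-selection procedure above can be sketched as follows: fit each degree, compute the residual root-mean-square error, and prefer the lowest degree at which the error is already small. The contour samples here are synthetic (generated from a quartic, mirroring the paper's eventual choice), not the paper's data:

```python
import numpy as np

# Hypothetical left-contour samples from a quartic curve plus noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0, 80)
y = x**4 - 3.0 * x**3 + 2.0 * x**2 + rng.normal(scale=0.02, size=x.size)

rms = {}
for deg in range(1, 7):
    coeffs = np.polyfit(x, y, deg)           # least squares polynomial fit
    resid = y - np.polyval(coeffs, x)
    rms[deg] = float(np.sqrt(np.mean(resid**2)))  # mean square error of residuals
    print(deg, round(rms[deg], 4))
```

Residual means are near zero for every degree (a property of least squares), while the RMS error drops sharply once the degree matches the underlying curve: degrees 1 and 2 leave large errors, and degree 4 already captures the quartic profile, so higher degrees add complexity without meaningful gain.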
Image contour reconstruction
According to the algorithm analysis of image contour fitting in the above section, the three mathematical expressions for approximating the left contour of the droplet image are as follows.
According to the expression of the left contour function of the approximate droplet image and the symmetry of the droplet image, the contour of the approximate droplet profile image can be fitted, as shown in Fig 10.
Reliability analysis of fitting images
From the above analysis, it can be seen that the piecewise mathematical expression equation of the droplet side section profile is obtained using mathematical polynomial fitting, and the complete fitting status is viewed by drawing the fitting droplet side section profile image. The fitted profile of the droplet side section is compared with the original one and the approximate binary filling profile of the droplet side section to provide a basis for the reliability of the fitted data [23–25].
The fitted profile of the droplet side section is compared with that of the original droplet side section, and the comparison effect is shown in Fig 11.
The comparison between the fitted droplet side cross section profile and the approximate droplet side cross section binary filling image is shown in Fig 12.
From the analysis of the above comparison diagrams: on the one hand, compared with the original diagram of the droplet side section, the fitted profile coincides closely with the boundary of the droplet side section. In the critical comparison diagram, except for a few stray bright spots, the fitted profile contains the droplet target under investigation, and these errors can be neglected; the fitted contour overlaps the boundary of the remaining droplet very well. On the other hand, compared with the binary filling diagram of the approximate droplet side section, the fit corrects the errors introduced into the droplet side section diagram by the morphological image processing, so the reliability of this method is verified [26].
According to the preceding analysis, the fitted approximate droplet side section profile is ideal and reliable. To make it more intuitive, it is necessary to fill the fitted contour so as to make it a filled diagram, which is important for the subsequent analysis and calculation [27]. The filled pattern of the fitted approximate droplet side section is shown in Fig 13.
Drawing of three-dimensional graphics
According to the results of the previous model, the three coordinate axes can be redefined: the previous x axis becomes the z axis, and the previous y axis becomes the x axis. The fitted contour x = f(z) is then rotated around the z axis, and the mathematical equation of the body of revolution is obtained:

x² + y² = [f(z)]².

Through the above mathematical expression, a three-dimensional image approximating the liquid drop can be drawn. The resulting image is shown in Fig 14.
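Rotating a fitted profile about the z axis to obtain the surface x² + y² = f(z)² can be sketched with a parametric mesh. The profile function here is a hypothetical stand-in for the paper's fitted contour:

```python
import numpy as np

# Stand-in profile r = f(z) for the fitted droplet contour.
z = np.linspace(0.0, 1.0, 40)
theta = np.linspace(0.0, 2.0 * np.pi, 60)
Z, T = np.meshgrid(z, theta)           # parameter grid (height, rotation angle)
R = 0.5 * np.sin(np.pi * Z)            # radius at each height
X = R * np.cos(T)                      # points on the surface of revolution
Y = R * np.sin(T)
# Every point satisfies x^2 + y^2 = f(z)^2 by construction.
print(X.shape)
```

The (X, Y, Z) arrays can be passed directly to a 3D surface plotter (e.g. matplotlib's `plot_surface`) to render the approximate droplet.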
Pixel volume calculation
To obtain the approximate droplet volume, considering the mechanical properties of the droplet itself, the approximate droplet can be regarded as a rotating body. According to the mathematical expression of the approximate droplet boundary contour obtained by the previous analysis, coupled with the quadrature equation of the rotating body, the approximate droplet volume can be obtained [28].
Let the contour's mathematical expression be Q(x, y) = 0 for n ≤ x ≤ m, i.e., y = y(x) on this interval, and let the increment dx approach 0. The formula for calculating the volume of the body of revolution (Formula 11) is then

V = π ∫ₙᵐ y²(x) dx.

The droplet can be regarded as a body of revolution composed of segments. By integrating over each segment and summing the partial volumes, the volume of the droplet can be obtained [29]. As a result, the approximate droplet volume is v = 846187.98 (cubic pixel units).
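Numerically, the revolution integral can be evaluated from the sampled contour with the trapezoidal rule; a quick sanity check on a cylinder (where V = πr²h is known exactly) is shown below. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def revolution_volume(x, y):
    """V = pi * integral of y(x)^2 dx, via the trapezoidal rule."""
    y2 = y**2
    return float(np.pi * np.sum(0.5 * (y2[1:] + y2[:-1]) * np.diff(x)))

# Sanity check: cylinder of radius 2 and height 5, so V = pi * 4 * 5.
x = np.linspace(0.0, 5.0, 1001)
y = np.full_like(x, 2.0)
v = revolution_volume(x, y)
print(v)
```

For the droplet, `x` and `y` would be the pixel coordinates of the fitted contour segments, and the per-segment volumes sum to the total in cubic pixel units.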
Estimated volume calculation
Since the CCD camera is placed 10 cm from the port, the object distance is u = 100 mm. The CCD camera lens is adjusted to capture an ideal image, and the adjusted focal length is f = 8.3 mm. According to the CCD camera manual, the pixel size is h = 3.2 μm = 3.2×10⁻⁶ m. The imaging principle of a convex lens is shown in Fig 15. Since the object distance is much larger than the focal length, the image distance u´ can be approximated by the focal length f [30–32]. Because corresponding sides of similar triangles are proportional, the actual size represented by one pixel is

L = h·u / f.

The parameter k represents the actual cubic-unit adjustment coefficient, and Formula 13 expresses its value:

k = L³ = (h·u / f)³.

The calculated average volume of a single drop is then V = k·v.
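Plugging in the stated camera parameters, the conversion from cubic pixel units to physical volume works out as follows (a sketch under the stated assumption that the image distance ≈ f; the variable names are our own):

```python
# Pixel-to-physical conversion: one pixel spans h*u/f metres at the object
# plane, so one cubic pixel corresponds to k = (h*u/f)^3 cubic metres.
h = 3.2e-6            # pixel size, m
u = 100e-3            # object distance, m
f = 8.3e-3            # focal length, m
v_pixels = 846187.98  # droplet volume in cubic pixel units (from the paper)

k = (h * u / f) ** 3  # cubic-unit adjustment coefficient
v_m3 = k * v_pixels   # physical volume, m^3
v_ul = v_m3 * 1e9     # microlitres (1 m^3 = 1e9 uL)
print(round(v_ul, 1))
```

The result lands in the tens of microlitres, which is the expected order of magnitude for a single water drop.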
Discussion
Test sample
The drip monitoring model established is verified by using the drip data obtained from the drip test bench described in Fig 2 as experimental materials. The relevant information of the drip data selected is shown in Table 2. The object distance for experiments I through III is maintained at 10cm, while it is increased to 15cm for experiments IV through VI. In experiments I and IV, a total of 100 drops are released with an approximate drip frequency of 10 seconds per drop. For experiments II and V, the number of drops is increased to 150, with the drip frequency reduced to approximately 5 seconds per drop. Finally, in experiments III and VI, the number of drops reaches 200, with a drip frequency of about 3 seconds per drop. The droplets resulting from each experiment are collected and subsequently weighed. Taking into account the density of water, the total volume of the droplet is equivalent to its total weight when calculated using appropriate units.
Volume verification
Utilizing the aforementioned analysis method, experiments were executed on the test samples, and the computed values were compared with the experimental test values. Formula 14 gives the calculation of the droplet accuracy rate, where V represents the calculated average volume of a single drop and V´ represents the collected average volume of a single drop:

accuracy = (1 − |V − V´| / V´) × 100%.
The analytical outcomes are presented in Table 3.
Table 3 illustrates that the accuracy of the droplet model is marginally superior in Experiment I compared to Experiment II, and similarly, the accuracy in Experiment II surpasses that in Experiment III. This pattern is consistent across Experiments IV, V, and VI. These findings suggest that at a constant object distance, a lower droplet falling frequency results in a slightly improved effect of the droplet model. Furthermore, the accuracy of the droplet model in Experiment I is marginally better than in Experiment IV, the accuracy in Experiment II is marginally better than in Experiment V, and the accuracy in Experiment III is marginally better than in Experiment VI. This implies that a smaller object distance yields a slightly improved effect on the droplet model. Despite variations in data across these six experiments, the deviations in calculated droplet volume are relatively minor, indicating the method’s feasibility.
Error cause
Detection method error. The image contrast method for droplet size detection exhibits certain inaccuracies, and the fitted approximate droplet diagrams also contain errors. These errors can be mitigated by obtaining more precise droplet critical diagrams and background diagrams, or by directly measuring the entire droplet within the sample.
Shooting error. The recognition efficacy of droplets is markedly influenced by the camera’s parameters, including its pixel count, resolution, and frame rate. By employing a camera with superior performance metrics for image capture, one can mitigate this error, leading to a reduction in the estimated volume discrepancy of leaked droplets.
Adjustment coefficient calculation error. The error arises from a multitude of factors, including non-standardized testing procedures, suboptimal performance of the shooting apparatus, and calculations based on approximate parameters. To mitigate this error, it is imperative to conduct experiments on enhanced experimental platforms and employ superior equipment for the investigations.
Conclusion
This paper introduces a machine vision-based model for visually monitoring drips, leveraging surveillance cameras to facilitate detection of pipeline leaks. The model extracts pertinent feature parameters from the leakage and estimates its volume. An experimental platform was developed using industrial equipment cooling water pipeline system leaks as the research focus, yielding diverse leak data. Data were collected from various object distances and leak flow rates to validate the model.
In experimental settings, this scheme calculates the drip volume with an accuracy exceeding 95% at a distance of 10cm. The proposed drip visual monitoring model effectively monitors areas near pipeline leaks with low drip frequency, thereby demonstrating significant engineering application value. In practical scenarios, it can be employed to monitor locations on critical pipelines that are susceptible to drips.
References
- 1. Shaohua Dong, Xuan Sun, Shuyi Xie, et al. Automatic Recognition Technology for Digital Image Defects in Pipeline Welding Seams [J]. Natural Gas Industry, 2019, 39(1): 113–117.
- 2. Zhihui Huang, Jin Zhan, Huimin Zhao, et al. A Brief Analysis of Visual Object Tracking Algorithms Based on Deep Learning [J]. Journal of Guangdong University of Technology, 2019, 40(3): 28–36.
- 3. Zhou Shoujun, O’Neill Z, et al. A review of leakage detection methods for district heating networks [J]. Applied Thermal Engineering, 2018, 137: 567–574.
- 4. Li Zhe, Feng Hao, Liu Xin, et al. Acoustic Signal Recognition of Small Leakage in Water Pipelines Based on CNN [J]. Noise and Vibration Control, 2021, 41(04): 66–72.
- 5. Pérez-Pérez EJ, López-Estrada FR, Valencia-Palomo G, et al. Leak diagnosis in pipelines using a combined artificial neural network approach [J]. Control Engineering Practice, 2021, 107: 104677.
- 6. Dehais J, Anthimopoulos M, Shevchik S, Mougiakakou S. Two-view 3D reconstruction for food volume estimation. IEEE Trans. Multimed. 2016, 19, 1090–1099.
- 7. Sutton M. Experimental Measurements Using Digital Image Correlation Methods: Brief Background and Perspective on Future Developments. ASME J. Eng. Mater. Technol. 2023, 145, 014701.
- 8. Zhu K, Li C, Pan B. Rapid and Repeatable Fluorescent Speckle Pattern Fabrication Using a Handheld Inkjet Printer. Exp. Mech. 2022, 62, 627–637.
- 9. Zhu Z, Zhu D, Ge M. The Spatial Variation Mechanism of Size, Velocity, and the Landing Angle of Throughfall Droplets under Maize Canopy. Water. 2021, 13, 2083.
- 10. Martínez-González A, Moreno-Hernández D. Horizontally and vertically sensitive schlieren and shadowgraph system. Opt.Lett. 2022, 47, 3596–3599. pmid:35838739
- 11. Wu Z, Guo W, Pan B, Kemao Q, Zhang Q. A DIC-assisted fringe projection profilometry for high-speed 3D shape, displacement and deformation measurement of textured surfaces. Opt. Lasers Eng. 2021, 142, 106614.
- 12. Zhao Y, Huang B, Song H. A robust adaptive spatial and temporal image fusion model for complex land surface changes. Remote Sens. Environ. 2018, 208, 42–62.
- 13. Bowd C, Weinreb RN, Balasubramanian M, Lee I, Jang G, Yousefi S, et al. Glaucomatous patterns in Frequency Doubling Technology (FDT) perimetry data identified by unsupervised machine learning classifiers. PLoS One. 2014. p. e85941. pmid:24497932
- 14. Wen JC, Lee CS, Keane PA, Xiao S, Rokem AS, Chen PP, et al. Forecasting future Humphrey Visual Fields using deep learning. PloS one. 2019. p. e0214875. pmid:30951547
- 15. Waldchen J, Rzanny M, Seeland M, Mader P. Automated plant species identification-Trends and future directions. PLoS Comput Biol. 2018; 14(4):e1005993. pmid:29621236
- 16. Beck MA, Liu C-Y, Bidinosti CP, Henry CJ, Godee CM, Ajmani M. An embedded system for the automated generation of labeled plant images to enable machine learning applications in agriculture. PLoS one. 2020. p. e0243923. pmid:33332382
- 17. Zhang C, Liu C, Xu Z. High-Accuracy Three-Dimensional Deformation Measurement System Based on Fringe Projection and Speckle Correlation. Sensors. 2023, 23, 680. pmid:36679475
- 18. Michael B, Lukas V, Marco F, Jürgen F. Experimental Analysis and Optimisation of a Novel Laser-Sintering Process for Additive Manufacturing of Continuous Carbon Fibre-Reinforced Polymer Parts. Appl. Sci. 2023, 13, 5351.
- 19. Wu F, Zhu S, Ye W. A Single Image 3D Reconstruction Method Based on a Novel Monocular Vision System. Sensors. 2020, 20, 7045. pmid:33317002
- 20. Lian Y, Wang A, Zeng B, Yang H, Li J, Peng S, et al. Identification of male and female pupal characteristics of Zeugodacus cucurbitae (Coquillett) via machine vision. PLoS one. 2022. p. e0264227. pmid:35324918
- 21. Koyama K, Tanaka M, Cho B-H, Yoshikawa Y, Koseki S. Predicting sensory evaluation of spinach freshness using machine learning model and digital images. PLoS one. 2021. p. e0248769. pmid:33739969
- 22. Yuan P, Li C, Tang P, Yuan B, Yin Y. Machine vision model for detection of foreign substances at the bottom of empty large volume parenteral. PLoS one. 2024. p. e0298108. pmid:38669295
- 23. Lei Y, Tian T, Jiang B, Qi F, Jia F, Qu Q. Research and Application of the Obstacle Avoidance System for High-Speed Railway Tunnel Lining Inspection Train Based on Integrated 3D LiDAR and 2D Camera Machine Vision. Appl. Sci. 2023, 13, 7689.
- 24. Rajevenceltha J, Gaidhane V. An efficient approach for no-reference image quality assessment based on statistical texture and structural features. Eng. Sci. Technol. Int. J. 2022, 30, 101039.
- 25. Ryu J. Adaptive Feature Fusion and Kernel-Based Regression Modeling to Improve Blind Image Quality Assessment. Appl. Sci. 2023, 13, 7522.
- 26. Ribeiro R, Trifan A, Neves A. Blind Image Quality Assessment with Deep Learning: A Replicability Study and Its Repro-ducibility in Lifelogging. Appl. Sci. 2023, 13, 59.
- 27. Merras M, Saaidi A, El Akkad N, Satori K. Multi-view 3D reconstruction and modeling of the unknown 3D scenes using genetic algorithms. Soft Comput. 2017, 22, 6271–6289.
- 28. Felipe-Sesé L, Molina-Viedma J, López-Alba E, Díaz F. RGB Colour Encoding Improvement for Three-Dimensional Shapes and Displacement Measurement Using the Integration of Fringe Projection and Digital Image Correlation. Sensors. 2018, 18, 3130. pmid:30227618
- 29. Sijs R, Kooij S, Holterman H, Zande J, Bonn D. AIP Advances. 2021,11, 015315.
- 30. Ahmed M, Ahmed A. Palm tree disease detection and classification using residual network and transfer learning of inception ResNet. PLoS one. 2023. p. e0282250. pmid:36862665
- 31. Qu X, Wang J, Wang X, Hu Y, Tan T, Kang D. Fast detection of dam zone boundary based on Otsu thresholding optimized by enhanced harris hawks optimization. PLoS one. 2023. p. e0271692. pmid:36745651
- 32. Zohaib M, Ahsan M, Khan M, Iqbal J. A featureless approach for object detection and tracking in dynamic environments. PLoS one. 2024. p. e0280476.