Abstract
As a natural gas pipeline approaches the end of its service life, the integrity of the pipeline starts failing because of corrosion or cracks. These and other defects affect the normal production and operation of the pipeline. Therefore, the identification of pipeline defects is critical to ensure the normal, safe, and efficient operation of these pipelines. In this study, a combination of adaptive adjustment based on the conversion probability and a Gaussian mutation strategy was used to improve the flower pollination algorithm (FPA) and enhance the search ability of traditional flower pollination. The adaptive adjustment of the transition probability effectively balances the development and exploration abilities of the algorithm. The superiority of the improved flower pollination algorithm (IFPA) was verified on six classical benchmark functions, on which it outperformed the original algorithm. A Gaussian mutation strategy was integrated with the IFPA to optimise the initial input weights and thresholds of the extreme learning machine (ELM), improve the balance and exploration ability of the algorithm, and increase the efficiency and accuracy of pipeline defect identification. The proposed IFPA-ELM model for pipeline defect identification effectively overcomes the tendency of the FPA to converge to local optima and that of the ELM to overfit, both of which cause poor recognition accuracy. The identification rates of various pipeline defects by the IFPA-ELM algorithm are 97% and 96%, which are 34% and 13% higher, respectively, than those of the FPA and FPA-ELM. The IFPA-ELM model may be used in the intelligent diagnosis of pipeline defects to solve practical engineering problems. Additionally, the IFPA could be further optimised with respect to the time dimension, parameter settings, and general adaptation for application to complex engineering optimisation problems in various fields.
Citation: Gao Y, Luo Z, Wang Y, Luo J, Wang Q, Wang X, et al. (2023) Intelligent identification of natural gas pipeline defects based on improved pollination algorithm. PLoS ONE 18(7): e0288923. https://doi.org/10.1371/journal.pone.0288923
Editor: Salim Heddam, University 20 Aout 1955 Skikda, Algeria
Received: October 1, 2022; Accepted: July 6, 2023; Published: July 27, 2023
Copyright: © 2023 Gao et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data are all contained within the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China (41877527) and the Fundamental Science Research Project of Jiangsu Province Colleges and Universities (21KJB62008).
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Natural gas provides high-quality clean energy, and abundant reserves thereof are available, which is of great significance to environmental protection and the sustainable economic development of countries worldwide. Natural gas is one of the main energy sources in the world [1]. According to the International Energy Agency, a quarter of the total global energy demand is met by natural gas [2]. Pipelines are considered the most preferred and safe means of transporting natural gas [3]. However, the pipe body of a pipeline may develop defects because of the damage inflicted by the pipeline service environment, geological conditions, stray currents, failure of anti-corrosion coatings, or third-party damage, among other reasons [4]. These defects may cause the pipeline to leak, leading to disasters [5]. Consequently, the safety of pipelines is becoming increasingly important, particularly because they cover long distances [6].
Traditional methods for the identification of pipeline defects, such as the BP neural network, cannot identify the defect categories of natural gas pipelines efficiently and accurately owing to the long training and learning times and the many parameters required to train the neural network. The extreme learning machine (ELM), a new type of learning algorithm based on a neural network, has been widely used for classification, regression, clustering, and feature learning [7]. The ELM algorithm has been optimised for various fields and for solving a variety of problems. The ELM offers faster learning while ensuring learning accuracy, and it has superior generalisation ability [8]. However, ELMs also have shortcomings such as insufficient prediction accuracy and low convergence accuracy [9]. Although the ELM method is widely used in other defect identification fields, it has not found widespread use in the field of pipeline defect identification.
In essence, an ELM uses an algorithm based on a feedforward neural network [10]. However, it is prone to overfitting and has poor prediction accuracy. Therefore, introducing intelligent optimisation algorithms is necessary for improving the learning and prediction abilities of ELMs [11]. Existing intelligent optimisation algorithms are roughly divided into two categories: traditional and heuristic [12]. Traditional optimisation methods are designed to solve clearly structured optimisation problems. In comparison, heuristic optimisation methods can intelligently and efficiently search for the global optimum, and they can solve technical problems that cannot be solved using the traditional methods [13]. After more than two decades of development, heuristic algorithms have matured to become the mainstream algorithms for intelligent optimisation [14].
Despite their advantages, heuristic algorithms must be developed further such that they are efficient and robust, because traditional optimisation algorithms have limited ability to solve large-scale, highly nonlinear, and non-differentiable problems. The development of intelligent and efficient algorithms is required for real-world optimisation intelligence [15]. In this regard, meta-heuristic algorithms are intelligent and have many advantages, such as being powerful, efficient, flexible, and reliable [16]. Swarm intelligence and evolutionary algorithms are two very useful types of algorithms capable of intelligent optimisation [17]. The swarm intelligence algorithm develops intelligent search strategies by modelling and simulating the collective foraging behaviour of gregarious insects or animals using very few rules [18]. This method is characterised by its simplicity, efficient optimisation, and robustness. Various types of intelligent algorithms include differential evolution, whale optimisation, ant colony, particle swarm, colony foraging optimisation, frog leaping, artificial bee colony, flower pollination, firefly, cuckoo, bat, wolf swarm, fireworks, contract network protocol, and spider monkey optimisation algorithms. The evolutionary algorithm is an adaptive heuristic search algorithm based on the idea of natural selection and genetic evolution, which replaces an entire population by generating new offspring using natural operators such as crossover and mutation [19]. These algorithms include genetic, differential evolution, immune, genetic programming, evolutionary programming, evolutionary strategy, cultural gene, and multi-objective evolutionary algorithms. The advantages of the differential evolution algorithm, a global optimisation algorithm based on swarm intelligence, are its few control parameters, simplicity, and facile implementation. 
However, it readily converges to local optima, exhibits slow convergence, and is sensitive to the settings of its control parameters [20–22]. The whale optimisation algorithm has the advantages of simple operation, few control parameters, and a strong ability to escape from local optima, and it has attracted the attention of researchers in the field of swarm intelligence optimisation. However, this method is problematic because of its low convergence speed and accuracy [23].
The flower pollination algorithm (FPA) is a meta-heuristic swarm intelligence optimisation algorithm proposed in 2012 by Yang [24], a scholar at the University of Cambridge, UK. It simulates the processes of cross-pollination and self-pollination of flowering plants. Compared with other algorithms, the FPA has the advantages of fewer hyperparameters, a simple structure, and facile implementation. Therefore, the algorithm has been widely used for multi-objective and function optimisation as well as to solve prediction problems [25]. The FPA also has shortcomings, such as its tendency to converge to local optima, slow convergence during the late stage of iteration, and poor optimisation accuracy [26].
In this study, we propose a model framework for identifying defects in pipelines. We propose an improved flower pollination algorithm (IFPA), which combines an adaptive method based on the transformation probability with a Gaussian mutation strategy to improve the performance of the FPA. Its optimisation performance is compared with that of algorithms such as the FPA and particle swarm optimisation (PSO). Furthermore, the IFPA can be used to optimise the extreme learning machine (ELM). Compared with the ELM and FPA-ELM, the IFPA-ELM algorithm can more accurately identify the different states of the pipeline, such as the normal-state, pit-defect, and crack-defect signals.
2. Basic theories
Quality problems in long-haul natural gas pipelines in service, together with the influence of the external environment, produce various types of defects. However, nondestructive testing of a pipeline can only preliminarily judge whether the pipeline has defects from the original signal itself; it cannot quantitatively identify characteristics such as the defect type, degree, size, and shape. Although various algorithms can provide new methods for pipeline defect detection, insufficient data and unbalanced samples are common problems that affect the generalisation ability and robustness of pipeline defect identification models. To solve these problems, the improved pollination algorithm combined with the extreme learning machine model was used in this study to improve the identification accuracy of pipeline defects. In this section, the FPA, the adaptive adjustment method, the Gaussian mutation strategy, and the IFPA are introduced.
2.1. FPA
The FPA was first proposed by Yang in 2012 as a new meta-heuristic swarm intelligence algorithm to simulate the flower pollination process of flowering plants in nature [24]. Flower pollination algorithms have been widely used for function optimisation, feature selection, model prediction, signal and graph processing, economic scheduling and control, and path planning [27]. The pollination of flowering plants occurs via cross-pollination and self-pollination; in the FPA, the simulated cross-pollination performs the function of global optimisation, and the simulated self-pollination that of local optimisation. Using the transition probability P, the global and local search can be converted into each other to solve the balance problem [28]. The specific rules are as follows:
- Cross-pollination is a global pollination process in which pollen is spread by pollen dispersers over a long distance in Levy flight, ensuring the most suitable pollination and reproduction.
- Self-pollination is a local pollination process.
- The constancy of a flower is the reproductive probability, which is proportional to the similarity of the two flowers involved.
- Global and local pollination are switched with the transition probability P. During flower pollination, the process is more inclined to self-pollination owing to the influence of factors such as location and wind; therefore, the transition probability P has a larger value for local pollination.
In the FPA, global and local pollination are the core of individual flower evolution. Establishing a mathematical model of its pollination process can lay a theoretical foundation for solving a series of complex optimisation problems. A flowchart of the algorithm is shown in Fig 1.
The global pollination process of the FPA is expressed as follows [29]:
x_i^{t+1} = x_i^t + γL(λ)(g* − x_i^t)    (1)
In Eq (1), x_i^{t+1} and x_i^t are the solutions of the pollen in the (t+1)th and tth generations, respectively; g* is the global best position in the current population; γ is the scaling factor that controls the step size; and L is the random search step size of the Levy flight [30], calculated as follows:
L ~ [λΓ(λ)sin(πλ/2)/π]·(1/s^{1+λ}),  s ≫ s₀ > 0    (2)
where Γ(λ) is the standard gamma function, considering λ = 3/2, and s is the step size generated by the Mantegna algorithm.
In self-pollination, the equation for the updated position x_i^{t+1} is as follows:
x_i^{t+1} = x_i^t + ε(x_j^t − x_k^t)    (3)
where x_j^t and x_k^t represent two random solutions of the population, and ε is a variable that obeys a uniform distribution in [0, 1].
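The pollination updates in Eqs (1)–(3) can be sketched in Python (a minimal illustration, not the authors' MATLAB implementation; the parameter values γ = 0.1 and p = 0.8 are illustrative assumptions):

```python
import math
import numpy as np

def levy_step(dim, lam=1.5, rng=None):
    """Levy-distributed step via the Mantegna algorithm (Eq 2), lambda = 3/2."""
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def fpa_step(pop, g_best, p=0.8, gamma=0.1, rng=None):
    """One FPA generation: global pollination (Eq 1) with probability p,
    local pollination (Eq 3) otherwise."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        if rng.random() < p:
            # Eq (1): Levy-flight move towards the current global best g*
            new_pop[i] = pop[i] + gamma * levy_step(dim, rng=rng) * (g_best - pop[i])
        else:
            # Eq (3): local move using two random solutions x_j^t and x_k^t
            j, k = rng.choice(n, 2, replace=False)
            eps = rng.random()
            new_pop[i] = pop[i] + eps * (pop[j] - pop[k])
    return new_pop
```

In a full run, the new positions would be accepted greedily (keep a move only if it improves the fitness) and g* updated after every generation.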
2.2. Adaptive adjustment based on transition probability
The transformation probability P is introduced into the FPA to prevent premature convergence to local optima and to balance the development and exploration abilities of the algorithm. P is the probability of performing the global pollination operation, and 1−P that of the local pollination operation, where P is a constant [31]. If the value of P is too large, the algorithm easily approaches the global optimum but does not easily converge; if the value of P is too small, it readily converges to a local optimum. Therefore, an adaptive adjustment method based on the transformation probability P is proposed:
(4)
where rand1 is a random number in [0, 1]. The initial conversion probability of cross-pollination and self-pollination was assumed to be P = 0.8.
In each iteration, the execution probability of the global and local search is updated adaptively, which effectively solves the balance problem between the development and exploration abilities of the algorithm, prevents the FPA from converging to local optima, and increases the convergence speed of the algorithm.
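Because Eq (4) is not reproduced above, the following sketch assumes one plausible form of the adaptive rule: the global-search probability starts near P = 0.8 and decays with the iteration count, jittered by the random number rand1. The paper's exact formula may differ:

```python
import numpy as np

def adaptive_p(t, T, p0=0.8, rng=None):
    """Hypothetical adaptive transition probability (cf. Eq 4): the
    global-pollination probability starts at p0 = 0.8 and decreases as
    iterations proceed, perturbed by rand1 ~ U[0, 1]. This is an assumed
    form for illustration only."""
    rng = rng or np.random.default_rng()
    rand1 = rng.random()                    # random number in [0, 1]
    return p0 * (1.0 - t / T) + 0.1 * rand1
```

Early iterations thus favour global exploration, while later iterations favour local exploitation, which matches the balance described in the text.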
2.3. Adaptive adjustment based on gaussian mutation strategy
In FPA local pollination, the random number ε lies in the range [0, 1] and controls the mutation probability of pollen i. If ε is too small, the position of the pollen before iteration is too close to that after iteration, resulting in slow convergence of the algorithm. If ε is too large, the position of the pollen is too poorly defined, and the jump step size is too large, which is not conducive to determining the optimal solution. Moreover, the local pollination process lacks a mutation mechanism, making escape from a local optimum difficult; therefore, a Gaussian mutation strategy is employed [32]. Essentially, the Gaussian mutation strategy adds a random disturbance vector that obeys the Gaussian distribution to the original individual, generating a new individual to replace it:
(5)
In Eq (5), b = (T − t)/T, where t is the current number of iterations and T is the maximum number of iterations. N(0,1) obeys the standard Gaussian distribution with μ = 0 and σ² = 1.
The convergence speed in the early stage of the computation can be improved by a larger Gaussian mutation vector, and in the later stage, a smaller Gaussian mutation vector may improve the convergence accuracy. In particular, when the algorithm converges to a local optimum in the later stage, a more appropriate global value may exist near the current value. The Gaussian mutation vector can then be used to escape from the local optimum and probe the fitness values of surrounding individuals. The local pollination process based on the Gaussian mutation strategy is then expressed as:
(6)
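Eq (6) is not reproduced above; the sketch below assumes a plausible combined form in which the uniform ε of Eq (3) is replaced by the decaying Gaussian disturbance b·N(0,1) of Eq (5):

```python
import numpy as np

def gaussian_local_pollination(pop, i, t, T, rng=None):
    """Local pollination with the Gaussian mutation strategy (cf. Eqs 5-6).
    The uniform random epsilon is replaced by b * N(0, 1) with b = (T - t)/T,
    so the disturbance shrinks as iterations proceed. The exact combined form
    of Eq (6) is assumed here for illustration."""
    rng = rng or np.random.default_rng()
    n = pop.shape[0]
    j, k = rng.choice(n, 2, replace=False)      # two random solutions
    b = (T - t) / T                             # Eq (5): decreasing scale
    noise = rng.standard_normal(pop.shape[1])   # N(0, 1) disturbance vector
    return pop[i] + b * noise * (pop[j] - pop[k])
```

At t = T the scale b reaches zero, so late-stage moves become arbitrarily fine, which is the intended convergence-accuracy behaviour.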
2.4. Improved Flower Pollination Algorithm (IFPA)
The traditional FPA has the advantages of practicality, simplicity, and fewer parameters. Furthermore, gradient information is not necessary therein. However, while attempting to solve high-dimensional and multimodal optimisation problems, it cannot achieve the expected results. Therefore, an improved FPA is proposed, which, combined with the adaptive adjustment method of the transition probability and Gaussian mutation strategy, further improves the global and local search and optimisation capabilities of the FPA.
2.4.1. Description of algorithm.
This section introduces the optimisation techniques of the proposed IFPA, which uses the adaptive adjustment of the transition probability to enhance the global search and exploration abilities. The Gaussian mutation strategy is used to mutate the local solution, which effectively improves the convergence speed and accuracy of the algorithm. Thus, development and exploration are more optimally balanced by adaptively adjusting the transition probability and introducing the Gaussian mutation strategy. Moreover, like the FPA, the IFPA retains the global pollination capability of the Levy flight. A flowchart of the proposed IFPA is shown in Fig 2.
2.4.2. Simulation test and analysis of result.
The superiority of the IFPA was verified by comparing it with classical optimisation algorithms: the FPA, spider monkey optimisation (SMO), and the African vulture optimisation algorithm (AVOA). Six representative functions were selected from the literature [33] and four from the literature [34] as test functions to evaluate the performance of the IFPA. Among them, f1–f2 are unimodal functions, f3–f6 are multi-peak, multi-extremum functions, and f7–f10 are CEC2017 combined benchmark functions. Using the same population size of 20 for all algorithms, each algorithm was run 30 times independently with a maximum of 1000 iterations and a search accuracy of 10−5. The optimal value, worst value, average value, and standard deviation were used to evaluate the performance of the four algorithms. The parameters of the test functions are shown in Table 1. All algorithms were run under Windows 7 using Matlab 7.0 for programming.
The test values of the four optimisation algorithms on the six classical test functions and the four CEC2017 test functions are compared in Tables 2 and 3. The results show that the scores of the IFPA are higher than those of the FPA and the AVOA and SMO algorithms, and therefore that the proposed IFPA can more accurately optimise the test functions to obtain the final optimisation result.
As can be seen from Table 3, for functions f7, f9, and f10, the IFPA performs better than the other three algorithms in terms of the mean value and standard deviation obtained from 30 independent runs. For function f7, although the optimal values of all algorithms are the same, the worst and average values obtained by the IFPA are better than those of the other algorithms. For function f8, the AVOA, FPA, and IFPA have the same optimal and worst values, while the mean and standard deviation of the IFPA are worse than those of the AVOA but better than those of the FPA. For function f9, the best value, worst value, average value, and standard deviation obtained by the IFPA are all better than those of the other three algorithms. For function f10, although the best and worst values obtained by the IFPA are inferior to those of the FPA, the mean value and standard deviation of the IFPA are superior; the optimal value of the FPA and the worst value of the IFPA are therefore chance occurrences, and the IFPA is more stable. Moreover, the IFPA obtains better results than the other three algorithms during the optimisation stage and can jump out of local optima. In particular, for function f9, although the IFPA is inferior to the other three algorithms at the beginning of optimisation, it quickly jumps out of local optima in the early stage and continuously exploits known regions throughout the process, using its development ability to obtain better results.
Therefore, IFPA performs better than the other three comparison algorithms in the four CEC2017 combined benchmark functions, and IFPA greatly improves the performance of FPA.
To intuitively illustrate the performance of the IFPA, the convergence curves of the algorithms are compared in Fig 3. The convergence diagrams show that, across the test results for the 10 functions, the algorithms perform almost identically in terms of the optimal value; only for functions f2 and f5 is the optimal value obtained by the other three algorithms better than that of the IFPA, whereas the opposite holds for the remaining functions. In terms of the average value, the IFPA performs better than the other algorithms except on f2 and f5. In terms of stability, the IFPA performs better than the other algorithms on all 10 functions. In terms of convergence rate, the IFPA, FPA, and SMO algorithms all converge well on the 10 benchmark functions, and the IFPA has an absolute advantage on the f10 benchmark function.
3. IFPA-ELM model
3.1. Principle of extreme learning machine
The ELM is a training algorithm for a single-hidden-layer feedforward neural network (SLFN), which was proposed by Huang et al. By randomly assigning the weights and biases of the input layer, the ELM determines the weights from the hidden layer to the output layer by computing the Moore–Penrose generalised inverse matrix [35], as shown in Fig 4. The ELM algorithm does not require manual adjustment of most parameters; however, the number of hidden-layer nodes must be set manually. For a small number of iterations, the learning efficiency and convergence speed are high, which considerably reduces the training time and improves the generalisation performance of the model. However, the ELM has poor optimisation accuracy and is prone to falling into local minima. Therefore, the IFPA was adopted for optimisation. A brief description is as follows:
Suppose there are N random samples (x_i, t_i), where x_i ∈ Rⁿ and t_i ∈ Rᵐ. Here, g(x) is the hidden layer excitation function, the input weight is w, the output weight is β, and the hidden layer threshold is B. To minimise the output error, the samples are trained with zero-error approximation as follows:
Hβ = T    (7)
In Eq (7), H is the output matrix of the hidden layer, and T is the expected output [36].
By training the single hidden layer feedforward neural network, the connection weight β can be solved to obtain:
β = H⁺T    (8)
Here, H⁺ is the Moore–Penrose generalised inverse of H, and the solution β can be proven to be of minimum norm and unique [37].
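The closed-form training in Eqs (7)–(8) amounts to a pseudoinverse solve. A minimal sketch follows; the sigmoid activation and uniform random initialisation are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def train_elm(X, T_target, n_hidden=50, rng=None):
    """Minimal ELM (Eqs 7-8): random input weights w and thresholds b are
    fixed, and the output weights are beta = pinv(H) @ T, where pinv is the
    Moore-Penrose generalised inverse."""
    rng = rng or np.random.default_rng()
    n_features = X.shape[1]
    w = rng.uniform(-1, 1, (n_features, n_hidden))  # random input weights
    b = rng.uniform(-1, 1, n_hidden)                # random hidden thresholds
    H = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # sigmoid activation g(x)
    beta = np.linalg.pinv(H) @ T_target             # Eq (8): beta = H+ T
    # return a predictor that reuses the fixed w, b and the solved beta
    predict = lambda X_new: (1.0 / (1.0 + np.exp(-(X_new @ w + b)))) @ beta
    return predict
```

Because β is obtained in a single linear-algebra step, there is no iterative backpropagation, which is the source of the ELM's speed advantage.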
3.2. The IFPA-ELM model
Using the IFPA to optimise the initial weights and thresholds of the ELM algorithm can greatly reduce the variability of the random initial weights and thresholds, and it can enable an intelligent diagnosis of the IFPA-ELM model. The flowchart of the IFPA-ELM model is shown in Fig 5.
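In the IFPA-ELM model, each IFPA candidate encodes the ELM input weights and thresholds, and its fitness is the resulting training error. A hedged sketch of such a fitness function follows; the flattened parameter encoding and the RMSE objective are assumptions for illustration:

```python
import numpy as np

def elm_fitness(flat_params, X, T_target, n_hidden):
    """Fitness for IFPA search over ELM input weights/thresholds: decode a
    flat candidate vector into (w, b), solve the output layer in closed form,
    and return the training RMSE (lower is better). The encoding and
    objective are illustrative assumptions."""
    n_features = X.shape[1]
    w = flat_params[: n_features * n_hidden].reshape(n_features, n_hidden)
    b = flat_params[n_features * n_hidden:]
    H = np.tanh(X @ w + b)                    # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T_target       # closed-form output weights
    err = H @ beta - T_target
    return np.sqrt(np.mean(err ** 2))
```

The IFPA would minimise this fitness over candidate vectors, and the best candidate's (w, b) would initialise the final ELM.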
4. Intelligent identification of the defects in natural gas pipelines
Many gas pipelines in China have been in use for decades and exhibit varying degrees of ageing. Hence, the intelligent identification and classification of pipeline defects is particularly important to ensure the continuous and effective transportation of natural gas through pipelines. In this study, we used the Shaanxi-Beijing natural gas pipeline as an example and employed a magnetic flux leakage detector to identify defects in the natural gas pipeline. According to the literature [38], the detector can identify an outer-surface groove defect with a width, depth, and length of 0.2, 1, and 2 mm, respectively; defects with a depth of 10 mm and a diameter of 2 mm can therefore be detected. In this study, the intelligent identification of the pipeline defect signal and the classification of its intensity were accomplished by analysing the signal collected from the pipeline.
4.1. Feature extraction of pipeline defect signals
The MATLAB tool was used to extract the characteristics of the magnetic flux leakage signal pertaining to the pipeline defect, and the flowchart of the process is shown in Fig 6.
We selected the data related to 140 defects. Each defect signal generated several or even dozens of magnetic flux leakage signal waveform curves from which the most representative peak-to-valley curve data were determined, as shown in Table 4.
A comparison revealed that the peak and valley data of the sixth waveform curve vary considerably; therefore, the sixth waveform data were selected as the defect signal waveform data. The characteristic signal of each defect can be described by four types of variables: peak–valley value, peak–valley difference, valley–valley difference, and valley–valley value, comprising two peak–valley values, two peak–valley differences, one valley–valley difference, and one valley–valley value. Therefore, each defect is ultimately described by six characteristic values, as listed in Table 5.
The normal pipeline, pit defect, and crack defect signals were selected as the set of pipeline defect signals, namely, S = {S1, S2, S3}. Pipeline defect signals inevitably contain a certain amount of noise. Traditional noise-reduction methods are not ideal: wavelet denoising easily results in incomplete noise reduction, singular value decomposition easily results in signal offset, and ensemble empirical mode decomposition adds white noise, which increases the number of intrinsic mode functions (IMFs) obtained via decomposition and makes it unsuitable for identifying and classifying pipeline defects. Empirical mode decomposition, which we selected for this study, can adaptively decompose the signal and extract the characteristic parameters related to the pipeline defect.
Empirical mode decomposition (EMD) is an adaptive decomposition algorithm, which can decompose complex signals into a linear superposition of multiple IMFs [39]. Figs 7–9 show the signal decomposed into single-component signals imf1, imf2, imf3 . . . imf6, and using the following equation:
ρ = Σ_t imf_i(t)·x(t) / sqrt(Σ_t imf_i(t)² · Σ_t x(t)²)    (9)
Eq (9) was used to calculate the correlation coefficient between each component imf_i decomposed via EMD and the original signal [40]. Here, imf_i is the ith IMF component, x represents the actual signal, and ρ represents the correlation coefficient between the imf_i component and the original signal x. The correlation coefficients of S1, S2, and S3 with their respective IMFs (Fig 10) were derived by calculating the similarity between each IMF component and the original signal.
The correlation coefficients of the IMF components with the raw data are shown in Table 6. The correlation coefficients of the imf2, imf3, and imf4 components are large, at 0.8620, 0.7253, and 0.5708, respectively; these are the real components of the signal. In contrast, imf1, imf5, and imf6, which have smaller correlation coefficients, are pseudo-components and were removed. Therefore, imf2, imf3, and imf4 were selected for extracting the feature parameters.
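The IMF screening step can be sketched as follows; a Pearson-style (mean-removed, normalised) correlation is assumed as the concrete form of Eq (9):

```python
import numpy as np

def imf_correlations(imfs, x):
    """Correlation coefficient (cf. Eq 9) between each IMF component and the
    original signal x. Components with small coefficients are treated as
    pseudo-components and discarded."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    rhos = []
    for imf in imfs:
        imf = np.asarray(imf, dtype=float)
        imf = imf - imf.mean()
        rhos.append(np.sum(imf * x)
                    / np.sqrt(np.sum(imf ** 2) * np.sum(x ** 2)))
    return np.array(rhos)
```

Components whose ρ falls below a chosen threshold (the text keeps the three largest, imf2–imf4) would then be dropped before feature extraction.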
The sample entropy quantifies, from the viewpoint of time-series complexity, the probability that a system generates new patterns; it quantitatively describes the complexity and regularity of the system [41]. In this study, the randomness of the pipeline defect signal is evaluated according to the sample entropy of each IMF component to distinguish the normal and defect signals of the pipeline. The lower the sample entropy, the higher the self-similarity of the time series; the higher the sample entropy, the higher its complexity. For a sequence of length L, the sample entropy is given by Eq (10):
SampEn(m, r) = −ln[Bᵐ⁺¹(r)/Bᵐ(r)]    (10)
Here, m = 2 and r = 0.2·std, where std represents the standard deviation of the raw discrete sequence, and Bᵐ(r) is the average number of template matches of length m within tolerance r.
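A reference implementation of the sample entropy with the stated parameters (m = 2, tolerance 0.2·std) might look as follows; the Chebyshev distance and match-counting convention are the standard SampEn choices and are assumed here:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A / B), where B and A count template
    matches of length m and m + 1 within tolerance r (Chebyshev distance).
    Parameters follow the text: m = 2, r = 0.2 * std of the sequence."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r

    def count_matches(mm):
        # all sliding templates of length mm
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        hits = 0
        for i in range(len(templ) - 1):
            # Chebyshev distance of template i to all later templates
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            hits += np.sum(d <= r)
        return hits

    B = count_matches(m)       # matches of length m
    A = count_matches(m + 1)   # matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

A regular (self-similar) signal yields a low value, while an irregular one yields a high value, matching the interpretation given in the text.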
The sample entropies of the 140 groups of defect samples were calculated, as listed in Table 7; the samples inevitably contain noise and a certain amount of low-intensity signal interference. When a pipeline with defects is detected, the strength of the defect signal is considerably greater than that of the interference signal, resulting in a decrease in the entropy of each IMF sample. Therefore, the IMF sample entropy is selected as the feature vector of pipeline defects to perform defect pattern recognition on the pipeline state.
4.2. Identification of the defect signals from natural gas pipelines
The test results of the six benchmark functions show that the IFPA exhibits superior performance. In addition, combining IFPA optimisation with the ELM maximises the accuracy and speed of the algorithmic model. These results confirm the suitability of the proposed IFPA for detecting and identifying signals associated with defects in a natural gas pipeline. The features of the defects were extracted from the 140 defect samples selected as training samples; each defect had six corresponding attribute features. According to the literature [42], when the number of neurons in the hidden layer is the same as the number of samples in the training set, the model can approximate all training samples without error. In this study, 50 neurons were used in the hidden layer for simulation. According to the roughness of the defect attribute data, the pipeline signals were divided into three types: the normal-state signal, the pit-defect signal, and the crack-defect signal.
The ELM, FPA-ELM, and IFPA-ELM algorithms were used to distinguish the pipeline state, and the basic parameters of the FPA and IFPA were assigned the same values. The specific parameters are listed in Table 8. The prediction results of the two groups of samples in the three states of the pipeline are listed in Table 9. The findings show that the original ELM algorithm erroneously identified two pipeline defects, whereas the IFPA-ELM algorithm incorrectly classified only one; the optimised algorithms thus significantly improve the recognition accuracy.
The recognition rate of the 140 defect samples is presented in Table 10. Corresponding to the signal of the normal pipeline, the recognition rates of the three algorithms are all 100%, whereas the ELM recognition rates of the two defect signals are 72% and 56%, respectively. Additionally, the recognition rates of the FPA-ELM algorithm are 89% and 82%, respectively. Considering that the ELM algorithm readily converges to local minima, FPA was used for optimisation. Although the recognition rate improved, it was still lower than 90%, indicating that the FPA algorithm still required further improvement. The efficiency and accuracy of the IFPA-ELM algorithm are considerably higher than those of the ELM and FPA-ELM algorithms.
5. Conclusion and future works
The accurate identification of defects in natural gas pipelines is important to ensure their continuous, efficient, and stable operation. The ELM algorithm has a high learning rate and good generalisation performance; however, its convergence accuracy is poor, and it easily converges to local minima. The introduction of the IFPA improves the convergence accuracy and search ability of the ELM, and it outperforms traditional intelligent algorithms such as SMO and AVOA.
The relatively poor optimisation ability of FPA motivated us to improve this algorithm by combining the adaptive adjustment based on the transformation probability and Gaussian mutation strategy. The combination of these two techniques enables a higher diversity of the population and an appropriate balance of global and local searches. The Gaussian mutation strategy improves the exploration ability of the algorithm. Six classical benchmark functions were used to test and verify the superiority of the improved algorithm and to compare its performance with SMO, AVOA, and FPA. The main contributions of this study are an adaptive adjustment of the conversion probability, introduction of the Gaussian mutation strategy into FPA, optimisation of the initial input weight and threshold of ELM to stabilise and improve the explorative ability of the algorithm, and higher efficiency and accuracy of pipeline defect identification.
The proposed IFPA-ELM combines the advanced IFPA with the ELM algorithm to identify pipeline defects. The proposed algorithm was evaluated by comparing its pipeline defect identification performance with that of the ELM and FPA-ELM algorithms. The results showed that the IFPA-ELM model identifies defect signals more accurately than the other algorithms. Finally, the recognition rates of the various pipeline defects by the IFPA-ELM algorithm were 97% and 96%, respectively. These recognition rates are 34% and 13% higher than those of FPA and FPA-ELM, respectively.
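A minimal sketch of how an IFPA candidate might be scored in such a hybrid, assuming (as is common for metaheuristic-ELM combinations, though the exact encoding here may differ) that each candidate is a flat vector holding the ELM input weights followed by the hidden thresholds:

```python
import numpy as np

def elm_fitness(candidate, X, Y, n_hidden):
    """Decode a flat candidate into input weights W and thresholds b,
    train the ELM output layer in closed form, and return the training MSE
    (lower is fitter). Names and the encoding are illustrative assumptions."""
    n_features = X.shape[1]
    split = n_features * n_hidden
    W = np.asarray(candidate[:split]).reshape(n_features, n_hidden)
    b = np.asarray(candidate[split:split + n_hidden])
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ Y             # closed-form output weights
    return float(np.mean((H @ beta - Y) ** 2))
```

IFPA would minimise this fitness over candidates, and the best decoded `(W, b)` pair would initialise the final ELM classifier.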
However, the following limitations of the proposed IFPA-ELM model must be addressed. First, because IFPA combines multiple strategies, its computation time is increased, and this aspect requires further optimisation. Second, the value of the conversion probability strongly affects the performance of the algorithm; its performance could therefore be assessed by determining a suitable range of conversion probability values, or a generally applicable adaptive rule, to balance global and local search. Finally, the algorithm could be applied to complex engineering optimisation problems in various fields in future research.
References
- 1. Bo L, Ming Z, Xin Z, Chun C. An assessment of the effect of partisan ideology on shale gas production and the implications for environmental regulations. Economic Systems, Volume 45, Issue 3, 100907, ISSN 0939-3625, 2021. https://doi.org/10.1016/j.ecosys.2021.100907.
- 2. Papadakis GA, Porter S, Wettig J. EU initiative on the control of major accident hazards arising from pipelines. J. Loss Prev. Process Ind. 12 (1),85–90, 1999. https://doi.org/10.1016/S0950-4230(98)00042-4.
- 3. Nešić S. Key issues related to modelling of internal corrosion of oil and gas pipelines–A review. Corros. Sci. 49 (12), 4308–4338, 2007. https://doi.org/10.1016/j.corsci.2007.06.006.
- 4. Yi S, Xin W, Y Frank C. Buckling resistance of an X80 steel pipeline at corrosion defect under bending moment. Journal of Natural Gas Science and Engineering, Volume 93, 104016, ISSN 1875-5100, 2021. https://doi.org/10.1016/j.jngse.2021.104016.
- 5. Tian X, Zhou Z, Xin H, Jian L, Hao F. Pipeline leak detection based on variational mode decomposition and support vector machine using an interior spherical detector. Process Safety and Environmental Protection, Volume 153, Pages 167–177, ISSN 0957-5820, 2021. https://doi.org/10.1016/j.psep.2021.07.024.
- 6. Sanjay K K, Sidhartha H, Biswajit R. Bibliometric analysis of the research on hydrogen economy: An analysis of current findings and roadmap ahead. International Journal of Hydrogen Energy, Volume 47, Issue 20, Pages 10803–10824, ISSN 0360-3199, 2022. https://doi.org/10.1016/j.ijhydene.2022.01.137.
- 7. Youngmin P, Hyun S, Yang. Convolutional neural network based on an extreme learning machine for image classification. Neurocomputing, Volume 339, Pages 66–76, ISSN 0925-2312, 2019. https://doi.org/10.1016/j.neucom.2018.12.080.
- 8. Dong L, Ming L, Ke W, Qiang F, Liang Z, Mo L, Xue L, Tian L, Song C. Evaluation and analysis of irrigation water use efficiency based on an extreme learning machine model optimized by the spider monkey optimization algorithm. Journal of Cleaner Production, Volume 330, 129935, ISSN 0959-6526, 2022. https://doi.org/10.1016/j.jclepro.2021.129935.
- 9. De L, Shuo L, Shu Z, Jian S, Li W, Kai W. Aging state prediction for supercapacitors based on heuristic kalman filter optimization extreme learning machine. Energy, Volume 250, 123773, ISSN 0360-5442, 2022. https://doi.org/10.1016/j.energy.2022.123773.
- 10. Ze C, Fa D, François C, Derek A. Training threshold neural networks by extreme learning machine and adaptive stochastic resonance. Physics Letters A, Volume 432, 128008, ISSN 0375-9601, 2022. https://doi.org/10.1016/j.physleta.2022.128008.
- 11. Ju W, Quan C, Maolin H. Hybrid intelligent framework for carbon price prediction using improved variational mode decomposition and optimal extreme learning machine. Chaos, Solitons & Fractals, Volume 156, 111783, ISSN 0960-0779, 2022. https://doi.org/10.1016/j.chaos.2021.111783.
- 12. Truong K H, Nallagownden P, Baharudin Z, Vo D N. A Quasi-Oppositional-Chaotic Symbiotic Organisms Search algorithm for global optimization problems. Appl. Soft Comput. 77, 567–583, 2019. https://doi.org/10.1016/j.asoc.2019.01.043.
- 13. Malik B, Abdelaziz H, Jaffar A, Mohammed A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowledge-Based Systems, Volume 243, 108457, ISSN 0950-7051, 2022. https://doi.org/10.1016/j.knosys.2022.108457.
- 14. Abualigah L, Elaziz M, Khasawneh A. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: a comprehensive survey, applications, comparative analysis, and results. Neural Computing and Applications, 2022(10), 2022. https://doi.org/10.3390/electronics10020101.
- 15. Tawhid M A, Ibrahim A M. Solving nonlinear systems and unconstrained optimization problems by hybridizing whale optimization algorithm and flower pollination algorithm. Mathematics and Computers in Simulation, Volume 190, Pages 1342–1369, ISSN 0378-4754, 2021. https://doi.org/10.1016/j.matcom.2021.07.010.
- 16. Peng L, Lin Y, Yong Z, Binhua D, Ming P, Yong T. Review of meta-heuristic algorithms for wind power prediction: Methodologies, applications and challenges. Applied Energy, Volume 301, 117446, ISSN 0306-2619, 2021. https://doi.org/10.1016/j.apenergy.2021.117446.
- 17. Abolfazl J, Mahdi P, Davood M, Omid R, Himan S, Ataollah S, Saro L, Dieu T, Biswajeet P. Swarm intelligence optimization of the group method of data handling using the cuckoo search and whale optimization algorithms to model and predict landslides. Applied Soft Computing, Volume 116, 108254, ISSN 1568-4946, 2022. https://doi.org/10.1016/j.asoc.2021.108254.
- 18. Dong W, Zhong W, Lei S, Chao T. Preaching-inspired swarm intelligence algorithm and its applications. Knowledge-Based Systems, Volume 211, 106552, ISSN 0950-7051, 2021. https://doi.org/10.1016/j.knosys.2020.106552.
- 19. Zi H, Zi L, Hao S, Li W. MOEA3H: Multi-objective evolutionary algorithm based on hierarchical decision, heuristic learning and historical environment. ISA Transactions, ISSN 0019-0578, 2022. https://doi.org/10.1016/j.isatra.2021.12.038.
- 20. Kong D, Yin X, Ding X, et al. Global optimization of a vapor compression refrigeration system with a self-adaptive differential evolution algorithm. Applied Thermal Engineering, 2021, 197: 117427. https://doi.org/10.1016/j.applthermaleng.2021.117427.
- 21. Souheila K, Amer D, Giovanni I. A compact compound sinusoidal differential evolution algorithm for solving optimisation problems in memory-constrained environments. Expert Systems With Applications, 2021, 186: 115705. https://doi.org/10.1016/j.eswa.2021.115705.
- 22. Yang Y, Liu J, Tan S, et al. A multi-objective differential evolution algorithm based on domination and constraint-handling switching. Information Sciences, 2021, 579: 796–813. https://doi.org/10.1016/j.ins.2021.08.038.
- 23. Mirjalili S, Lewis A. The whale optimization algorithm. Advances in Engineering Software, 2016, 95: 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008.
- 24. Yang X S. Flower pollination algorithm for global optimization. In: International Conference on Unconventional Computing and Natural Computation, pp. 240–249, 2012. https://doi.org/10.1007/978-3-642-32894-7_27.
- 25. Fehmi B, Adil B. Chaos and intensification enhanced flower pollination algorithm to solve mechanical design and unconstrained function optimization problems. Expert Systems with Applications, Volume 184, 115496, ISSN 0957-4174, 2021. https://doi.org/10.1016/j.eswa.2021.115496.
- 26. Amer D. On the performances of the flower pollination algorithm–Qualitative and quantitative analyses. Applied Soft Computing, 34:349–371, 2015. https://doi.org/10.1016/j.asoc.2015.05.015.
- 27. Zahraa A, Abdalkareem Mohammed A A, Amiza A, Phaklen E, Abdelaziz I H, Omar H S. Discrete flower pollination algorithm for patient admission scheduling problem. Computers in Biology and Medicine, Volume 141, 105007, ISSN 0010-4825, 2022. https://doi.org/10.1016/j.compbiomed.2021.105007.
- 28. Yang X S, Karamanoglu M, He X. Flower pollination algorithm: A novel approach for multiobjective optimization. Engineering Optimization, 46(9):1222–1237, 2013. https://doi.org/10.1080/0305215X.2013.832237.
- 29. Yang X S, Karamanoglu M, He X. Multi-objective flower algorithm for optimization. Procedia Computer Science, 18(1):861–868, 2013. https://doi.org/10.1016/j.procs.2013.05.251.
- 30. Mantegna R N. Fast, accurate algorithm for numerical simulation of Levy stable stochastic processes. Physical Review E, 49 (5), 4677, 1994. https://doi.org/10.1103/physreve.49.4677.
- 31. Redner R A, Walker H F. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26(2):195–239, 1984. https://doi.org/10.1137/1026034.
- 32. Liu F. Inverse estimation of wall heat flux by using particle swarm optimization algorithm with Gaussian mutation. International Journal of Thermal Sciences,54 (1): 62–69, 2012. https://doi.org/10.1016/j.ijthermalsci.2011.11.013.
- 33. Jamil M, Yang X. A literature survey of benchmark functions for global optimization problems. International Journal of Mathematical Modelling & Numerical Optimisation, 4:150–194, 2013. https://doi.org/10.48550/arXiv.1308.4008.
- 34. Karam M, Saber M, Ruhul A, Daryl L. Multi-method based orthogonal experimental design algorithm for solving CEC2017 competition problems. 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 2017, pp. 1350–1357. https://doi.org/10.1109/CEC.2017.7969461.
- 35. Huang G, Zhu Q, Siew C. Extreme learning machine: theory and applications. Neurocomputing, 70(1–3):489–501, 2006. https://doi.org/10.1016/j.neucom.2005.12.126.
- 36. Wu S, Cao W. Parametric model for microwave filter by using the multiple hidden layer output matrix extreme learning machine. IET Microwaves Antennas & Propagation, 13(11):1889–1896, 2019. https://doi.org/10.1049/iet-map.2018.5823.
- 37. Peng D, He Y, Xu Y, Zhu Q. Research and chemical application of data feature extraction based AANN-ELM neural network. CIESC Journal, 2012. https://doi.org/10.3969/j.issn.0438-1157.2012.09.039.
- 38. Xiao L, Wei L, Jing X. Intelligent diagnosis of natural gas pipeline defects using improved flower pollination algorithm and artificial neural network. Journal of Cleaner Production, Volume 264, 121655, ISSN 0959-6526, 2020. https://doi.org/10.1016/j.jclepro.2020.121655.
- 39. Nayak P, Mallick R K, Dhar S. Novel hybrid signal processing approach based on empirical mode decomposition and multiscale mathematical morphology for islanding detection in distributed generation system. IET Generation Transmission & Distribution, 14(6), 2021. https://doi.org/10.1049/iet-gtd.2020.0780.
- 40. Li X, Dong L, Li B, Lei Y, Xu N. Microseismic Signal Denoising via Empirical Mode Decomposition, Compressed Sensing, and Soft-thresholding. Applied Sciences, 10(6):2191, 2020. https://doi.org/10.3390/app10062191.
- 41. Richman J S, Moorman J R. Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology: Heart and Circulatory Physiology, 278(6):2039–2049, 2000. https://doi.org/10.1152/ajpheart.2000.278.6.H2039.
- 42. Li P, Lu Z, Ai S, Shi S, Shun W. Research on gear fault diagnosis based on feature fusion optimization and improved two hidden layer extreme learning machine. Measurement, Volume 177, 109317, ISSN 0263-2241, 2021. https://doi.org/10.1016/j.measurement.2021.109317.