Abstract
Accurately evaluating earthquake-induced slope displacement is a key factor for designing slopes that can effectively respond to seismic activity. This study evaluates the capabilities of various machine learning models, including artificial neural network (ANN), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost), in analyzing earthquake-induced slope displacement. A dataset of 45 samples was used, with 70% allocated for training and 30% for testing. To improve model robustness, repeated 5-fold cross-validation was applied. Among the models, XGBoost demonstrated superior predictive accuracy, with an R2 value of 0.99 on both the train and test data, outperforming ANN, SVM, and RF, which achieved train/test R2 values of 0.63/0.80, 0.87/0.86, and 0.94/0.87, respectively. Sensitivity analysis identified maximum horizontal acceleration (kmax, importance score 0.714) as the most influential factor in slope displacement. The findings suggest that the XGBoost model developed in this study is highly effective in predicting earthquake-induced slope displacement, offering valuable insights for early warning systems and slope stability management.
Citation: Wang J, Shahani NM, Zheng X, Hongwei J, Wei X (2025) Machine learning-based analyzing earthquake-induced slope displacement. PLoS ONE 20(2): e0314977. https://doi.org/10.1371/journal.pone.0314977
Editor: Linwei Li, Guizhou University, CHINA
Received: September 5, 2024; Accepted: November 20, 2024; Published: February 6, 2025
Copyright: © 2025 Wang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data sources are included in the manuscript file.
Funding: This study was funded by the Guizhou Provincial Education Department's (Hundred Schools Thousands of Enterprises Science and Technology Research List) Project ([2024]013) and the Qiankehezhongyindi ([2024]039). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Evaluating the stability of earth infrastructure, including slopes, in the context of earthquakes is crucial due to the significant environmental, financial, and human impacts posed by seismic events [1]. Slopes subjected to earthquakes of magnitude 4 or higher are prone to partial failure, while those experiencing a magnitude of 6 or more may face complete instability [2, 3]. Factors such as slope configuration, ground vibrations, and material properties play a crucial role in earthquake-induced slope failures [1]. Various methods have been proposed to model earthquake-induced slope displacement. Saygili and Rathje developed an empirical model for predicting slope movement during an earthquake [4], while Lin and Whitman introduced a procedure to estimate permanent displacement caused by ground accelerations [5]. More advanced analyses, such as probabilistic assessments by Rathje and Saygili [6] and reliability analyses by Refice and Capolongo [7], have also contributed to understanding slope stability under earthquake loads. Al-Homoud and Tahtamoni considered uncertainties in earthquake-induced movement and 3D slope stability [8]. Yuan et al. carefully investigated the effects of acceleration on sliding surfaces during earthquake-induced landslides [9]. Babanouri and Dehghani studied shear strains and failure probabilities for large, potentially slide-prone slopes during design earthquakes [10]. Bray and Travasarou employed a stochastic, complementary sliding-block framework to create a semi-empirical correlation for predicting displacement [11]. They also presented a straightforward approach to calculate pseudo-static benchmarks based on spectral acceleration, permissible movement, and earthquake magnitude [12]. To assess slope stability during earthquakes, Jibson [13] categorized evaluation methods into three general groups: "(1) stress-deformation analysis, (2) permanent-displacement analysis, and (3) pseudo-static analysis". Bojadjieva et al. studied risk assessment and landslide hazards in Skopje, Macedonia, considering various water conditions and earthquake structures [14]. To enhance prediction accuracy, machine learning (ML) has emerged as a powerful tool, complementing traditional techniques like Monte Carlo simulation (MCS) in assessing slope failure risks [15–17].
ML has increasingly become a robust tool for analyzing various complex phenomena, including earthquake-induced displacement of slopes. Recent studies have demonstrated its effectiveness in diverse applications within geotechnical engineering and beyond. For instance, Cho et al. [18] emphasized the importance of nonlinear finite element analysis in generating displacement hazard curves for slopes affected by earthquakes, highlighting its role in improving predictive accuracy. In a related context, Lis-Gutiérrez et al. [19] explored the prediction of spending levels among displaced populations using random forest (RF) and support vector machine (SVM), underscoring the need to understand key variables driving these predictions. In slope stability, Dong et al. [20] introduced a real-time wireless monitoring system integrated with an autoregressive recurrent network (DeepAR) model for predicting the deformation of unstable slopes. This approach has proven effective in ensuring safety during construction and providing accurate predictions during ongoing operations. Similarly, Durante et al. [21] applied the RF model to predict lateral spreading patterns, achieving high accuracy in identifying and classifying displacement occurrences. Daribayev et al. [22] predicted oil recovery, highlighting the importance of robust algorithms as reservoir models become increasingly complex. In addition, Melchor-Leal et al. [23] proposed an extreme learning machine (ELM) algorithm for characterizing force profiles in thermostatic bimetallic strips, demonstrating its predictive power. Thackway et al. [24] developed a tree-based model to predict gentrification trends in Sydney, demonstrating the potential of such techniques in urban planning and neighborhood analysis. Xu et al. [25] evaluated slope stability by developing RF, gradient boosting decision tree (GBDT), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM) models for dynamic assessment based on multi-source monitoring data.
This study aims to evaluate the performance of various ML algorithms, including ANN, SVM, RF, and XGBoost, in predicting earthquake-induced slope displacement. The approach involves first collecting data on earthquake-induced slope displacement from existing literature. By comparing these models using different performance metrics, the study seeks to identify the most effective approach for assessing slope failure risk due to earthquake-induced displacements.
2. Dataset
The dataset used in this study was obtained from the work of Ferentinou et al. [26], encompassing a total of 45 samples. These data provide valuable insights into various parameters related to the study's objectives. The input parameters considered in the analysis include height (H) in m, unit weight (γ) in kN/m³, cohesion (C) in kPa, angle of internal friction (φ) in degrees, significant duration of shaking (D5–95), and maximum horizontal acceleration (kmax); the output is the return displacement (u) in cm. Based on Ferentinou et al. [26], the slope characteristics crucial for stability assessment include factors like H (m), γ (kN/m³), C (kPa), and φ (°). These properties, combined with external influences like pore water pressure and earthquake forces, contribute to different deformation mechanisms, such as circular or wedge failures. The study highlights that these factors directly impact the slope's response under seismic conditions. In parallel, ML models are employed to predict earthquake-induced slope displacement, using these input factors to estimate the likelihood and extent of failure under both static and dynamic conditions. The dataset used in this study is detailed in Table 1.
Fig 1 illustrates the statistical distribution of the input parameters and the output u. It is important to highlight that most of the parameters are not highly correlated with one another, which allows for a comprehensive examination of all parameters in predicting u. Notably, kmax shows a moderate correlation with u. The correlation analysis of the entire dataset is also presented. A correlation coefficient near 1 indicates a strong positive correlation between parameters, while a coefficient near -1 indicates a strong inverse relationship. A correlation coefficient of zero indicates no correlation between the parameters. In this study, apart from kmax, none of the input parameters exhibit a significant correlation with u.
The analysis conducted in this study examined the relationship between the input variables and the output variable, u, in the dataset. Input variables were selected based on their correlation with the target variable, prioritizing those with stronger correlations for their direct impact on model performance. Weakly correlated variables were also included to capture potential non-linear interactions. This approach balances predictive accuracy and generalizability by incorporating both statistically significant and practically relevant factors, thereby enhancing the model's robustness through careful parameter selection.
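To make this screening step concrete, the following is a minimal sketch of the correlation check in Python with pandas; the file and column names are hypothetical stand-ins for the parameters above, and the actual dataset is given in Table 1.

```python
import pandas as pd

# Hypothetical file and column names for the 45-sample dataset (Table 1).
df = pd.read_csv("slope_displacement.csv")  # columns: H, gamma, C, phi, D5_95, kmax, u

# Pearson correlation matrix: values near +1/-1 indicate strong positive/
# inverse linear relationships; values near 0 indicate no linear relation.
corr = df.corr(numeric_only=True)

# Correlation of each input parameter with the return displacement u.
print(corr["u"].drop("u").sort_values(ascending=False))
```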
3. Methods
This study utilizes various ML models, particularly ANN, SVM, RF, and XGBoost, to analyze earthquake-induced slope displacement u. The predictive performances of these models are thoroughly compared from various perspectives. The proposed methodology is outlined in Fig 2. Initially, raw slope displacement data are collected from the literature [26]. The data are then organized, with input and output values arranged to facilitate the execution and relationship analysis of the four models. The input parameters are H (m), γ (kN/m³), C (kPa), φ (°), D5–95, and kmax, and the output is u (cm). The original data are randomly split into a training set (70%) and a test set (30%), ensuring consistency in slope displacement data during the split; a minimal sketch of this split is shown below. A 5-fold cross-validation is then performed on the training dataset to determine robust hyperparameters for the ANN, SVM, RF, and XGBoost models. These models are trained using the dataset with the tuned hyperparameters. The test set is used to evaluate model performance through various metrics, including the coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE). Finally, the models are assessed, and the best-performing model is selected for implementation.
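The split described above can be sketched as follows; this is an assumed implementation (the column names and the fixed random seed are hypothetical), not the authors' exact code.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("slope_displacement.csv")           # hypothetical file name
X = df[["H", "gamma", "C", "phi", "D5_95", "kmax"]]  # input parameters
y = df["u"]                                          # return displacement (cm)

# 70/30 split with random shuffling, as used in this study; the fixed
# random_state is an assumption added here for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, shuffle=True, random_state=42
)
```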
3.1. Artificial neural network
ANN is a crucial ML mechanism known for its remarkable self-learning, self-instructing, and self-adapting capabilities. These attributes have led to extensive research and successful application in addressing various real-life challenges [27]. The term "neural" in ANNs reflects their inspiration from the human brain's learning processes. ANNs offer a robust approach for statisticians to uncover intricate and numerous patterns in real-world problems. They excel at predicting complex, multi-directional relationships between input and output parameters [28]. ANNs are versatile and can be utilized across various fields for classification, regression, prediction, and solving complex problems, including nonlinear issues [29]. As numerical analysis tools, they are applicable to problems including "speech recognition, pattern classification, adaptive interfaces between complex physical systems and humans, clustering, prediction, and forecasting" [30]. In real-world scenarios, the alteration of associative states among different neurons, as described by Hebb's Learning Rule, can occur. The outcomes are influenced by previous perception models and computational neuron models with additional weighting [31]. Currently, multilayer perceptrons (MLPs), which are based on perceptrons and the number of neurons, have been implemented with high standards of perceptron connections. Researchers frequently use these models to tackle complex learning problems across various domains [32].
3.2. Support vector machine
SVM, a form of supervised learning, was introduced by Vapnik et al. in 1997 [33]. SVM can be adapted for both classification and regression tasks. In the context of regression, support vector regression (SVR) is used to identify the optimal hyperplane that best predicts continuous output values while maintaining a specified margin of tolerance. SVR is particularly effective for high-dimensional datasets, offering robust resistance to overfitting even when the number of features exceeds the number of samples. By leveraging kernel functions, SVR can model complex, nonlinear relationships within the data, making it a highly versatile and powerful tool for regression analysis. SVM operates in high-dimensional feature spaces, and the prediction function is constructed using Vapnik's ε-insensitive loss function and kernel functions [34].
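As an illustration, a minimal SVR pipeline in scikit-learn might look as follows; the RBF kernel and the values of C and ε are assumptions for the sketch, not the tuned settings reported in this study.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# The RBF kernel captures nonlinear relationships; epsilon sets the width
# of the ε-insensitive tube around the fitted regression function.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X_train, y_train)     # training data from the 70/30 split
u_pred = svr.predict(X_test)  # predicted return displacement (cm)
```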
3.3. Random forest
RF is an ensemble learning method that constructs multiple decision trees to predict results for regression tasks. During training, each tree is built using a randomly selected subset of both features and data points, which introduces variability among the trees and improves the overall model diversity. The final prediction is obtained by averaging the predictions from all individual trees, which helps to counteract the overfitting often associated with single decision trees. This aggregation process leads to a robust and stable model that achieves high predictive accuracy while being less sensitive to noise and fluctuations in the data. This approach is effective in handling complex regression problems due to its ability to generalize well across different datasets.
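A minimal RF regression sketch is shown below; the hyperparameter values are illustrative assumptions rather than the tuned values from this study.

```python
from sklearn.ensemble import RandomForestRegressor

# Each tree is fitted on a bootstrap sample of the training data, and
# max_features limits the random subset of features considered per split;
# the forest's prediction is the average over all trees.
rf = RandomForestRegressor(n_estimators=200, max_features="sqrt", random_state=42)
rf.fit(X_train, y_train)
u_pred = rf.predict(X_test)
```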
3.4. Extreme gradient boosting
XGBoost, or extreme gradient boosting, is an ensemble learning algorithm that enhances machine learning techniques through statistical boosting methods [35]. It builds on simple classification and regression trees (CARTs) by integrating multiple trees into a consensus prediction framework. Unlike constructing a single tree, boosting improves model precision by sequentially adding trees, each addressing the residuals of previous trees. This process iteratively refines predictions by minimizing errors from prior trees. The iterative nature of XGBoost can be viewed as a form of gradient descent, where each new tree is introduced to reduce the residuals of the previous trees [36]. The expansion of new trees continues until either the maximum number of trees is reached or the training error stabilizes, achieving a pre-defined target. To further enhance estimation precision and computational efficiency, XGBoost incorporates random sampling, known as stochastic gradient boosting [37]. In this approach, a random subset of the training data is used for each tree, rather than the full dataset, which helps to improve model performance without overfitting. XGBoost employs second-order loss function estimation, which accelerates convergence compared to traditional gradient boosting machines (GBMs). Its advanced capabilities have made it a powerful tool in various applications, including the analysis of gene expression data in mining research [38]. The XGBoost algorithm builds trees in a level-wise (depth-wise) manner, as depicted in Fig 3.
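For illustration, an XGBoost regressor with the default settings reported later in this paper (100 estimators, η = 0.3, γ = 0, λ = 1) can be set up as follows; the subsample parameter is shown only to indicate where stochastic (row-sampled) boosting is controlled.

```python
from xgboost import XGBRegressor

xgb = XGBRegressor(
    n_estimators=100,   # maximum number of boosting rounds (trees)
    learning_rate=0.3,  # eta: shrinks each new tree's contribution
    gamma=0.0,          # minimum loss reduction required to make a split
    reg_lambda=1.0,     # L2 regularization on leaf weights
    subsample=1.0,      # values < 1.0 enable stochastic gradient boosting
)
xgb.fit(X_train, y_train)
u_pred = xgb.predict(X_test)
```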
3.5. Evaluation criteria
To accurately evaluate the performance of the ANN, SVM, RF, and XGBoost models, the following evaluation criteria (Eqs (1) to (4)) are used to assess the relationship between measured and predicted values: the R2, MAE, MSE, and RMSE.

$$R^2 = 1 - \frac{\sum_{i=1}^{T}\left(X_i - X'_i\right)^2}{\sum_{i=1}^{T}\left(X_i - X''\right)^2} \tag{1}$$

$$\mathrm{MAE} = \frac{1}{T}\sum_{i=1}^{T}\left|X_i - X'_i\right| \tag{2}$$

$$\mathrm{MSE} = \frac{1}{T}\sum_{i=1}^{T}\left(X_i - X'_i\right)^2 \tag{3}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{T}\sum_{i=1}^{T}\left(X_i - X'_i\right)^2} \tag{4}$$

Here, X′ and X′′ denote the predicted values and the mean of the actual values, respectively; T denotes the total number of data points, and X represents the actual values.
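These four criteria can be computed directly with scikit-learn; the helper below is a convenience sketch, not code from the study.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    """Return the four metrics of Eqs (1) to (4) for one model."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "R2": r2_score(y_true, y_pred),
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mse,
        "RMSE": np.sqrt(mse),
    }

# Example: evaluate(y_test, xgb.predict(X_test))
```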
3.6. Hyperparameter tuning
Optimizing hyperparameters is crucial for enhancing the accuracy of ML models. In this study, hyperparameters were adjusted using the cross-validation method and data normalization, rather than being set manually, due to the limited dataset on earthquake-induced slope displacements. The data are divided into training and testing sets to enhance the models' performance: the training dataset is used to fit and tune the models, and the testing dataset assesses the final performance [39]. To address potential non-linearity in the dataset, the k-fold cross-validation technique is used. In this method, the data are divided into k segments, with each segment serving once as a test set while the remaining k−1 segments are used for training. This approach enables multiple evaluations, yielding metrics such as R2, MAE, MSE, and RMSE, as well as allowing for the calculation of average and standard deviation values for these metrics.
3.7. Grid Search Cross-Validation (CV)
A grid search method was employed using the GridSearchCV() function from the scikit-learn library in Python to optimize the hyperparameters of the models [40]. This method systematically explores all possible combinations of hyperparameters within the defined search space and computes cross-validation scores for each combination. In this study, 5-fold cross-validation with repeated random sampling was used, as shown in Fig 4. This approach helps mitigate the risk of overfitting by providing an average performance score across the folds, rather than relying solely on the best score from a single fold. While GridSearchCV() identifies the combination of hyperparameters that yields the highest cross-validated score, it is important to consider the performance across all folds to avoid bias in model evaluation. Thus, the results from each fold are reported, ensuring that the identified optimal hyperparameters are not based on a single fold but reflect consistent performance across the dataset. All other parameters in the Python environment were left at their default settings during the analysis. A minimal sketch of this procedure is shown below.
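In the sketch, the search grid and the number of repeats are assumptions made for illustration, not the exact settings used in this study.

```python
from sklearn.model_selection import GridSearchCV, RepeatedKFold
from xgboost import XGBRegressor

# Hypothetical search space; the study's actual grid is not reproduced here.
param_grid = {
    "n_estimators": [50, 100, 200],
    "learning_rate": [0.05, 0.1, 0.3],
    "max_depth": [3, 4, 6],
}

# 5-fold cross-validation with repeated random sampling, as described above.
cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=42)
search = GridSearchCV(XGBRegressor(), param_grid, cv=cv, scoring="r2")
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```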
4. Analysis of results and discussion
The analysis revealed that training the ANN, SVM, RF, and XGBoost models on the entire dataset may lead to overfitting. Specifically, while the models perform well on the training data, they may struggle to accurately predict new, unseen data. To mitigate this issue, the slope displacement dataset was split into 70% for training and 30% for model testing. To avoid localized bias, the data was randomly shuffled before splitting. The models were then trained on the training dataset and validated using the testing dataset.
Determining the number of neurons in the hidden layers of the ANN involved a coarse search followed by a refined search. Three configurations of hidden layers were tested, as shown in Table 2. The number of hidden neurons was initially set to power-of-two values of 32, 64, and 128. It was observed that, with appropriate parameters, the ANN models produced reasonably optimal results. For example, an ANN model with two hidden layers, each containing 32 neurons, using the rectified linear unit (ReLU) activation function, the RMSProp optimizer, and trained for 200 epochs, demonstrated effective performance.
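A minimal sketch of this configuration in Keras is shown below (two hidden layers of 32 ReLU neurons, RMSProp, 200 epochs, as reported above); the batch size is an assumption.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(6,)),             # six input parameters
    keras.layers.Dense(32, activation="relu"),  # hidden layer 1
    keras.layers.Dense(32, activation="relu"),  # hidden layer 2
    keras.layers.Dense(1),                      # predicted displacement u (cm)
])
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])

# Inputs are assumed standardized to zero mean and unit variance (see below).
model.fit(X_train_scaled, y_train, epochs=200, batch_size=8, verbose=0)
```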
Table 2 shows the performance of the ANN model with varying neuron numbers, evaluated using different metrics. Standardizing the input variables of the ANN to have a mean of 0 and a standard deviation of 1 is essential. Furthermore, a 5-fold cross-validation was conducted to assess the robustness of the ANN algorithm. The model’s performance, across different configurations of hidden layers and neurons per layer, demonstrates that increasing the number of hidden layers typically enhances accuracy, as illustrated in Table 2. Similarly, increasing the number of neurons in each hidden layer also improves the model’s accuracy. This improvement results from the additional hidden layers or neurons, which increase the connections between neurons, thus optimizing the computation process within the ANN.
The data distribution greatly influences the model's accuracy, as illustrated in Table 2. Scattered data can reduce the ANN model's accuracy. To address this challenge, the scattered data were normalized to a consistent mean and standard deviation. In addition, the volume of data used in the ANN learning process affects the model's accuracy. Given these sensitivities, the XGBoost model was subsequently used to develop an optimal model for predicting earthquake-induced slope displacements.
The XGBoost algorithm was employed to enhance model accuracy further. A standard XGBoost model with default parameters was used (100 estimators, γ = 0, λ = 1, and a learning rate η of 0.3). The model was evaluated using a repeated k-fold cross-validation setup, ensuring that data from the same source were not split between training and testing datasets. Specifically, 3-fold cross-validation was repeated five times, resulting in a total of 15 folds. The data were normalized using a standard scaler before the process. After each fold, a prediction was generated. To derive a single representative prediction, the predicted values corresponding to each true value were averaged across all folds at the end of the process, as sketched below. All other parameters were set to their default values.
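The fold-averaging procedure can be sketched as follows; details such as the random seed are assumptions, and X, y are the full feature matrix and target from Section 2.

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRegressor

# 3-fold cross-validation repeated five times: 15 folds in total.
cv = RepeatedKFold(n_splits=3, n_repeats=5, random_state=42)
preds = np.zeros(len(X))    # accumulated predictions per sample
counts = np.zeros(len(X))   # number of times each sample was predicted

for train_idx, test_idx in cv.split(X):
    scaler = StandardScaler().fit(X.iloc[train_idx])  # normalize per fold
    model = XGBRegressor()                            # default parameters
    model.fit(scaler.transform(X.iloc[train_idx]), y.iloc[train_idx])
    preds[test_idx] += model.predict(scaler.transform(X.iloc[test_idx]))
    counts[test_idx] += 1

avg_pred = preds / counts   # single representative prediction per true value
```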
Fig 5 illustrates scatter plots and performance comparisons of actual versus predicted values for displacement (cm) using the ANN, SVM, RF, and XGBoost models on both the train and test data. The R2 value for ANN is 0.63 on the train data and 0.80 on the test data. The SVM and RF models achieve R2 values of 0.87 and 0.94 on the train data, and 0.86 and 0.87 on the test data, respectively. Furthermore, the XGBoost model demonstrates an R2 value of 0.99 on both the train and test data.
Based on Table 3, the XGBoost model demonstrates the highest scalability and robustness in predicting the return displacement of slopes. Therefore, XGBoost is recommended for predicting the return displacement of slopes induced by earthquakes. Fig 6 depicts the bar plots of the performance metrics for ANN, SVM, RF, and XGBoost models.
5. Sensitivity analysis
This study employed ANN, SVM, RF, and XGBoost to analyze return displacement in earthquake-induced slopes. Among these, XGBoost emerged as the most reliable model. It is important to assess the sensitivity of the various parameters for accurate prediction of return displacement in such scenarios. Previous studies have used different methods to evaluate parameter sensitivity [41, 42]. In this context, this study used the XGBoost model to determine the importance of input parameters, as it consistently outperformed ANN, SVM, and RF in predicting earthquake-induced slope displacement. Feature importance techniques assign scores to input parameters based on their effectiveness in predicting the target outcome. kmax was identified as the most critical parameter, with an importance score of 0.714, as illustrated in Fig 7, followed by D5–95 with 0.093 and H with 0.078. Conversely, input parameters such as C with 0.013, γ with 0.010, and φ with 0.0043 were found to be less influential.
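These scores can be extracted from a fitted XGBoost model as sketched below; the feature names are shorthand for the parameters defined in Section 2, and the fitted model xgb is assumed from the earlier sketches.

```python
import pandas as pd

def rank_features(model, feature_names):
    """Rank input parameters by the fitted model's importance scores."""
    scores = pd.Series(model.feature_importances_, index=feature_names)
    return scores.sort_values(ascending=False)

print(rank_features(xgb, ["H", "gamma", "C", "phi", "D5_95", "kmax"]))
# In this study, kmax ranked first with an importance score of 0.714.
```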
6. Conclusions
This study developed ANN, SVM, RF, and XGBoost models to analyze earthquake-induced slope displacement with high accuracy and efficiency. The models' performance was validated using a dataset from earthquake-induced slope instability analyses and benchmarked against a traditional regression model. Statistical performance indices such as R2, MAE, MSE, and RMSE were used to assess predictive accuracy. The ANN model showed an R2 value of 0.63 for training and 0.80 for testing, while the SVM and RF models obtained 0.87 and 0.94 on training data and 0.86 and 0.87 on testing data, respectively. The XGBoost model outperformed all other models with an R2 value of 0.99 on both training and testing data, demonstrating superior robustness in handling complex, nonlinear data.
The XGBoost model provides an effective tool for accurately estimating the seismic safety of slopes subjected to earthquake-induced displacement. Its ability to serve as a reliable predictive and early warning system offers significant practical applications for slope displacement monitoring. While the model performed well across varying rock conditions, further generalization could be achieved by incorporating more comprehensive slope displacement and geological data. Future work should focus on expanding this model's application by integrating it into the development of earthquake-induced landslide susceptibility maps.
Although data-driven models typically predict the median displacements of earthquake-induced slopes, XGBoost's resilience in managing nonlinear datasets makes it a valuable alternative to deterministic and empirical methods. In the future, the range and number of training samples should be carefully considered to enhance model performance. Advanced machine learning techniques, including hybrid models, could further improve predictive accuracy and offer deeper insights into earthquake-induced slope displacements in future studies.
References
- 1. Ambraseys N, Srbulov M. Earthquake induced displacements of slopes. Soil Dynamics and Earthquake Engineering. 1995; 14(1): 59–71.
- 2. Keefer DK. Landslides caused by earthquakes. Geological Society of America Bulletin. 1984; 95(4): 406–421.
- 3. Jibson RW. Predicting earthquake-induced landslide displacements using Newmark's sliding block analysis. Transportation Research Record. 1993; 9–17.
- 4. Saygili G, Rathje EM. Empirical predictive models for earthquake-induced sliding displacements of slopes. Journal of Geotechnical and Geoenvironmental Engineering. 2008; 134(6): 790–803.
- 5. Lin JS, Whitman RV. Earthquake induced displacements of sliding blocks. Journal of Geotechnical Engineering. 1986; 112(1): 44–59.
- 6. Rathje EM, Saygili G. Probabilistic assessment of earthquake-induced sliding displacements of natural slopes. Bulletin of the New Zealand Society for Earthquake Engineering. 2009; 42(1): 18.
- 7. Refice A, Capolongo D. Probabilistic modeling of uncertainties in earthquake-induced landslide hazard assessment. Computers & Geosciences. 2002; 28(6): 735–749.
- 8. Al-Homoud A, Tahtamoni W. Reliability analysis of three-dimensional dynamic slope stability and earthquake-induced permanent displacement. Soil Dynamics and Earthquake Engineering. 2000; 19(2): 91–114.
- 9. Yuan RM, Tang CL, Deng QH. Effect of the acceleration component normal to the sliding surface on earthquake-induced landslide triggering. Landslides. 2015; 12(2): 335–344.
- 10. Babanouri N, Dehghani H. Investigating a potential reservoir landslide and suggesting its treatment using limit-equilibrium and numerical methods. Journal of Mountain Science. 2017; 14(3): 432–441.
- 11. Bray JD, Travasarou T. Simplified procedure for estimating earthquake-induced deviatoric slope displacements. Journal of Geotechnical and Geoenvironmental Engineering. 2007; 133(4): 381–392.
- 12. Bray JD, Travasarou T. Pseudostatic coefficient for use in simplified seismic slope stability evaluation. Journal of Geotechnical and Geoenvironmental Engineering. 2009; 135(9): 1336–1340.
- 13. Jibson RW. Methods for assessing the stability of slopes during earthquakes–a retrospective. Engineering Geology. 2011; 122(1–2): 43–50.
- 14. Bojadjieva J, Sheshov V, Bonnard C. Hazard and risk assessment of earthquake-induced landslides–case study. Landslides. 2018; 15(1): 161–171.
- 15. Jiang SH, Li DQ, Cao ZJ, Zhou CB, Phoon KK. Efficient system reliability analysis of slope stability in spatially variable soils using Monte Carlo simulation. Journal of Geotechnical and Geoenvironmental Engineering. 2014; 141(2): 04014096.
- 16. Wang Y, Cao Z, Au SK. Practical reliability analysis of slope stability by advanced Monte Carlo simulations in a spreadsheet. Canadian Geotechnical Journal. 2010; 48(1): 162–172.
- 17. Abbaszadeh M, Shahriar K, Sharifzadeh M, Heydari M. Uncertainty and reliability analysis applied to slope stability: a case study from Sungun copper mine. Geotechnical and Geological Engineering. 2011; 29(4): 581–596.
- 18. Cho Y, Rathje EM. Displacement hazard curves derived from slope-specific predictive models of earthquake-induced displacement. Soil Dynamics and Earthquake Engineering. 2020.
- 19. Lis-Gutiérrez JP, et al. Spending level of displaced population returned to La Palma, Cundinamarca (2018): a machine learning application. Migration Letters. 2020.
- 20. Dong M, Wu H, Hu H, et al. Deformation prediction of unstable slopes based on real-time monitoring and DeepAR model. Sensors (Basel, Switzerland). 2020. pmid:33375148
- 21. Durante MG, Rathje EM. An exploration of the use of machine learning to predict lateral spreading. Earthquake Spectra. 2021.
- 22. Daribayev B, Akhmed-Zaki D, Imankulov T, Nurakhov Y, Kenzhebek Y. Using machine learning methods for oil recovery prediction. 2020.
- 23. Melchor-Leal JM, Cantoral-Ceballos JA. Force profile characterization for thermostatic bimetal using extreme learning machine. IEEE Latin America Transactions. 2021.
- 24. Thackway W, Ng M, Lee CL, Pettit C. Building a predictive machine learning model of gentrification in Sydney. Cities. 2021.
- 25. Xu W, Kang Y, Chen L, Wang L, Qin C, Zhang L, Liang D, et al. Dynamic assessment of slope stability based on multi-source monitoring data and ensemble learning approaches: a case study of Jiuxianping landslide. Geological Journal. 2022.
- 26. Ferentinou M, Sakellariou M. Computational intelligence tools for the prediction of slope performance. Computers and Geotechnics. 2007; 34(5): 362–384.
- 27. Mikheev MY, Gusynina YS, Shornikova TA. Building neural network for pattern recognition. In: 2020 International Russian Automation Conference (RusAutoCon). IEEE; 2020. 357–361.
- 28. Azadeh A, Sheikhalishahi M, Tabesh M, Negahban A. The effects of pre-processing methods on forecasting improvement of artificial neural networks. Australian Journal of Basic and Applied Sciences. 2011; 5(6): 570–580.
- 29. Du KL, Swamy MN. Neural networks in a softcomputing framework. Springer Science & Business Media. 2006.
- 30. Hassoun MH. Fundamentals of artificial neural networks. MIT Press. 1995.
- 31. Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review. 1958; 65: 386. pmid:13602029
- 32. Yegnanarayana B. Artificial neural networks. PHI Learning Pvt. Ltd., New Delhi, India; 2009.
- 33. Vapnik V, Golowich SE, Smola A. Support vector method for function approximation, regression estimation, and signal processing. Advances in Neural Information Processing Systems. 1997; 281–287.
- 34. Negara A, Ali S, AlDhamen A, Kesserwan H, Jin G. Unconfined compressive strength prediction from petrophysical properties and elemental spectroscopy using support-vector regression. In: SPE Kingdom of Saudi Arabia Annual Technical Symposium and Exhibition. OnePetro; 2017.
- 35. Chen T, Guestrin C. XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016; 785–794.
- 36. Schapire RE. The boosting approach to machine learning: an overview. Nonlinear Estimation and Classification. 2003; 149–171.
- 37. Friedman JH. Stochastic gradient boosting. Computational Statistics & Data Analysis. 2002; 38(4): 367–378.
- 38. Wang Z, Monteiro CD, Jagodnik KM, et al. Extraction and analysis of signatures from the Gene Expression Omnibus by the crowd. Nature Communications. 2016; 7(1): 1–11. pmid:27667448
- 39. Choubineh A, Helalizadeh A, Wood DA. Estimation of minimum miscibility pressure of varied gas compositions and reservoir crude oil over a wide range of conditions using an artificial neural network model. Advances in Geo-Energy Research. 2019; 3(1): 52–66.
- 40. Bergstra J, Bengio Y. Random search for hyper-parameter optimization. Journal of Machine Learning Research. 2012; 13: 281–305.
- 41. Hu C, Wang R, Yan F. Differential steering based yaw stabilization using ISMC for independently actuated electric vehicles. IEEE Transactions on Intelligent Transportation Systems. 2018; 19(2): 627–638.
- 42. Qi C, Chen Q, Fourie A, Zhang Q. An intelligent modeling framework for mechanical properties of cemented paste backfill. Minerals Engineering. 2018; 123: 16–27.