Abstract
The uniaxial compressive strength (UCS) and elasticity modulus (E) of intact rock are two fundamental requirements in engineering applications. These parameters can be measured either directly from the uniaxial compressive strength test or indirectly by using soft computing predictive models. In the present research, the UCS and E of intact carbonate rocks have been predicted by introducing two stacking ensemble learning models built on non-destructive, simple laboratory test results. For this purpose, dry unit weight, porosity, P-wave velocity, Brinell surface hardness, UCS, and static E were measured for 70 carbonate rock samples. Then, two stacking ensemble learning models were developed for estimating the UCS and E of the rocks. The applied stacking ensemble learning method integrates the advantages of two base models in the first level, where the base models are a multi-layer perceptron (MLP) and a random forest (RF) for predicting UCS, and a support vector regressor (SVR) and extreme gradient boosting (XGBoost) for predicting E. Grid search integrated with k-fold cross-validation is applied to tune the parameters of both the base models and the meta-learner. The results demonstrate the generalization ability of the stacking ensemble method in comparison with the base models in terms of common performance measures. The values of the coefficient of determination (R2) obtained from the stacking ensemble are 0.909 and 0.831 for predicting UCS and E, respectively. Similarly, the stacking ensemble yielded Root Mean Squared Error (RMSE) values of 1.967 and 0.621 for the prediction of UCS and E, respectively. Accordingly, the proposed models are superior to SVR and MLP as single models and to RF and XGBoost as two representative ensemble models. Furthermore, a sensitivity analysis is carried out to investigate the impact of the input parameters.
Citation: Fereidooni D, Karimi Z, Ghasemi F (2024) Non-destructive test-based assessment of uniaxial compressive strength and elasticity modulus of intact carbonate rocks using stacking ensemble models. PLoS ONE 19(6): e0302944. https://doi.org/10.1371/journal.pone.0302944
Editor: Xianggang Cheng, China University of Mining and Technology, CHINA
Received: December 18, 2023; Accepted: April 14, 2024; Published: June 10, 2024
Copyright: © 2024 Fereidooni et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The source code and dataset to reproduce the results presented in this paper are available at: https://github.com/karimizohre/SERock. The code is written in Python and runs in a Jupyter notebook.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
List of symbols and abbreviations: Ls, second-level algorithm in stacking ensemble; Li, first-level algorithm in stacking ensemble; Mi, ith base model; Ms, meta-model; $x_i^d$, dth feature of ith data; $x_{\max}^d$, maximum value of dth feature over all input data; $x_{\min}^d$, minimum value of dth feature over all input data; $\bar{x}_i^d$, normalized value of $x_i^d$; Adam, adaptive moment estimation; ANFIS, adaptive neuro-fuzzy inference system; ANN, artificial neural network; D, number of input dimensions (features); E, elasticity modulus (GPa); HBN, Brinell hardness number (kgf/mm2); k, number of folds in k-fold cross-validation; KNN, k-nearest neighbor; L-BFGS, Limited-Memory Broyden-Fletcher-Goldfarb-Shanno; m, data count; MAE, Mean Absolute Error; MLP, Multi-Layer Perceptron; MLR, Multiple Linear Regression; MSE, Mean Squared Error; ne, effective porosity (%); R2, coefficient of determination; RF, Random Forest; RMSE, Root Mean Squared Error; RSE, Relative Strength of Effect; SVR, Support Vector Regression; UCS, uniaxial compressive strength (MPa); vp, P-wave velocity (m/s); XGBoost, eXtreme Gradient Boosting; γd, dry unit weight (kN/m3); d, dth feature; l, number of base models in stacking ensemble
1. Introduction
The uniaxial compressive strength (UCS) and elasticity modulus (E) of intact rock are two fundamental requirements for evaluating the strength, stability, and deformation behavior of rock in engineering projects. These parameters are obtained from a common and most important rock mechanics laboratory test, namely the uniaxial compressive strength test. The UCS is a main input parameter in intact rock and rock mass classifications as well as failure criteria, and the E is a parameter to identify rock stiffness and the deformability of geomaterials, especially rocks. Direct measurement of the UCS and E in accordance with recommended standards such as those of the ISRM (International Society for Rock Mechanics) and ASTM (American Society for Testing and Materials) is difficult, time-consuming, expensive, and even impossible in weak, highly fractured, inherently anisotropic, highly foliated, and stratified or laminated rocks, owing to the difficulty of preparing suitable core specimens for performing the test. For this reason, predictive approaches are often applied for the indirect estimation of the UCS and E. Recently, various machine learning and soft computing approaches have been used to predict the two mentioned parameters based on simple laboratory test results. In this regard, neural networks, support vector regression, random forest, extreme gradient boosting, multi-layer perceptron, fuzzy systems, evolutionary algorithms, etc. are common predictive approaches, and predictor rock properties such as unit weight, porosity, P-wave velocity, slake durability index, and rock surface hardness are applicable parameters [1–8]. The machine learning and soft computing approaches, unlike traditional statistical methods such as simple and multiple regressions which must be applied within similar rocks, are sufficiently generic to use for various rock types because they fit all value ranges of the UCS and E to other rock properties. So, they are very suitable for general applications and are reliable across rock types.
In the last few years, some researchers have applied machine learning and soft computing-based techniques to estimate both the UCS and E of various rock types [e.g. 9–23]. In this regard, Ghasemi et al. [24] applied model trees as a predictive approach with Schmidt hardness, effective porosity, dry unit weight, P-wave velocity, and slake durability index as input variables for predicting the UCS and E. They found that pruned and unpruned model trees provide suitable predictions of the parameters. Beiki et al. [18] predicted the UCS and E of carbonate rocks using genetic programming and multiple nonlinear regression models and found that the first method fitted the data more accurately than the second one, making it the more useful technique for estimating the UCS and E. Madhubabu et al. [25] and Aboutaleb et al. [19] used the MLR, ANN, and SVR for predicting both the UCS and E of carbonate rocks, together with the R2 and RMSE to examine the accuracy of the results. Their studies revealed that the ANN and SVR have better predictive efficiency than the MLR for predicting the UCS and E from physical and index characteristics of the rocks. In another study, Rezaei and Asadizadeh [20] paid attention to the application of combinations of intelligent techniques, including ANFIS, the genetic algorithm (GA), and particle swarm optimization (PSO), in order to predict the UCS of very strong rock types. Their studies proved that the combinations of the methods have higher capability than the regression model. Also, they found that the density and Schmidt rebound hardness were more strongly related to the UCS than the rock porosity. Khan et al. [26] applied the MLR, ANN, RF, and KNN for predicting the UCS and static E from physical, chemical, and mechanical properties of marble rock, namely density, porosity, P-wave velocity, and dynamic E, under different thermal conditions. They found that the KNN and RF are reliable approaches to predict both the UCS and E. Also, it was found that P-wave velocity has strong correlations with the UCS and E. Based on predictive performance, the RF model was proposed as the best model to predict the UCS and E. Shahani et al. [21, 27], in a comprehensive study, measured the UCS, E, dry and wet densities, and Brazilian tensile strength of soft sedimentary rocks and predicted the UCS and E from the other rock parameters by employing MLR models, the ANN, and the ANFIS. Their research indicated that these approaches are suitable ways to predict the UCS and E. It also revealed that the prediction accuracy of the ANFIS is the best among all the employed models.
Rukhaiyar and Samadhiya [28] developed a polyaxial strength model for intact sandstone based on an artificial neural network (ANN) to find out the influence of each independent parameter, namely the uniaxial compressive strength (UCS), minor principal stress (σ3), and intermediate principal stress (σ2), on the strength of sandstone, i.e., the major principal stress at failure (σ1). They found that the ANN-based failure model gives the best result amongst all the considered polyaxial strength criteria for the testing dataset. Behzadafshar et al. [29] proposed a new artificial neural network (ANN) model to approximate the elasticity modulus (E) of granite rock samples based on laboratory test results. In their research, rock index tests including point load, P-wave velocity, and Schmidt hammer tests, together with uniaxial compressive strength (UCS) tests, were carried out to prepare a database comprising 62 datasets for the analysis. Based on the sensitivity analysis results for the developed ANN model, P-wave velocity has the greatest effect on the E of the rock samples.
It can be easily understood from the mentioned studies that 1) statistical and soft computing approaches are suitable methods for predicting engineering characteristics of rocks such as the UCS and E; 2) machine learning and soft computing approaches provide more accurate results than statistical methods such as simple and multiple regressions; and 3) combining several models in a proper way achieves a better result than any single model.
The purpose of this paper is to create an ensemble machine learning method for estimating UCS and E, based on existing research showing that the combination of certain machine learning methods can improve their performance. However, the commonly used combination frameworks are typically boosting and bagging. Boosting is vulnerable to overfitting, and neither of these frameworks utilizes all the available data for learning the base models. This paper seeks to utilize the knowledge contained within all the data by developing a stacking ensemble model to improve prediction accuracy. Little work has applied stacking to rock property prediction; Koopialipoor et al. [30] developed a stacking structure employing the MLP, KNN, and RF for predicting the E from other rock properties, namely porosity, Schmidt rebound hardness, P-wave velocity, and point load index. So, in the present research an attempt has been made to examine a stacking ensemble learning method for predicting the UCS and E of travertines and limestones, two major categories of carbonate rocks, which are among the most abundant and common rock types on the Earth's surface and are encountered in many engineering projects worldwide. The contributions of this study can be listed as follows: 1) the engineering properties of 70 carbonate rock samples, including 30 travertines and 40 limestones, are assessed; 2) two stacking ensemble models are developed to predict E and UCS by enhancing some base models, with the parameter tuning of the models done by grid search; 3) a sensitivity analysis is performed to assess the relative importance of the input variables on E and UCS; and 4) extensive numerical experiments are conducted to compare the effectiveness of the proposed method with some popular learning methods. The results confirm that the proposed method outperforms the Support Vector Regressor (SVR), Random Forest (RF), extreme gradient boosting (XGBoost), and Multi-Layer Perceptron (MLP).
2. Materials and experimental studies
The process of providing the necessary rock materials and determining their characteristics was planned in three steps, namely sample selection and specimen preparation, experimental procedures and rock characterization (laboratory investigations), and desk studies. The methodology flowchart of the research is presented in Fig 1. The first step includes operations for selecting suitable and applicable carbonate building stones extracted from quarries and used in many cities of Iran. Then, rock specimens with suitable shapes and dimensions were prepared for the considered laboratory tests. The second step includes a comprehensive laboratory test program for evaluating the engineering characteristics and behaviors of the selected samples. In the third step, the data analysis was performed and the uniaxial compressive strength and modulus of elasticity were predicted using stacking ensemble models.
2.1. Sample selection and specimen preparation
A total of 70 carbonate rock samples, including 30 travertines and 40 limestones, were taken from quarries in different regions of Iran and moved to the engineering geology laboratory. They vary in apparent properties such as color, luster, and surface texture. The travertines are light brown to gray in color, without any veins on the surface, and with cavities filled by light brown to black materials in hand specimens. The limestones are white to cream or gray in color, with small, thin, light to dark veins, and without cavities on their surfaces. In the specimen preparation step, cube-shaped specimens with dimensions of 54×54×135 mm were prepared from the selected samples in a stone cutting workshop before the laboratory tests. The prepared specimens were washed with water to remove dust deposited on the stone surfaces during rock cutting, dried at 105°C in an oven, and weighed on a scale with an accuracy of 0.01 g. Finally, they were tested in dry condition in accordance with ASTM [31] and ISRM [32]. Also, the necessary thin sections were prepared to investigate the petrographic properties of the rock samples.
2.2. Experimental procedures and rock characteristics
Mineralogical and petrographic studies, physical properties tests (dry unit weight and effective porosity), the ultrasonic wave velocity test, and the Brinell hardness test were carried out to determine the considered characteristics.
The microscopy studies on thin sections were done in accordance with ASTM [33] and ISRM [32] to identify the mineral content, texture, and petrographic properties of the rock samples. Fig 2 shows four microscopic images of some tested rocks as representative samples in cross-polarized light (XPL). The travertine samples consist of micrite and sparry calcite. The sparry calcite crystals have grown on the internal walls of cavities, and some of the cavities are completely filled. The limestone samples consist of sparry calcite or micrite. In the samples composed of micrite, the sparry calcite crystals have grown on the internal walls of cavities and some of the cavities are completely filled. There are fractures and veins filled by calcite crystals and iron oxide. Some samples contain fossils.
(Note: SC: Sparry calcite, M: Micrite, C: Cavity, V: Vein, F: Fossil).
Physical properties tests were carried out on the prepared specimens by the regular shape method to determine the dry unit weight (γd) and effective porosity (ne) in accordance with ASTM [31] and ISRM [32]. Four specimens were tested for each rock sample and the four obtained values were averaged; tests that yielded anomalous results were repeated to reduce the testing error. So, a total of 280 specimens were tested in this step to determine the average values of γd and ne. The ultrasonic wave velocity (P-wave velocity) of the selected rock samples was determined in the laboratory using a Proceq digital ultrasonic tester (Model: ND 180; Trade name: Pundit Lab+; Transit time range: 0.1–9999 μs; Energizing pulse: 125, 250, 350, 500 V; Frequency range: 24–500 kHz) in accordance with ASTM [34] and ISRM [32]. The Brinell hardness test was performed based on ASTM [35]. This test method is generally used for materials with a coarse structure and rough surface. In this test, a constant load (F) is applied to a carbide ball during a predetermined time period. Then, the impression created by the ball is measured with an optical system along two perpendicular diameters. The Brinell hardness number is calculated as:
$\mathrm{HBN} = \dfrac{2F}{\pi D_b \left( D_b - \sqrt{D_b^2 - D_i^2} \right)}$ (1)

where HBN is the Brinell hardness number in kgf/mm2, F is the applied load in kgf (F = 1000 kgf), Db is the diameter of the indenter ball (Db = 10 mm), and Di is the diameter of the impression (mm).
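For concreteness, Eq (1) translates directly to code. The following minimal Python sketch (not the authors' code) uses the test constants stated above (F = 1000 kgf, Db = 10 mm); the impression diameter Di is the measured quantity.

import math

def brinell_hardness(Di_mm, F_kgf=1000.0, Db_mm=10.0):
    # Brinell hardness number (kgf/mm^2) from Eq (1)
    return (2.0 * F_kgf) / (math.pi * Db_mm * (Db_mm - math.sqrt(Db_mm**2 - Di_mm**2)))

print(round(brinell_hardness(2.1), 1))  # an impression of 2.1 mm gives HBN ≈ 285.5 kgf/mm^2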
The uniaxial compressive strength test for determining the UCS and E is described as a suggested method by ISRM [36] and ASTM D2938 [37]. In the current research, four prepared specimens from each sample were tested to determine the UCS and E. The two mentioned parameters are calculated from the following equations:
$\mathrm{UCS} = \dfrac{4F}{\pi D^2}$ (2)

$E = \dfrac{\Delta\sigma}{\Delta\varepsilon_a}$ (3)

In these equations, UCS is the uniaxial compressive strength (MPa), F is the maximum force applied to the tested specimen (N), D is the diameter of the tested cylindrical specimen (mm), E is the secant modulus of elasticity (MPa), Δσ is the change of stress applied to the tested specimen (MPa), and Δεa is the corresponding change of axial strain of the specimen during the test. Table 1 summarizes the average values of the obtained engineering properties of the tested rocks. The minimum and maximum values of UCS of the tested rocks are 11.66 and 38.49 MPa, respectively, which are moderate values based on ISRM [32]. The ranges of E, γd, ne, and vp are 2.17–8.03 GPa, 22.01–25.78 kN/m3, 0.37–7.55%, and 3759.79–5347.06 m/s, respectively. In accordance with Anon [38], the tested rocks have very low values of E, moderate to high values of γd, very low to moderate values of ne, and moderate to very high values of vp. The minimum and maximum values of HBN are 271.81 and 975.79 kgf/mm2, respectively, which are high values based on ASTM E10-18 [35].
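As a quick illustration (again, a sketch rather than the authors' code), Eqs (2) and (3) can be evaluated as follows; units follow the text, with F in N and D in mm, so UCS comes out in MPa.

import math

def ucs_mpa(F_newton, D_mm):
    # Eq (2): failure load over the circular cross-sectional area
    return 4.0 * F_newton / (math.pi * D_mm**2)

def secant_modulus_mpa(delta_sigma_mpa, delta_eps_axial):
    # Eq (3): stress change over the corresponding axial strain change
    return delta_sigma_mpa / delta_eps_axial

print(round(ucs_mpa(60_000, 54.0), 1))  # a 54 mm specimen failing at 60 kN gives UCS ≈ 26.2 MPa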
After calculating the required engineering characteristics of the rocks, distribution curves and histograms of the rock properties were produced; they are presented in Fig 3, where the Ishikawa formula is exploited for computing the number of bins [39].
The descriptive statistics of the data are also given in Table 2. Pearson's product-moment correlation coefficient indicates the strength of the linear relationship between the independent and dependent variables [40]. It was computed and is given in Fig 4.
3. Machine learning algorithms
Ensemble methods aggregate several models to enhance their generalizability and robustness. Two common ensemble methods are bagging and boosting, which have been applied to rock property prediction in the literature [27, 41, 42], while the stacking ensemble, a promising ensemble method, has received less attention in related works. Compared to bagging and boosting, the stacking ensemble method has three important characteristics: 1) it fully utilizes the training data, so the model can learn from all samples; 2) training in its first level is done by cross-validation, so the trained model is robust and overfitting during training is avoided; 3) it integrates different types of base learners, thereby taking advantage of their strengths while compensating for their weaknesses. The last point is particularly important, since finding a single model suitable for various datasets is difficult. This section describes the stacking ensemble and the base models exploited in this research.
3.1. Stacking ensemble learning
The stacking ensemble learning model was introduced by Wolpert [43]. Recently, it has been successfully applied in various applications [44–50]. It combines several models in order to enhance accuracy, generalization ability, and robustness. A stacking ensemble model consists of two levels: some base models are learned in the first level and one meta-learner is trained in the second level. The meta-learner's training dataset is an altered version of the original dataset that involves synthetic features depending on the base models' predictions. The training phase of the stacking ensemble is illustrated in Fig 5. The models in the first level predict the output variable in a k-fold cross-validation manner, where one fold is considered as a validation set and the method is learned on the other folds. The prediction process for the first level is shown in Fig 6.
For optimizing the base models' hyperparameters, the grid-search algorithm integrated with k-fold cross-validation is applied. In the first step, the value ranges of the hyperparameters are set, and then the model is trained for all combinations of hyperparameter values. Each combination is evaluated by computing the performance measure in the k-fold cross-validation setting. The best combination is selected to train the corresponding base model on all training data. These steps are shown in Fig 7.
After training the base models on the original dataset, the meta-learner is trained on the synthetic dataset. The details of the stacking ensemble are given in Fig 8. To achieve a suitable stacking ensemble model, the base models should be diverse and have high performance. In this research, we design two stacking ensembles: one for predicting the UCS, and another for estimating the static E of carbonate rocks. Four regressors are studied as base models, two for each stacking ensemble. The base models are SVR, RF, XGBoost, and MLP, since they have been successfully applied to rock property prediction [51]. SVR is a suitable method for dealing with nonlinear problems and has achieved high-quality results when available data are scarce [52]. XGBoost has outperformed other machine learning methods in many challenges [53]. MLP is a well-known and powerful learning method for challenges in rock data, and RF is frequently applied to rock property prediction in the literature [41, 54–56]. These methods are briefly described in the next sections.
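As a minimal sketch of this two-level design, assuming scikit-learn and the xgboost package (the data and hyperparameter values below are placeholders, not the tuned values of Section 4), the ensemble for E could be assembled as follows; StackingRegressor generates the out-of-fold predictions of Fig 5 internally through its cv argument.

import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X_train = rng.random((56, 4))            # placeholder: 56 training samples, 4 scaled features
y_train = rng.random(56) * 8             # placeholder: E in GPa

stack_E = StackingRegressor(
    estimators=[("svr", SVR(kernel="rbf")),                  # first-level base model 1
                ("xgb", XGBRegressor(n_estimators=400))],    # first-level base model 2
    final_estimator=SVR(kernel="rbf"),                       # second-level meta-learner
    cv=5)                                                    # k-fold out-of-fold predictions feed the meta-learner
stack_E.fit(X_train, y_train)
print(stack_E.predict(X_train[:3]))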
3.2. Support vector regression
The SVR is a prominent model for predicting a dependent variable employing statistical learning theory. It is based on the structural risk minimization principle and exploits the kernel trick. Considering m training samples $\{(x_i, y_i)\}_{i=1}^{m}$, where $x_i \in \mathbb{R}^D$ and $y_i \in \mathbb{R}$, SVR estimates the output variable by the following equation:

$f(x) = \langle w, \phi(x) \rangle + b$ (4)

where b is the intercept, $w \in \mathbb{R}^{d_k}$ is the weight vector, $\phi$ is a mapping from the input space to a high-dimensional new space, and $d_k$ is the dimension of the feature space, which is implicitly defined. The notation $\langle \cdot, \cdot \rangle$ indicates the dot product [57]. A kernel function is commonly employed in SVR to transform the input data to a high-dimensional feature space to account for data non-linearity. The Radial Basis Function (RBF) is the most widely used kernel function; it computes the similarity of $x_i$ and $x_j$ by $K(x_i, x_j) = \exp\left(-\gamma \lVert x_i - x_j \rVert^2\right)$, where γ is its parameter.
The optimization problem of SVR consists of two terms, regularization and a loss function, as follows:

$\min_{w,\, b,\, \xi,\, \xi^*} \ \dfrac{1}{2} \lVert w \rVert_2^2 + C \sum_{i=1}^{m} \left( \xi_i + \xi_i^* \right)$ subject to $y_i - \langle w, \phi(x_i) \rangle - b \le \varepsilon + \xi_i$, $\langle w, \phi(x_i) \rangle + b - y_i \le \varepsilon + \xi_i^*$, and $\xi_i, \xi_i^* \ge 0$ (5)

where the ℓ2-norm and the ε-insensitive function are applied as the regularization and loss terms, respectively, and C>0 controls the trade-off between the two terms. ξ and ξ* are slack variables introduced to tolerate deviations beyond ε in the training data. For more details on SVR, refer to Smola [57]. Various SVR variants have been successfully applied for predicting engineering properties of rocks [58, 59].
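A bare-bones RBF-kernel SVR fit in scikit-learn shows how C, γ, and ε of Eq (5) enter in practice; the data and values here are illustrative placeholders, not the tuned settings reported later.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.random((56, 4))                            # placeholder scaled features
y_train = rng.random(56) * 30                            # placeholder UCS values (MPa)

svr = SVR(kernel="rbf", C=10.0, gamma=1.0, epsilon=0.1)  # trade-off, RBF width, insensitivity tube
svr.fit(X_train, y_train)
print(svr.predict(X_train[:3]))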
3.3. Random forest
The random forest is an ensemble of classification and regression trees (CART) with suitable characteristics including 1) supporting nonlinearity, 2) being non-parametric, 3) fast training, 4) random subspaces, and 5) resistance to overfitting. The base tree models are built on randomly selected input features of randomly drawn samples of the original dataset in a bagging manner. This randomness increases the diversity of the base trees. To build each tree, the input space is recursively partitioned into two parts so that the data in each leaf are as pure as possible. Purity in the regression task is commonly defined by the mean squared error measure, where a regression model is learned from the data in each leaf. The predicted value in the regression task is computed by averaging the forecasts of the base learners. The RF has recently been applied in various applications including the prediction of engineering properties of rocks [41, 54, 55]. The number of base trees and their maximum depth are two effective hyperparameters of the RF. Restricting tree depths in the training phase can enhance generalization ability and reduce memory usage.
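A short illustrative fit (scikit-learn, synthetic placeholder data) exposes the two hyperparameters just discussed, the number of trees and their maximum depth:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.random((56, 4))    # placeholder data
y_train = rng.random(56) * 8

rf = RandomForestRegressor(n_estimators=200, max_depth=3, random_state=0)  # bagged CARTs with restricted depth
rf.fit(X_train, y_train)
print(rf.predict(X_train[:3]))   # each prediction is the average over the 200 trees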
3.4. XGBoost
The XGBoost is an effective, scalable method that combines some trees in an iterative boosting manner. It learns tree models like the RF, but differs from the RF in the training details. The ith predicted value is computed as

$\hat{y}_i = \sum_{k} f_k(x_i), \quad f_k \in \mathcal{F}$ (6)

where $f_k$ is an independent tree and $\mathcal{F}$ indicates the space of regression trees. The trees are learned by minimizing the regularized objective

$\mathcal{L} = \sum_{i=1}^{m} l\left( \hat{y}_i, y_i \right) + \sum_{k} \Omega\left( f_k \right), \quad \Omega(f) = \gamma T + \dfrac{1}{2} \lambda \lVert \omega \rVert^2$ (7)

where $l(\cdot,\cdot)$ is a convex loss function that penalizes the deviation of the predicted values $\hat{y}_i$ from the target values $y_i$, and Ω is the regularization term that prevents overfitting; Ω consists of two parts, the number of leaves T and the L2-norm of the leaf weights ω, which control the complexity of the model. The optimization of Eq 7 is done in an additive manner; for more details, please refer to Chen [60]. Three important parameters of the XGBoost are the number of trees, the maximum tree depth, and the learning rate. Since the XGBoost learns each tree to correct the error of the existing sequence of trees, it is prone to overfitting. To prevent overfitting, a weight factor is applied to the correction made by each new tree; this weight factor is the learning rate.
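The three parameters named above map directly onto the xgboost API; the values below are placeholders rather than the grid-search results of Section 4.

import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X_train = rng.random((56, 4))          # placeholder data
y_train = rng.random(56) * 8

xgb = XGBRegressor(n_estimators=400,   # number of boosted trees
                   max_depth=5,        # maximum tree depth
                   learning_rate=0.3)  # shrinkage applied to each new tree's correction
xgb.fit(X_train, y_train)
print(xgb.predict(X_train[:3]))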
3.5. MLP
A neural network is a powerful model for representing both linear and non-linear relations between inputs and outputs. The MLP is one of the most common neural networks; it exploits the backpropagation algorithm for learning its weights. It is a supervised, feedforward network. The training phase of the MLP includes two steps: first, the selection of the neural network architecture, and second, the adjustment of the connection weights. A typical MLP has an input layer and an output layer. It may have one or more hidden layers that extract important features of the input data and are not directly accessible. Each layer contains some neurons, and an activation function is assigned to each neuron. The MLP can learn nonlinear patterns from data. Two well-known weight optimization methods for the MLP are L-BFGS (Limited-Memory Broyden-Fletcher-Goldfarb-Shanno) [61] and Adam (adaptive moment estimation) [62]. L-BFGS belongs to the class of quasi-Newton methods, and Adam is an efficient stochastic optimization method that only needs first-order gradients and has little memory requirement.
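In scikit-learn terms (a sketch with placeholder data), a one-hidden-layer MLP with either solver looks as follows:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.random((56, 4))                 # placeholder data
y_train = rng.random(56) * 30

mlp = MLPRegressor(hidden_layer_sizes=(14,),  # one hidden layer with 14 neurons
                   solver="lbfgs",            # or solver="adam"
                   max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print(mlp.predict(X_train[:3]))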
4. Model development, results and discussions
This part explains the application of the stacking ensemble model for estimating the UCS and static E of carbonate rocks. Section 4.1 explains the settings used in our experiments, Section 4.2 describes the performance measures exploited for assessing the proposed methods, and finally Section 4.3 gives the results and discusses them.
4.1. Settings
The dataset is split randomly into training and test sets, where 20% of the data is considered as the test set and 80% forms the training data. The input data are normalized before applying the machine learning methods to scale all features to the range [0,1]. This step is important to eliminate the effect of the varying ranges of the features. The normalization is done by the following relation:
$\bar{x}_i^d = \dfrac{x_i^d - x_{\min}^d}{x_{\max}^d - x_{\min}^d}$ (8)

where $x_i^d$ is the value of the dth input variable of the ith data point, $x_{\min}^d$ and $x_{\max}^d$ are the minimum and maximum values of the dth input variable, $\bar{x}_i^d$ is the scaled value of $x_i^d$, and D is 4 in our data, which equals the number of independent variables, namely γd, ne, vp, and HBN.
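A sketch of this preprocessing step in scikit-learn follows; fitting the scaler on the training portion only, to avoid leakage into the test set, is a common choice and an assumption here, and the feature magnitudes are placeholders loosely mimicking γd, ne, vp, and HBN.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((70, 4)) * [25.0, 8.0, 5300.0, 900.0]  # placeholder 70-sample dataset
y = rng.random(70) * 30

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = MinMaxScaler()                  # implements Eq (8) per feature
X_train = scaler.fit_transform(X_train)  # learns the min and max of each feature from the training data
X_test = scaler.transform(X_test)        # applies the same bounds to the test data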
4.2. Performance measures
In this paper, the performance of the estimation is presented in terms of the coefficient of determination (R2), Root Mean Squared Error (RMSE), Mean Squared Error (MSE), Mean Absolute Error (MAE), Variance Accounted For (VAF), Index of Scatter (IOS), Index of Agreement (IOA), Mean Absolute Percentage Error (MAPE), Weighted Mean Absolute Percentage Error (WMAPE), Performance Index (PI), and the a20-index. These metrics are frequently employed to assess regression problems [63, 64]. RMSE, MSE, MAE, MAPE, WMAPE, and IOS quantify the prediction error. R2 specifies the appropriateness of the fitted model and lies in the range [0,1]; larger R2 values and smaller MSE, MAE, RMSE, MAPE, WMAPE, and IOS values indicate better performance. The a20-index indicates the proportion of samples, identified by m20, whose predicted values fall within a margin of ±20% deviation from the observed values; a higher a20-index value indicates better predictive accuracy. These measures are defined as follows:
$R^2 = 1 - \dfrac{\sum_{i=1}^{m} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{m} \left( y_i - \bar{y} \right)^2}$ (9)

$\mathrm{RMSE} = \sqrt{\dfrac{1}{m} \sum_{i=1}^{m} \left( y_i - \hat{y}_i \right)^2}$ (10)

$\mathrm{MSE} = \dfrac{1}{m} \sum_{i=1}^{m} \left( y_i - \hat{y}_i \right)^2$ (11)

$\mathrm{MAE} = \dfrac{1}{m} \sum_{i=1}^{m} \left| y_i - \hat{y}_i \right|$ (12)

$\mathrm{VAF} = \left( 1 - \dfrac{\operatorname{var}\left( y - \hat{y} \right)}{\operatorname{var}\left( y \right)} \right) \times 100$ (13)

$\mathrm{IOS} = \dfrac{\mathrm{RMSE}}{\bar{y}}$ (14)

$\mathrm{IOA} = 1 - \dfrac{\sum_{i=1}^{m} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{m} \left( \left| \hat{y}_i - \bar{y} \right| + \left| y_i - \bar{y} \right| \right)^2}$ (15)

$\mathrm{MAPE} = \dfrac{100}{m} \sum_{i=1}^{m} \left| \dfrac{y_i - \hat{y}_i}{y_i} \right|$ (16)

$\mathrm{WMAPE} = \dfrac{\sum_{i=1}^{m} \left| y_i - \hat{y}_i \right|}{\sum_{i=1}^{m} \left| y_i \right|}$ (17)

$a20\text{-index} = \dfrac{m20}{m}$ (18)

where $y_i$ and $\hat{y}_i$ are the observed and predicted values, $\bar{y}$ is the mean of the observed values, and m20 is the number of samples whose ratio of predicted to observed value lies within [0.8, 1.2]. PI is a composite measure computed from R2, VAF, and RMSE following [63, 64].
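These measures can be computed in a few lines of NumPy; the sketch below follows the standard definitions given above (PI, being a composite of the other measures, is omitted), and the arrays in the usage line are placeholders.

import numpy as np

def regression_metrics(y_true, y_pred):
    y, p = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y - p
    mse = np.mean(err**2)
    rmse = np.sqrt(mse)
    return {
        "R2": 1 - np.sum(err**2) / np.sum((y - y.mean())**2),
        "RMSE": rmse, "MSE": mse,
        "MAE": np.mean(np.abs(err)),
        "VAF": (1 - np.var(err) / np.var(y)) * 100,
        "IOS": rmse / y.mean(),
        "IOA": 1 - np.sum(err**2) / np.sum((np.abs(p - y.mean()) + np.abs(y - y.mean()))**2),
        "MAPE": 100 * np.mean(np.abs(err / y)),
        "WMAPE": np.sum(np.abs(err)) / np.sum(np.abs(y)),
        "a20-index": np.mean((p / y >= 0.8) & (p / y <= 1.2)),
    }

print(regression_metrics([10.0, 20.0, 30.0], [11.0, 19.0, 33.0]))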
4.3. Prediction performances
a. Tunning hyperparameters of base models.
In the first stage, we train some state-of-the-art machine learning methods, consisting of XGBoost, RF, SVR, and MLP. Grid search in combination with k-fold cross-validation is applied for the parameter optimization of all base models. The mean MSE is computed for each parameter configuration of the grid search, and the best result on the validation data indicates the best parameter setting. The optimized parameters and their ranges are given in Table 3.
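This tuning loop corresponds to GridSearchCV in scikit-learn; the grid below is illustrative and not the exact ranges of Table 3.

import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.random((56, 4))    # placeholder data
y_train = rng.random(56) * 30

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1], "epsilon": [0.01, 0.1, 1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_squared_error",  # mean MSE over folds picks the best configuration
                      cv=KFold(n_splits=5, shuffle=True, random_state=0))
search.fit(X_train, y_train)
print(search.best_params_, -search.best_score_)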
An MLP model with one hidden layer is also considered in our experiments as a base model. Increasing the number of hidden layers may reduce the error of the model, but it also increases both the network complexity and the training time and can lead to overfitting. The influence of the number of hidden-layer neurons and the weight optimization method is illustrated in Fig 9. As demonstrated in this figure, the number of neurons is more effective for estimating the UCS than for the E; the number of neurons and the weight optimization method were set to 14 and L-BFGS, respectively. For predicting the E, the Adam weight optimization method yields more stable results than L-BFGS. With this optimizer the number of neurons has little impact on the MSE; our experiments select 17 neurons.
Effects of the number of neurons in hidden layer and the solver of weight optimization in the MLP on MSE of predicting a) UCS, and b) E.
For the random forest base model, the effect of the parameters is shown in Fig 10. The number of estimators is not very effective on the MSE for predicting E, and with a maximum tree depth of less than 3, overfitting does not occur, based on the figure. A forest of 200 trees with a maximum depth of 3 is the best configuration for predicting E. A similar pattern holds for predicting UCS, where 400 trees with a maximum depth of 6 were found by grid search.
Effects of parameters of RF on MSE of forecasting a) UCS, and b) E.
The impact of the SVR parameters on the MSE is also illustrated in Fig 11. In both parts of this figure, two parameters are fixed at their best values and only the mean of the negative mean squared error is plotted against the remaining parameter. As shown, the SVR is sensitive to all parameters. C = 10 and γ = 1 are the best parameter settings for predicting both UCS and E. The value of ε is set to 1 and 0.1 for predicting UCS and E, respectively.
Effect of parameters of the SVR on MSE of predicting a) UCS, and b) E.
The influence of the XGBoost parameters is also assessed and the results are given in Fig 12. As with the SVR parameters, one parameter value is changed at a time while the other parameter values are set to their best values. The suitable values for predicting E were 0.3, 5, and 400 for the learning rate, maximum depth, and number of tree predictors, respectively. Increasing the values of all parameters leads to higher prediction error; this effect is observed most strongly for the number of predictors, since increasing the number of predictors increases the model complexity and causes overfitting. A similar trend holds for predicting UCS, where the best values are 0.01, 10, and 800 for the learning rate, maximum depth, and number of tree predictors.
Effects of parameters of the XGBoost on MSE of predicting a) UCS, and b) E.
b. Testing the stacking ensemble model.
We constructed two stacking ensembles: one for predicting E, exploiting the SVR and XGBoost as base models, and another for predicting the UCS, using the RF and MLP as the first stage's models. The meta-learners are the SVR and MLP for predicting the E and UCS, respectively.
After the hyperparameter tuning of the base models, the hyperparameters of the meta-learners were also tuned by grid search, and then two stacking ensemble models were trained for predicting the E and UCS. The parameter ranges of the meta-learners also follow the values given in Table 3. The results of the grid search are illustrated in Fig 13. C = 10, epsilon = 0.1, and gamma = 0.01 are the best values of the SVR parameters. The number of neurons in the MLP's hidden layer is set to 11.
Effects of parameters of a) SVR, and b) MLP as the meta-learner for predicting the UCS, and E, respectively.
The obtained performance results for predicting the UCS by the four base models and the proposed stacking ensemble are listed in Table 4. The suitability of the proposed model was confirmed by the MAE, MSE, RMSE, R2, IOA, and IOS. The results confirm that the stacking ensemble is superior to all base models.
Table 5 gives the obtained performance metrics of the developed stacking ensemble and the four base learners applied for predicting the E on the testing data. It is clear from the obtained results that the stacking ensemble outperforms the base models in terms of MSE, MAE, RMSE, R2, VAF, MAPE, PI, a20-index, WMAPE, and IOS.
For more clarity, the predicted and experimental values of the UCS and E obtained by the various models studied in this research in both the training and testing phases are displayed in Figs 14 and 15, respectively. Furthermore, the scatter plots of actual versus predicted values for the four base models and the stacking ensemble model are shown in Figs 16 and 17. The determination coefficients of 0.83 and 0.91 confirm the suitability of the proposed method in comparison with the base models. The results show that the predicted values of the stacking ensembles are closer to the observed values than the outputs of the other models.
Measured and predicted UCS (MPa) for the rocks by the XGBoost, MLP, SVR, RF and stacking ensemble in the a) training phase and b) testing phase.
Measured and predicted E (GPa) for the rocks by the XGBoost, MLP, SVR, RF and stacking ensemble in the a) training phase and b) testing phase.
Scatter plots of the output of UCS for the a) XGBoost, b) MLP, c) SVR, d) RF, and e) Stacking Ensemble.
Scatter plots of the output of E for the a) XGBoost, b) MLP, c) SVR, d) RF, and e) Stacking Ensemble.
For further analysis, the training times of the studied methods are compared in Table 6. The reported training times were obtained on a workstation equipped with an Intel(R) Xeon(R) W-2150B CPU operating at 3 GHz, complemented by 64 GB of RAM.
4.4. Sensitivity analysis
Sensitivity analysis assesses the efficacy of the input parameters in forecasting the output parameter(s). This examination serves to ascertain the parameter that exerts the greatest influence on the prediction. Moreover, it can be employed to construct optimal AI models by disregarding inconsequential input parameters. Such sensitivity analyses may take on either a linear or a nonlinear form. Numerous scholars have employed the cosine amplitude method (CAM) in conducting sensitivity analysis [64, 65]. CAM determines the sensitivity of the input variables using the following equation:

$r_{ij} = \dfrac{\sum_{k=1}^{m} x_{ik}\, x_{jk}}{\sqrt{\sum_{k=1}^{m} x_{ik}^2 \sum_{k=1}^{m} x_{jk}^2}}$ (19)

where Xi is the ith input and Xj is the jth output. The computed CAM values of the four input variables with respect to UCS (MPa) and E (GPa) are illustrated in Fig 18. As shown, ne has the lowest influence on both outputs, while the other three input parameters have a relatively high impact.
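Eq (19) amounts to a column-wise cosine similarity between each input and the output; a minimal NumPy sketch (with placeholder data and hypothetical column names) follows.

import numpy as np

def cam_strength(X, y):
    # Cosine amplitude method, Eq (19): r_ij of each input column against the output
    X, y = np.asarray(X, float), np.asarray(y, float)
    num = X.T @ y
    den = np.sqrt((X**2).sum(axis=0) * (y**2).sum())
    return num / den

rng = np.random.default_rng(0)
X, y = rng.random((70, 4)), rng.random(70) * 30   # placeholders for the 4 inputs and UCS
print(dict(zip(["gamma_d", "n_e", "v_p", "HBN"], cam_strength(X, y).round(3))))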
RSE of input variables on the a) UCS (MPa), and b) E (GPa).
4.5. Discussion
In spite of the high generalization ability and good accuracy of the base models studied in this research, applying them to the prediction of rock properties has some imperfections. The instances may be highly nonlinear, and the SVR applies the kernel function only once to transform the sample data to the high-dimensional feature space, while a single mapping cannot guarantee finding the optimal separable feature space. Exploiting the MLP at only one level can lead to getting stuck in a local optimum. The RF and XGBoost are based on bagging and boosting, respectively, and suffer from their limitations. Both of them are ensembles of trees and do not exploit the advantages of other model types.
In the current study, it has been verified that the combination of certain models within the stacking ensemble framework can enhance the achieved results according to various metrics. The proposed approach exhibits significant improvements in RMSE, MSE, MAE, VAF, IOS, MAPE, WMAPE, PI, and a20-index when estimating E. This suggests that the proposed method can serve as a viable alternative to other methods for estimating E, as it exhibits low error and high explanatory power in relation to the variance in E. Moreover, the stacking ensemble method also improves the RMSE, MSE, MAE, and IOS when predicting UCS. This indicates that the proposed method effectively captures the variability in UCS and produces predictions that are closer to the true values compared to the base models. However, it is worth noting that the proposed method does not yield satisfactory results in terms of VAF, MAPE, PI, WMAPE, and a20-index when predicting UCS. These metrics indicate that the proposed method explains less variance and has wider prediction intervals compared to the other models. Therefore, further research is needed to address this limitation and explore the impact of marginal data on the estimation of UCS.
Additionally, the comparison of training times between the proposed method and the other methods supports its suitability from a time perspective. It is important to acknowledge that the proposed method is an ensemble method, which logically results in longer training times compared to single methods such as the SVR and MLP. Compared with XGBoost, however, the proposed method demonstrates significantly faster training. On the other hand, the RF exhibits the shortest training time owing to its composition of regression trees that can be learned quickly. Furthermore, learning the first-level models in parallel can also contribute to reducing the overall training time.
5. Conclusions
The most important results and findings of the research can be concluded as follows:
1) The laboratory experiments on the selected carbonate rocks indicated that they are moderate in strength with low values of E. They have high surface hardness based on the Brinell hardness test. These rocks have moderate to high γd, very low to moderate ne, and moderate to very high vp, relating to their constituent materials and the presence of cavities, veins, and fractures in their textures and structures.
2) The rocks proved very suitable for the prediction of their engineering properties using machine learning and soft computing approaches. Therefore, two stacking ensemble methods were developed for predicting their UCS and E values from the results of four non-destructive simple laboratory tests, namely γd, ne, vp, and HBN, where two base models were considered in the first level of each stacking ensemble.
3) The performance of the developed methods is confirmed in terms of MAE, MSE, RMSE, and R2. These measures were calculated as 1.657, 3.867, 1.967, and 0.909 for predicting UCS, and 0.483, 0.386, 0.621, and 0.831 for predicting E, respectively. The values of 84.447, 0.130, 0.870, 0.786, 0.115, 1.000, and 0.029 for VAF, MAPE, PI, a20-index, WMAPE, IOA, and IOS further validate the superiority of the proposed method over the base methods as well as the bagging and boosting methods.
4) The obtained results confirm that exploiting the stacking ensemble reduces the prediction error in comparison with the SVR and MLP as two popular single models. This is because the fitting ability of individual models can be enhanced by combining them.
5) Further, our experiments confirm that the stacking ensemble results are superior to those of the RF and XGBoost, two widely used and successful ensemble models based on the well-known ensemble methods of bagging and boosting. This is because the stacking ensemble combines various high-quality models and can correct their errors.

6) It was observed that the suggested stacking ensemble did not improve the prediction error of UCS on a few performance metrics, which suggests that further research on this topic may be necessary. It is worth mentioning that although applying a stacking ensemble is more time-consuming than using the base models alone, given the amount of data, the time spent in practice is quite reasonable.
7) Applying suitable stacking ensembles is proposed for predicting other rock properties.
Acknowledgments
The authors would like to thank Dr. Reza Ahari-Pour for his helps to carry out the microscopic study of thin sections. They also acknowledge the official supports of the Engineering Geology and Rock Mechanics Laboratory of Damghan University for performing all laboratory tests of the research.
References
- 1. Gokceoglu C. A fuzzy triangular chart to predict the uniaxial compressive strength of the Ankara agglomerates from their petrographic composition. Engineering Geology. 2002; 66: 39–51. https://doi.org/10.1016/S0013-7952(02)00023-6.
- 2. Kahraman S, Gunaydin O, Fener M. The effect of porosity on the relation between uniaxial compressive strength and point load index. International Journal of Rock Mechanics and Mining Sciences. 2005; 42: 584–9. https://doi.org/10.1016/j.ijrmms.2005.02.004.
- 3. Fener M, Kahraman S, Bilgil A, Gunaydin O. A comparative evaluation of indirect methods to estimate the compressive strength of rocks. Rock Mechanics and Rock Engineering. 2005; 38(4): 329–43.
- 4. Buyuksagisa IS, Goktan RM. The effect of Schmidt hammer type on uniaxial compressive strength prediction of rock. International Journal of Rock Mechanics and Mining Sciences. 2007; 44: 299–307. https://doi.org/10.1016/j.ijrmms.2006.07.008.
- 5. Tiryaki B. Predicting intact rock strength for mechanical excavation using multivariate statistics, artificial neural networks, and regression trees. Journal of Engineering Geology. 2008; 99: 51–60. https://doi.org/10.1016/j.enggeo.2008.02.003.
- 6. Gokceoglu C, Sonmez H, Zorlu K. Estimating the uniaxial compressive strength of some clay bearing rocks selected from Turkey by nonlinear multivariable regression and rule-based fuzzy models. Expert Systems. 2009; 26(2): 176–90. https://doi.org/10.1111/j.1468-0394.2009.00475.x.
- 7. Cevik A, Akcapinar Sezer E, Cabalar AF, Gokceoglu C. Modelling of the uniaxial compressive strength of some clay-bearing rocks using neural network. Applied Soft Computing. 2011; 11(2): 2587–94. https://doi.org/10.1016/j.asoc.2010.10.008.
- 8. Yagiz S, Sezer EA, Gokceoglu C. Artificial neural networks and nonlinear regression techniques to assess the influence of slake durability cycles on the prediction of uniaxial compressive strength and modulus of elasticity for carbonate rocks. International Journal for Numerical and Analytical Methods in Geomechanics. 2012; 36: 1636–1650. https://doi.org/10.1002/nag.1066.
- 9. Grima MA, Babuska R. Fuzzy model for the prediction of unconfined compressive strength of rock samples. International Journal of Rock Mechanics and Mining Sciences. 1999; 36: 339–49. https://doi.org/10.1016/S0148-9062(99)00007-8.
- 10. Horsrud P. Estimating mechanical properties of shale from empirical correlations. SPE Drilling Completion. 2001; 16: 68–73. https://doi.org/10.2118/56017-PA.
- 11. Singh VK, Singh D, Singh TN. Prediction of strength properties of some schistose rocks from petrographic properties using artificial neural networks. International Journal of Rock Mechanics and Mining Sciences. 2001; 38: 269–84. https://doi.org/10.1016/S1365-1609(00)00078-2.
- 12. Gokceoglu C, Yesilnacar E, Sonmez H, Kayabasi A. A neuro-fuzzy model for modulus of deformation of jointed rock masses. Computers and Geotechnics. 2004; 31: 375–83. https://doi.org/10.1016/j.compgeo.2004.05.001.
- 13. Gokceoglu C, Zorlu K. A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock. Journal Engineering Applications of Artificial Intelligence. 2004; 17: 61–72. https://doi.org/10.1016/j.engappai.2003.11.006.
- 14. Sonmez H, Tuncay E, Gokceoglu C. Models to predict the uniaxial compressive strength and the modulus of elasticity for Ankara Agglomerate. International Journal of Rock Mechanics and Mining Sciences. 2004; 41: 717–29. https://doi.org/10.1016/j.ijrmms.2004.01.011.
- 15. Sonmez H, Gokceoglu C, Nefeslioglu HA, Kayabasi A. Estimation of rock modulus: For intact rocks with an artificial neural network and for rock masses with a new empirical equation. International Journal of Rock Mechanics and Mining Sciences. 2006; 43: 224–35. https://doi.org/10.1016/j.ijrmms.2005.06.007.
- 16. Tutmez B, Tercan AE. Spatial estimation of some mechanical properties of rocks by fuzzy modeling. Computers and Geotechnics. 2007; 34(1): 10–18. https://doi.org/10.1016/j.compgeo.2006.09.005.
- 17. Zorlu K, Gokceoglu C, Ocakoglu F, Nefeslioglu HA, Acikalin S. Prediction of uniaxial compressive strength of sandstones using petrography-based models. Engineering Geology. 2008; 96: 141–58. https://doi.org/10.1016/j.enggeo.2007.10.009.
- 18. Beiki M, Majdi A, Givshad AD. Application of genetic programming to predict the uniaxial compressive strength and elastic modulus of carbonate rocks. International Journal of Rock Mechanics and Mining Sciences. 2013; 63: 159–69. https://doi.org/10.1016/j.ijrmms.2013.08.004.
- 19. Aboutaleb S, Behnia M, Bagherpour R, Bluekian B. Using non-destructive tests for estimating uniaxial compressive strength and static Young’s modulus of carbonate rocks via some modeling techniques. Bulletin of Engineering Geology and the Environment. 2018; 77(4): 1717–28. https://doi.org/10.1007/s10064-017-1043-2.
- 20. Rezaei M, Asadizadeh M. Predicting unconfined compressive strength of intact rock using new hybrid intelligent models. Journal of Mining and Environment. 2020; 11(1): 231–46.
- 21. Shahani NM, Zheng X, Liu C, Li P, Hassan FU. Application of soft computing methods to estimate uniaxial compressive strength and elastic modulus of soft sedimentary rocks. Arabian Journal of Geosciences. 2022; 15(5): 1–9. https://doi.org/10.1007/s12517-022-09671-6.
- 22. Wang K, Pan H, Fujii Y. Study on energy distribution and attenuation of CO2 fracturing vibration from coal-like material in a new test platform. Fuel. 2024; 356: 129584. https://doi.org/10.1016/j.fuel.2023.129584.
- 23. Bahmed IT, Khatti J, Grover KS. Hybrid soft computing models for predicting unconfined compressive strength of lime stabilized soil using strength property of virgin cohesive soil. Bulletin of Engineering Geology and the Environment. 2024; 83(1): 46. https://doi.org/10.1007/s10064-023-03537-1.
- 24. Ghasemi E, Kalhori H, Bagherpour R, Yagiz S. Model tree approach for predicting uniaxial compressive strength and Young’s modulus of carbonate rocks. Bulletin of Engineering Geology and the Environment. 2018; 77(1): 331–43. https://doi.org/10.1007/s10064-016-0931-1.
- 25. Madhubabu N, Singh PK, Kainthola A, Mahanta B, Tripathy A, Singh TN. Prediction of compressive strength and elastic modulus of carbonate rocks. Measurement. 2016; 88: 202–13. https://doi.org/10.1016/j.measurement.2016.03.050.
- 26. Khan NM, Cao K, Yuan Q, Bin Mohd Hashim MH, Rehman H, Hussain S, et al. Application of Machine Learning and Multivariate Statistics to Predict Uniaxial Compressive Strength and Static Young’s Modulus Using Physical Properties under Different Thermal Conditions. Sustainability. 2022; 14(16): 9901. https://doi.org/10.3390/su14169901.
- 27. Shahani NM, Kamran M, Zheng X, Liu C, Guo X. Application of gradient boosting machine learning algorithms to predict uniaxial compressive strength of soft sedimentary rocks at Thar Coalfield. Advances in Civil Engineering. 2021; 1–19. https://doi.org/10.1155/2021/2565488.
- 28. Rukhaiyar S, Samadhiya NK. A polyaxial strength model for intact sandstone based on Artificial Neural Network. International Journal of Rock Mechanics and Mining Sciences. 2017; 95: 26–47. https://doi.org/10.1016/j.ijrmms.2017.03.012.
- 29. Behzadafshar K, Sarafraz ME, Hasanipanah M, Mojtahedi SF, Tahir MM. Proposing a new model to approximate the elasticity modulus of granite rock samples based on laboratory tests results. Bulletin of Engineering Geology and the Environment. 2019; 78: 1527–36. https://doi.org/10.1007/s10064-017-1210-5.
- 30. Koopialipoor M, Asteris PG, Mohammed AS, Alexakis DE, Mamou A, Armaghani DJ. Introducing stacking machine learning approaches for the prediction of rock deformation. Transportation Geotechnics. 2022; 34: 100756. https://doi.org/10.1016/j.trgeo.2022.100756.
- 31. ASTM. Annual Book of ASTM Standards, Soil and Rock, Construction. V. 8, Section 4. West Conshohocken, PA. 950. 1996a.
- 32. ISRM. The Blue Book: The Complete ISRM Suggested Methods for Rock Characterization, Testing and Monitoring, 1974–2006. Compilation arranged by the ISRM Turkish National Group, Ankara, Turkey; Ulusay R. and Hudson J.A., Eds., Kazan Offset Press, Ankara. 2007.
- 33. ASTM. Standard guide for petrographic examination of dimension Stone (C1721). Book Standards, Vol. 04.07. 2009.
- 34. ASTM. Standard test method for laboratory determination of pulse velocities and ultrasonic elastic constants of rock. Designation: D2845–D2895. 1996b.
- 35. ASTM E10-18. Standard Test Method for Brinell Hardness of Metallic Materials. ASTM International, West Conshohocken. 2018.
- 36. ISRM. Suggested methods for determining the uniaxial compressive strength and deformability of rock materials. International Journal of Rock Mechanics and Mining Sciences and Geomechanics Abstract. 1979; 16, 135–40.
- 37. ASTM. Standard test method for unconfined compressive strength of intact rock core specimens. ASTM standards on disc 04.08; Designation D2938. 1995.
- 38. Anon OH. Classification of rocks and soils for engineering geological mapping. Part 1: rock and soil materials. Bull Int Assoc Eng Geol. 1979; 19(1): 364–437.
- 39. Ishikawa K. Guide to quality control: Quality Resources. 1986.
- 40. Khatti J, Grover K. A study of relationship among correlation coefficient, performance, and overfitting using regression analysis. Int. J. Sci. Eng. Res. 2022; 13: 1074–85.
- 41. Matin S, Farahzadi L, Makaremi S, Chelgani SC, Sattari G. Variable selection and prediction of uniaxial compressive strength and modulus of elasticity by random forest. Applied Soft Computing. 2018; 70: 980–7. https://doi.org/10.1016/j.asoc.2017.06.030.
- 42. Nasiri H, Homafar A, Chelgani SC. Prediction of uniaxial compressive strength and modulus of elasticity for Travertine samples using an explainable artificial intelligence. Results in Geophysical Sciences. 2021; 8: 100034. https://doi.org/10.1016/j.ringps.2021.100034.
- 43. Wolpert DH. Stacked generalization. Neural networks. 1992; 5(2): 241–59. https://doi.org/10.1016/S0893-6080(05)80023-1.
- 44. Viswanathan V, Rajani NF, Bentor Y, Mooney R, editors. Stacked ensembles of information extractors for knowledge-base population. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2015.
- 45. Wang Y, Wang D, Geng N, Wang Y, Yin Y, Jin Y. Stacking-based ensemble learning of decision trees for interpretable prostate cancer detection. Applied Soft Computing. 2019; 77: 188–204. https://doi.org/10.1016/j.asoc.2019.01.015.
- 46. Rajadurai H, Gandhi UD. A stacked ensemble learning model for intrusion detection in wireless network. Neural computing and applications. 2020; 34: 15387–15395. https://doi.org/10.1007/s00521-020-04986-5.
- 47. Cui S, Yin Y, Wang D, Li Z, Wang Y. A stacking-based ensemble learning method for earthquake casualty prediction. Applied Soft Computing. 2021; 101, 107038. https://doi.org/10.1016/j.asoc.2020.107038.
- 48. Dong Y, Zhang H, Wang C, Zhou X. Wind power forecasting based on stacking ensemble model, decomposition and intelligent optimization algorithm. Neurocomputing. 2021; 462: 169–84. https://doi.org/10.1016/j.neucom.2021.07.084.
- 49. Martín J, Sáez JA, Corchado E. On the suitability of stacking-based ensembles in smart agriculture for evapotranspiration prediction. Applied Soft Computing. 2021; 108: 107509.
- 50. Ghaemi A, Safari A, Afsharirad H, Shayeghi H. Situational awareness and deficiency warning system in a smart distribution network based on stacking ensemble learning. Applied Soft Computing. 2022; 128: 109427. https://doi.org/10.1016/j.asoc.2022.109427.
- 51. Cao J, Gao J, Nikafshan Rad H, Mohammed AS, Hasanipanah M, Zhou J. A novel systematic and evolved approach based on XGBoost-firefly algorithm to predict Young’s modulus and unconfined compressive strength of rock. Engineering with Computers. 2021; 1–17. https://doi.org/10.1007/s00366-020-01241-2.
- 52. Han H, Shi B, Zhang L. Prediction of landslide sharp increase displacement by SVM with considering hysteresis of groundwater change. Engineering Geology. 2021; 280: 105876. https://doi.org/10.1016/j.enggeo.2020.105876.
- 53. Dhaliwal SS, Nahid AA, Abbas R. Effective intrusion detection system using XGBoost. Information. 2018; 9(7): 149. https://doi.org/10.3390/info9070149.
- 54. Zhang J, Ma G, Huang Y, Aslani F, Nener B. Modelling uniaxial compressive strength of lightweight self-compacting concrete using random forest regression. Construction and Building Materials. 2019; 210: 713–719. https://doi.org/10.1016/j.conbuildmat.2019.03.189.
- 55. Han Q, Gui C, Xu J, Lacidogna G. A generalized method to predict the compressive strength of high-performance concrete by improved random forest algorithm. Construction and Building Materials. 2019; 226: 734–42. https://doi.org/10.1016/j.conbuildmat.2019.07.315.
- 56. Barzegar R, Sattarpour M, Deo R, Fijani E, Adamowski J. An ensemble tree-based machine learning model for predicting the uniaxial compressive strength of travertine rocks. Neural Computing and Applications. 2020; 32(13): 9065–80. https://doi.org/10.1007/s00521-019-04418-z.
- 57. Smola AJ, Schölkopf B. A tutorial on support vector regression. Statistics and computing. 2004; 14(3): 199–222. https://doi.org/10.1023/B:STCO.0000035301.49549.88.
- 58. Gupta D, Natarajan N. Prediction of uniaxial compressive strength of rock samples using density weighted least squares twin support vector regression. Neural Computing and Applications. 2021; 33(22): 15843–50. https://doi.org/10.1007/s00521-021-06204-2.
- 59. Abbaszadeh Shahri A, Maghsoudi Moud F, Mirfallah Lialestani SP. A hybrid computing model to predict rock strength index properties using support vector regression. Engineering with Computers. 2020; 1–16. https://doi.org/10.1007/s00366-020-01078-9.
- 60. Chen T, Guestrin C, editors. XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016. https://doi.org/10.1145/2939672.2939785.
- 61. Liu DC, Nocedal J. On the limited memory BFGS method for large scale optimization. Mathematical programming. 1989; 45(1): 503–28. https://doi.org/10.1007/BF01589116.
- 62. Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.
- 63. Hosseini S, Khatti J, Taiwo BO, Fissha Y, Grover KS, Ikeda H, et al. Assessment of the ground vibration during blasting in mining projects using different computational approaches. Scientific Reports. 2023; 13(1): 18582. https://doi.org/10.1038/s41598-023-46064-5. pmid:37903881
- 64. Khatti J, Grover KS. A scientometrics review of soil properties prediction using soft computing approaches. Archives of Computational Methods in Engineering. 2023; 1–35. https://doi.org/10.1007/s11831-023-10024-z.
- 65. Khatti J, Samadi H, Grover KS. Estimation of settlement of pile group in clay using soft computing techniques. Geotechnical and Geological Engineering. 2023; 1–32. https://doi.org/10.1007/s10706-023-02643-x.