Body fat prediction through feature extraction based on anthropometric and laboratory measurements

Obesity, associated with having excess body fat, is a critical public health problem that can cause serious diseases. Although a range of techniques for body fat estimation have been developed to assess obesity, these typically involve high-cost tests requiring special equipment. Thus, the accurate prediction of body fat percentage based on easily accessed body measurements is important for assessing obesity and its related diseases. By considering the characteristics of different features (e.g. body measurements), this study investigates the effectiveness of feature extraction for body fat prediction. It evaluates the performance of three feature extraction approaches by comparing four well-known prediction models. Experimental results based on two real-world body fat datasets show that the prediction models perform better when feature extraction is incorporated, in terms of the mean absolute error, standard deviation, root mean square error and robustness. These results confirm that feature extraction is an effective pre-processing step for predicting body fat. In addition, statistical analysis confirms that feature extraction significantly improves the performance of prediction methods. Moreover, increasing the number of extracted features yields further, albeit slight, improvements to the prediction models. The findings of this study provide a baseline for future research in related areas.


Introduction
Obesity, characterised by excess body fat, is a medical problem that increases one's risk of other diseases and health issues, such as cardiovascular diseases, diabetes, musculoskeletal disorders, depression and certain cancers [1][2][3]. These diseases can escalate the spiralling economic and social costs borne by nations [4]. Conversely, having extremely low body fat is also a significant risk factor for infection in children and adolescents [5], and it may cause pubertal delay [6], osteoporosis [7] and surgical complications [8]. Thus, the accurate prediction of both excess and low body fat is critical to identifying possible treatments, which would prevent serious health problems. Although a huge volume of medical data is available from sensors, electronic medical health records, smartphone applications and insurance records, analysing the data is difficult [9]. There are often too many measurements (features), leading to the curse of dimensionality [10] from a data analytics viewpoint. With a relatively small number of patient samples but a large number of disease measurements, it is very challenging to train a highly accurate prediction model [11]. In addition, redundant, irrelevant or noisy features may further hinder the prediction model's performance [12].
Feature extraction, as an important tool in data mining for data pre-processing, has been applied to reduce the number of input features by creating new, more representative combinations of features [13]. This process reduces the number of features without leading to significant information loss [14]. In this study, three widely used feature extraction methods are utilised to reduce features. Specifically, by analysing large interrelated features, Factor Analysis (FA) can be used to extract the underlying factors (latent features) [15]. It is able to identify latent factors that adequately predict a dataset of interest. Unlike FA, which assumes there is an underlying model, Principal Component Analysis (PCA) is a descriptive feature reduction method that applies an optimal set of derived features, extracted from the original features, for model training [16]. PCA data projection concerns only the variances between samples and their distribution. Independent Component Analysis (ICA), a technique that assumes the data to be the linear mixtures of non-Gaussian independent sources [17], is widely used in blind source separation applications [18].
Feature extraction has been widely used in the medical area to map redundant, relevant and irrelevant features into a smaller set of features from the original data [19,20]. For example, Das et al. [21] applied feature extraction methods to extract significant features from the raw data before using an Artificial Neural Network (ANN) model for medical disease classification. Their results showed that feature extraction methods could increase the accuracy of diagnosis. Tran et al. [22] proposed an improved FA method for cancer subtyping and risk prediction with good results. Sudharsan and Thailambal [23] applied PCA to pre-process the experimental datasets used for predicting Alzheimer's disease. Their results showed that applying PCA for pre-processing could improve the precision of the prediction model. In the work of Franzmeier et al. [24], ICA was utilised to extract features from cross-sectional data for connectivity-based prediction of tau spreading in Alzheimer's disease with impressive results.
In addition, machine learning methods have been increasingly applied to solve body fat prediction problems [25]. Shukla and Raghuvanshi [26] showed that the ANN model is effective for estimating the body fat percentage using anthropometric data in a non-diseased group. Kupusinac et al. [27] also employed ANNs for body fat prediction and achieved high prediction accuracy. Keivanian et al. [28,29] considered a weighted sum of body fat prediction errors and the ratio of features, and optimised the prediction using a metaheuristic search-based feature selection-Multi-Layer Perceptron (MLP) model (MLP is a type of ANN). Chiong et al. [30] proposed an improved relative-error Support Vector Machine (SVM) for body fat prediction with promising results. Fan et al. hybridised a fuzzy-weighted operation and Gaussian kernel-based machine learning models to predict the body fat percentage, while Uçar et al. [31] combined a few machine learning methods (e.g. ANN and SVM) for the same purpose, and their models achieved satisfactory predictions.
In this study, we apply FA, PCA and ICA to extract critical features from the available features, using four machine learning methods-MLP, SVM, Random Forest (RF) [32], and eXtreme Gradient Boosting (XGBoost) [33]-to predict the body fat percentage. We consider five metrics, that is, the mean absolute error (MAE), standard deviation (SD), root mean square error (RMSE), robustness (MAC) and efficiency, in the evaluation process. We use experimental results based on real-world body fat datasets to validate the effectiveness of feature extraction for body fat prediction. One of the datasets is from the StatLib, based on body circumference measurements [34]; the other dataset is from the National Health and Nutrition Examination Survey (NHANES) based on physical examinations [35]. In addition, we employ the Wilcoxon rank-sum test [36] to validate whether the prediction accuracy based on feature extraction improves significantly or not. The motivation of this study is to assess and compare different feature extraction methods for body fat prediction as well as provide a baseline for future research in related areas. It is worth pointing out that the results presented here are new in the context of body fat prediction. We also explore the optimal number of features used for each of the feature extraction methods while balancing accuracy and efficiency.
The rest of this paper is organised as follows: Section 2 briefly introduces the feature extraction methods and prediction models. In Section 3, experimental results based on the real-world body fat datasets are provided; specifically, the performance measures are first described, and then the experimental results of feature extraction for the prediction of body fat percentage are discussed. Lastly, Section 4 concludes this study and highlights some future research directions.

Methods
In this section, we first discuss three widely used feature extraction methods: FA, PCA and ICA. Then, we present four well-known machine learning algorithms-MLP, SVM, RF and XGBoost.

Feature extraction methods
Feature extraction methods are widely used in data mining for data pre-processing [37]. They can reduce the number of input features without incurring much information loss [38]. In doing so, they alleviate the overfitting of prediction models by removing redundant, irrelevant or noisy measurements/features. In addition, with fewer misleading features, both model accuracy and computation time can be further improved.
2.1.1 Factor analysis. This widely used statistical method for feature extraction is an exploratory data analysis method. FA can be used to replace a large number of observable features with a smaller set of latent features (factors) without losing much information [39]. Each latent feature describes the relationships between its corresponding observed features. Since a factor cannot be directly measured with a single feature, it is measured through the relationships in a set of common features, subject to two requirements: (a) the minimum number of factors is used to capture the maximum variability in the data, and (b) the information overlap among the factors is minimised. The extraction proceeds as follows: (1) the first latent factor extracts the most common variance between features; (2) with the variance explained by the first factor removed, the second factor extracts the most variance among the remaining features; and (3) steps (1) and (2) are repeated until the remaining features have been examined. FA is very helpful for reducing features in a dataset where a large number of features can be represented by a smaller number of latent features. An example of the relationship between a factor and its observed features is given in Fig 1, in which p denotes the number of observed features. If the model has k latent features, then the FA model is given in Eq 1. Generally, FA calculates a correlation matrix based on the correlation coefficient to determine the relationship for each pair of features. Then, the factor loadings are analysed to check which features load onto which factors, where the loadings can be estimated using maximum likelihood [40].
x_i = \sum_{r=1}^{k} w_{ir} f_r + e_i, \quad i = 1, \dots, p, \qquad (1)

where \{\{w_{ir}\}_{i=1}^{p}\}_{r=1}^{k} are the factor loadings; that is, w_{ir} is the loading of the ith variable on the rth factor (similar to a weight, or the strength of the correlation between the feature and the factor) [41], f_r is the rth latent factor, and e_i is the error term, which denotes the variance in each feature that is unexplained by the factors.
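As a concrete illustration, the FA model of Eq 1 can be fitted with scikit-learn's FactorAnalysis; the synthetic data and the choice of k = 2 below are illustrative assumptions, not the paper's datasets.

```python
# Sketch: extracting k = 2 latent factors from p = 6 observed features.
# The data is synthetic; shapes, not values, are the point of the example.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))          # 100 samples, 6 observed features

fa = FactorAnalysis(n_components=2, random_state=0)
Z = fa.fit_transform(X)                # latent factor scores, one row per sample

assert Z.shape == (100, 2)             # samples mapped onto the k factors
assert fa.components_.shape == (2, 6)  # factor loadings w_ir of Eq 1
```

The `components_` matrix holds the estimated loadings, i.e. how strongly each observed feature correlates with each latent factor.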

Principal component analysis.
PCA is a very useful tool for reducing the dimensionality of a dataset, especially when the features are interrelated [42]. This nonparametric method uses an orthogonal transformation to convert a set of features into a smaller set of features termed principal components. Using a covariance matrix, we are able to measure the association of each feature with the other features. To decompose the covariance matrix, singular value decomposition [43] can be applied for linear dimensionality reduction by projecting the data into a lower-dimensional space, which yields the eigenvectors and eigenvalues of the principal components. In this way, we obtain the directions of the data distribution and the relative importance of these directions. A positive covariance between two features indicates that the features increase or decrease together, whereas a negative covariance indicates that the features vary in opposite directions. The first principal component preserves as much of the information (variance) in the data as possible, the second retains as much of the remaining variability as possible, and so on until no variance is left. In other words, the extracted principal components are ordered in terms of their importance (variance). Considering that PCA is sensitive to the relative scaling of the original features, in practice it is better to normalise the data before using PCA. An example of using a component to represent its corresponding features is given in Fig 2. As this figure shows, each component is a linear function of its corresponding features, whereas a feature in FA is a function of the given factors plus an error term.
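The two practical points above, normalising first and the variance ordering of components, can be sketched with scikit-learn; the data and the choice of three components are illustrative assumptions.

```python
# Sketch: standardise, then project onto principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# synthetic features with deliberately unequal scales
X = rng.normal(size=(100, 6)) * np.array([1.0, 2.0, 5.0, 1.0, 3.0, 1.0])

X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
pca = PCA(n_components=3)
Z = pca.fit_transform(X_std)                # samples in component space

# components are ordered by the share of variance they explain
ratios = pca.explained_variance_ratio_
assert all(ratios[i] >= ratios[i + 1] for i in range(len(ratios) - 1))
```

The decreasing `explained_variance_ratio_` is exactly the importance ordering described in the text.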

Independent component analysis.
ICA is a blind source separation technique [44]. It is very useful for finding factors hidden behind random signals, measurements or features based on higher-order statistics. The purpose of ICA is to minimise the statistical dependence between the components of the representation. By doing so, the dependency among the extracted signals is eliminated. To achieve good performance, some assumptions should be met before using ICA [45]: (1) the source signals (features) should be statistically independent; (2) the mixture signals should be linearly independent of each other; (3) the data should be centred (a zero-mean operation applied to every signal); and (4) the source signals should have a non-Gaussian distribution. One widely used application of ICA is the cocktail party problem [46]. As Fig 3 illustrates, there are two people speaking, and each has a voice signal. These signals are received by the microphones, which then output mixture signals. Since the distances between the microphones and the speakers differ, the mixture signals from the microphones differ as well. Using ICA for signal extraction, the original signals can be recovered. Notably, it is difficult for FA and PCA to extract such source signals (original components).
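A minimal cocktail-party sketch with scikit-learn's FastICA follows; the two non-Gaussian sources and the mixing matrix are invented for illustration.

```python
# Sketch: two non-Gaussian sources, two "microphone" mixtures, FastICA unmixing.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))             # square wave (non-Gaussian source)
s2 = np.sin(2 * t)                      # sinusoid
S = np.c_[s1, s2]                       # true source signals

A = np.array([[1.0, 0.5],               # assumed mixing matrix: each row is
              [0.4, 1.0]])              # one microphone's weighting of sources
X = S @ A.T                             # observed mixture signals

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)            # recovered independent components
assert S_hat.shape == (2000, 2)
```

Note that ICA recovers the sources only up to permutation and scaling, which is why the assertion checks shape rather than exact values.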

Prediction models
In this section, four widely used machine learning models-MLP, SVM, RF and XGBoost-are introduced.

MLP.
The MLP is a type of ANN that generally has three different kinds of layers: the input, hidden and output layers [47]. Each layer is connected to its adjacent layers. Similarly, each neuron in the hidden and output layers is connected to all the neurons in the previous layer with a weight vector. The values from the weighted sum of inputs and bias term are fed into a non-linear activation function as outputs for the next layer. Fig 4 shows an example of an MLP with three input neurons, two hidden neurons and one output neuron. We can see from the figure that the input layer has three input neurons (x_1, x_2, x_3) and one bias term with a value of b_1. Their values, based on the inner product with the weight matrix, are fed into the hidden layer. In this step, the input is first transformed using a learned non-linear transformation-an activation function g(·)-that projects the input data into a new space where it becomes linearly separable. The outputs of the two neurons in the hidden layer depend on the outputs of the input neurons and a bias neuron in the same layer with a value of b_2. The output layer has one neuron that takes inputs from the hidden layer through the activation function, where f(x) is the feed-forward prediction value for an input vector x.
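The structure of Fig 4 can be mirrored with scikit-learn's MLPRegressor; the layer sizes and synthetic regression data are illustrative assumptions, not the paper's tuned settings.

```python
# Sketch of an MLP regressor with three inputs, one small hidden layer and
# one output, matching the shape of Fig 4.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # three input neurons
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=200)

mlp = MLPRegressor(hidden_layer_sizes=(2,),     # two hidden neurons
                   activation="relu",           # non-linear activation g(.)
                   max_iter=2000, random_state=0)
mlp.fit(X, y)
pred = mlp.predict(X)                           # feed-forward prediction f(x)
assert pred.shape == (200,)
```

Biases b_1 and b_2 are learned internally per layer; they appear in `mlp.intercepts_` after fitting.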

SVM.
SVMs, founded on the structural risk minimisation principle and statistical learning theory [48], have been widely used in many real-world applications and have displayed satisfactory performance (e.g., see [49][50][51]). Given n training samples {(x_i, y_i)}_{i=1}^{n}, the standard form of ε-SVM regression can be expressed as Eq (2). We can see from Fig 5 that, unlike the SVM for classification problems, which assigns a sample to a binary class, SVM regression fits the best line within a threshold ε with tolerated errors (ξ_i and ξ_i^*).
\arg\min_{w, b, \xi, \xi^*} \; \frac{1}{2} w^T w + C \sum_{i=1}^{n} (\xi_i + \xi_i^*)
subject to \; y_i - (w^T \phi(x_i) + b) \le \varepsilon + \xi_i, \;\; (w^T \phi(x_i) + b) - y_i \le \varepsilon + \xi_i^*, \;\; \xi_i, \xi_i^* \ge 0, \; i = 1, \dots, n, \qquad (2)

where w is a weight vector, w^T is the transpose of w, b is a bias term, ξ_i and ξ_i^* are the slack variables of the ith sample, C is a penalty parameter, ε is the tolerance error, x_i and y_i are the ith input vector and output value, respectively, and φ(x) is a function that maps a sample from a low-dimensional space to a higher-dimensional space.
After solving the objective function in Eq (2) using the Lagrangian function [52] and the Karush-Kuhn-Tucker conditions [53], we can obtain the best parameters (w̄ and b̄) for the SVM. The final prediction model, g(x), can be expressed as

g(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) K(x_i, x) + \bar{b},

where α_i and α_i^* are the Lagrange multipliers of the ith sample and K(x_i, x) = φ(x_i)^T φ(x) is the kernel function.
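In practice, solving Eq (2) is delegated to a library; a hedged sketch with scikit-learn's SVR follows, where the RBF kernel plays the role of φ(x) and the data is synthetic.

```python
# Sketch of epsilon-SVR: C penalises the slack variables, epsilon sets the
# width of the tolerated-error tube around the fitted function.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=150)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)    # illustrative parameter values
svr.fit(X, y)
residuals = np.abs(svr.predict(X) - y)
assert residuals.shape == (150,)
```

Samples lying outside the ε-tube become support vectors (`svr.support_vectors_`), which is the sparsity mechanism behind Eq (2).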

RF.
The RF, proposed by Ho [55], is a decision tree-based ensemble model. For body fat prediction, the RF regression model uses ensemble learning for regression. It creates many decision trees based on the training set [56]. By combining multiple decision trees into one model, the RF improves prediction accuracy and stability. It is also able to avoid overfitting by utilising re-sampling and feature selection techniques. The training procedure of RF is given in Fig 6. As the figure illustrates, the RF generates many sub-datasets of the same sample size from the given training samples based on a re-sampling strategy. Then, for each new training set, a decision tree is trained with the selected features based on recursive partitioning, where the tree searches for the best split among the selected features. The final output is the average of the predictions from all the decision trees.
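The procedure of Fig 6, bootstrap re-sampling, per-split feature selection and averaging, can be sketched with scikit-learn's RandomForestRegressor on synthetic data shaped like the 13-feature Case 1 input (the data itself is invented).

```python
# Sketch of RF regression; the final prediction is the average of the trees.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))                   # 13 features, as in Case 1
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=300)

rf = RandomForestRegressor(n_estimators=100,     # number of decision trees
                           max_features="sqrt",  # features considered per split
                           bootstrap=True,       # re-sampling of the training set
                           random_state=0)
rf.fit(X, y)

# the ensemble output equals the mean of the individual trees' predictions
avg = np.mean([tree.predict(X) for tree in rf.estimators_], axis=0)
assert np.allclose(avg, rf.predict(X))
```

The final assertion makes the averaging step of Fig 6 explicit rather than leaving it implicit inside `predict`.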

XGBoost.
XGBoost is also an ensemble model [57]. It employs gradient boosting [58] to combine multiple results from decision tree-based models into the final result. In addition, it uses shrinkage and feature sub-sampling to further reduce the impact of overfitting [59]. XGBoost supports parallelisation, distributed computing, out-of-core computing and cache optimisation, making it suitable for real-world applications with high demands on computation time and storage memory [60]. The training procedure of XGBoost is depicted in Fig 7. It can be seen from the figure that XGBoost is based on gradient boosting. More specifically, new models (decision trees) are built to predict the errors (residuals) of the prior models (from f_1 to the current model). Once all the models are obtained, they are combined to make the final prediction.
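The residual-fitting loop of Fig 7 can be sketched with scikit-learn's GradientBoostingRegressor, used here as a stand-in for the XGBoost library (both implement gradient boosting; shrinkage maps to `learning_rate` and sub-sampling to `subsample`). The data is synthetic.

```python
# Gradient-boosting sketch: each new tree fits the residuals of the ensemble
# built so far, and staged predictions show the ensemble improving.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=300)

gbr = GradientBoostingRegressor(n_estimators=200,
                                learning_rate=0.1,   # shrinkage
                                subsample=0.8,       # row sub-sampling
                                max_depth=3, random_state=0)
gbr.fit(X, y)

# training error after each added tree (f_1, f_1 + f_2, ...)
errs = [np.mean((y - p) ** 2) for p in gbr.staged_predict(X)]
assert errs[-1] < errs[0]   # adding trees reduces the residual error
```

The decreasing `errs` sequence is exactly the "predict the errors of prior models" behaviour the text describes.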

Experimental results and discussions
In this section, we present the results of the computational experiments conducted on two body fat datasets-Cases 1 and 2-to validate the effectiveness of feature extraction methods for body fat prediction. Case 1 is based on anthropometric measurements, while Case 2 is based on physical examination and laboratory measurements. We compare four well-known prediction models (MLP, SVM, RF and XGBoost) with and without feature extraction.

Performance measures
In this study, we considered five performance measures. Specifically, the MAE and RMSE were used to evaluate a model's approximation ability, the SD was used to measure the variability of the errors between the predicted and target values, the MAC [61] was used to evaluate model robustness, and the computation time was used to measure efficiency. To better evaluate performance, we randomly shuffled the data, ran five-fold cross-validation experiments 20 times, and averaged the results. The computation time included the time for feature extraction and the 20 runs of five-fold cross validation. Our objective was to minimise the MAE, SD, RMSE and computation time while maximising the MAC.
MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i^p - y_i^t|, \qquad e_i = |y_i^p - y_i^t|, \qquad SD = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (e_i - \bar{e})^2},

RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i^p - y_i^t)^2}, \qquad MAC = \frac{((y^p)^T y^t)^2}{((y^p)^T y^p)\,((y^t)^T y^t)},

where n is the number of samples, y_i^p and y_i^t are the prediction and target values of the ith sample, respectively, e_i is the ith sample's absolute error, \bar{e} is the average of the absolute errors, (y^p)^T y^t is the inner product of (y^p)^T and y^t, and (y^p)^T is the transpose of y^p.
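The four error measures can be implemented directly in NumPy; this is a minimal sketch (the MAC form assumed here is the standard assurance-criterion ratio of inner products described by the symbols above, and the example values are illustrative).

```python
# MAE, SD of absolute errors, RMSE and MAC from prediction/target vectors.
import numpy as np

def metrics(y_p, y_t):
    e = np.abs(y_p - y_t)                               # absolute errors e_i
    mae = e.mean()
    sd = np.sqrt(((e - e.mean()) ** 2).mean())          # population SD of e_i
    rmse = np.sqrt(((y_p - y_t) ** 2).mean())
    mac = (y_p @ y_t) ** 2 / ((y_p @ y_p) * (y_t @ y_t))
    return mae, sd, rmse, mac

y_t = np.array([10.0, 20.0, 30.0])
mae, sd, rmse, mac = metrics(y_t.copy(), y_t)           # perfect prediction
assert mae == 0.0 and rmse == 0.0 and np.isclose(mac, 1.0)
```

A perfect prediction drives MAE, SD and RMSE to zero while MAC reaches its maximum of 1, matching the stated objective of minimising the first four measures and maximising MAC.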

Parameter settings
We used the grid search approach with cross validation for parameter selection [62]. The settings used in our experiments, obtained after some tuning process, are listed in Table 1.
A flowchart of the different feature extraction methods used for body fat prediction based on K-fold cross validation with N repeated experiments is given in Fig 8 to further clarify the procedure of our experiments. In the figure, K = 5 and N = 20; i.e., the experiments were repeated 20 times and each experiment was conducted with 5-fold cross validation.

Table 1. Parameter settings for the prediction models, where #neurons is the number of neurons, #iterations is the maximum number of iterations, regularisation is the regularisation parameter, σ^2 is the variance within the RBF kernel, #trees is the number of trees, and depth is the maximum depth of the tree.
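The grid-search-with-cross-validation selection step can be sketched with scikit-learn's GridSearchCV; the grid values and synthetic data below are illustrative, not the tuned settings reported in Table 1.

```python
# Sketch of parameter selection by grid search with 5-fold cross validation,
# here for an RBF-kernel SVR scored by (negative) mean absolute error.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] + rng.normal(scale=0.1, size=100)

grid = {"C": [1, 10, 100], "epsilon": [0.01, 0.1]}   # hypothetical grid
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5,
                      scoring="neg_mean_absolute_error")
search.fit(X, y)
best = search.best_params_          # settings analogous to a Table 1 row
assert set(best) == {"C", "epsilon"}
```

Each candidate setting is scored by averaged cross-validation error, and the best-scoring combination is retained, which is the tuning process the text refers to.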

Determination of the number of extracted features.
To determine the number of extracted features, we calculated the explained variance for each feature using scikit-learn [63]. We selected only the principal components with the largest eigenvalues based on a given threshold (i.e. how much information they contained). The four steps to determine the number of extracted features were as follows: (1) constructing the covariance matrix; (2) decomposing the covariance matrix into its eigenvectors and eigenvalues; (3) sorting the eigenvalues in decreasing order to rank the corresponding eigenvectors; and (4) selecting the k largest eigenvalues such that their cumulative explained variance reached the given threshold. The explained variance ratio for the StatLib dataset is given in Fig 9. Here, the threshold was set to 0.99, which means 99% of the information was retained. In this case, six features were extracted from the 13 input features.

Table 3 presents the results obtained by the MLP, SVM, RF and XGBoost for body fat prediction with and without feature extraction. As shown in the table, the SVM, RF and XGBoost perform better than the MLP. The performance of SVM and XGBoost is similar, whereas RF is the best in terms of accuracy. However, it is clear that, by incorporating feature extraction, the learning models can achieve higher prediction accuracy, stability and robustness in most cases. The XGBoost model with FA feature extraction generated the most precise and stable results, albeit taking a longer computation time than the standalone XGBoost. Using a feature extraction method increases the total computation time because the feature extraction pre-processing itself takes time, even though training the prediction model with fewer input features is more efficient. Among all the prediction models, XGBoost with FA shows the best prediction accuracy (MAE = 3.433, SD = 4.188 and RMSE = 4.248), and the SVM with PCA obtained results in the shortest computation time (close to that of the standalone SVM).
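The four-step threshold procedure for choosing k can be sketched directly from PCA's cumulative explained variance ratio; the synthetic data below is merely shaped like the 13-feature StatLib input, so the resulting k will not match the paper's value of six.

```python
# Steps (1)-(3): PCA builds and decomposes the covariance matrix and sorts
# the eigenvalues in decreasing order. Step (4): pick the smallest k whose
# cumulative explained variance reaches the 0.99 threshold.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13)) * np.linspace(5, 0.1, 13)  # unequal variances

pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.99) + 1)   # first index reaching the threshold
assert cum[k - 1] >= 0.99
```

Because the ratios are sorted in decreasing order, `cum` is the curve shown in Fig 9, and `k` is where it first crosses 0.99.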

Statistical analysis based on the Wilcoxon rank-sum test.
Although the results of the MLP, SVM and XGBoost presented thus far have shown that the use of feature extraction can improve their performance, statistical analysis is needed to validate whether the differences between the results obtained are statistically significant. In this section, we report the results of statistical tests conducted based on the Wilcoxon rank-sum test [64]. Table 4 shows the statistical test results for the models with and without feature extraction.
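The test itself is available as `ranksums` in SciPy; the sketch below compares two hypothetical samples of per-run MAE values standing in for the 20 repeated experiments (the numbers are invented, not the paper's results).

```python
# Sketch of the Wilcoxon rank-sum test on two samples of repeated-run errors.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
mae_baseline = rng.normal(4.0, 0.2, size=20)   # e.g. a model without extraction
mae_with_fa = rng.normal(3.5, 0.2, size=20)    # e.g. the same model with FA

stat, p = ranksums(mae_baseline, mae_with_fa)
# p < 0.05 is read as a statistically significant difference between the runs
assert 0.0 <= p <= 1.0
```

This is the comparison performed for each model pair in Tables 4 and 10, once per model with and without feature extraction.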

Prediction performance with more extracted features.
To investigate the impact of the number of anthropometric features on prediction performance, we increased the number of extracted features from 6 (as calculated in Section 3.3.2) to 13 (the total number of input features) in this series of experiments. Tables 5-7 show the results obtained by the MLP, SVM, RF and XGBoost using FA, PCA and ICA, respectively. As shown in Tables 5-7, in most cases, the accuracy (RMSE and MAE) and stability (SD and MAC) were not necessarily enhanced by extracting more features as inputs to the learning models. Among the models compared, XGBoost-FA performs the best for predicting the body fat percentage in terms of MAE, RMSE, SD and MAC, which means it predicts the body fat percentage with the highest accuracy and stability on the StatLib dataset. It is critical to reduce the number of dimensions when the data size or the number of dimensions is large (big data scenarios). In addition, the prediction models with PCA outperform the corresponding versions with ICA in terms of all the metrics used. This might be because the body fat dataset is approximately Gaussian-distributed: PCA handles Gaussian-distributed data well, whereas ICA assumes non-Gaussian sources.

Determination of the number of extracted features.
We ran the same experiment as in Section 3.3.2 to determine the number of extracted features. The explained variance ratio for the NHANES dataset is given in Fig 11. With the threshold set to 0.99, 12 features were extracted from the 38 input features. Table 9 presents the results obtained by the MLP, SVM, RF and XGBoost for body fat prediction with and without feature extraction. These results are consistent with those shown in Table 3, and show that ensemble models such as XGBoost perform better than the MLP and SVM. Similarly, the results show that incorporating feature extraction into the prediction models enhances the body fat prediction accuracy. The XGBoost model with PCA feature extraction generated the most precise and stable results, as well as a shorter computation time than the standalone XGBoost. Table 10 presents the statistical test results between the experimental results with and without feature extraction pre-processing. As shown in the table, the MLP, SVM, RF and XGBoost and their versions that use feature extraction are significantly different (the p-value is less than 0.05). This means the use of feature extraction methods is effective in improving the performance of the MLP, SVM and XGBoost, but not of the RF (the performance of RF_FA, RF_PCA and RF_ICA is worse than that of RF in Table 9).

Prediction performance with more extracted features.
To evaluate the prediction performance as the number of extracted features increases, we conducted experiments in which the number of features used ranged from 12 (as calculated in Section 3.4.2) to 38 (the total number of input features). Tables 11-13 show the results obtained by the MLP, SVM, RF and XGBoost using FA, PCA and ICA for feature extraction, respectively. From the tables, we can observe that with more features extracted, the prediction models can be further improved by the feature extraction methods. Table 11 shows that XGBoost based on FA feature extraction has the best prediction accuracy (3.713, 4.707 and 4.728 in terms of MAE, SD and RMSE) using 38 features. However, it also performs satisfactorily using 24 features (3.772, 4.783, 4.803), which is more feasible in real applications. As shown in Table 12, the MLP has the best performance using 35 features, an improvement over its baseline. A closer look at Tables 11-13 reveals that the MLP, SVM, RF and XGBoost with feature extraction performed similarly to or better than their corresponding baselines in terms of all metrics with only half the features (19 features). This shows the potential to greatly improve efficiency in real-world applications. In addition, the analysis reveals that PCA is more suitable than ICA for extracting features from this body fat dataset. The reason could be that this dataset has a Gaussian distribution, and PCA is better suited for Gaussian-distributed data, whereas ICA is better suited for non-Gaussian data.
Among the three feature extraction algorithms, PCA is the most effective one for this dataset, greatly improving the performance of the prediction models being compared.

Conclusion
The accurate prediction of body fat is important for assessing obesity and its related diseases. However, researchers find it challenging to analyse the large volumes of medical data generated. The main purpose of this study was to analyse and compare the prediction effectiveness of four well-known machine learning models (MLP, SVM, RF and XGBoost) when combined with three widely used feature extraction approaches (FA, PCA and ICA) for body fat prediction. The results presented in this paper are new in the context of body fat prediction; they could, therefore, provide a baseline for future research in this domain. Experimental results showed that feature extraction methods can reduce features without incurring significant loss of information for body fat prediction. In Case 1, with only six extracted features, the prediction models exhibited better performance than the models without feature extraction. This finding confirms the effectiveness of feature extraction. Among the comparison models, XGBoost with FA had the best approximation ability and high efficiency. As the number of extracted features increased, model performance could be further improved. For Case 2, PCA was the most effective in improving model performance. Although the MLP with PCA had the best prediction accuracy, it required significantly more computation time. This means XGBoost is more appropriate for real-world applications, given its similar prediction accuracy and greater efficiency. Statistical analysis based on the Wilcoxon rank-sum test confirmed that feature extraction significantly improved the performance of the MLP, SVM and XGBoost. Although the prediction models could be improved slightly further by increasing the number of extracted features, the number of features determined by the explained variance ratio was sufficient in both of the considered cases.
The feature extraction results themselves are a novel contribution of this work. The results provided by XGBoost with PCA feature extraction could be used as a baseline for future research in related areas. In future studies, we plan to investigate ways to improve feature extraction methods tailored to body fat datasets. Methods of improving the prediction model (e.g. an improved MLP [66]), using XGBoost with PCA as a baseline for body fat prediction, also need to be investigated. It is also worth noting that the findings of this work could be applied to other prediction problems with a large number of features, e.g., in finance, engineering and healthcare. Finally, we will explore other applications of analysing the body fat percentage, for example, applying domain knowledge to group body fat percentages into different disease classes in order to confirm the relationship between the body fat percentage and specific diseases.