
Fig 1.

An example of the relationship between a factor and its observed features.

Fig 2.

An example of using a component to represent its corresponding features.

Fig 3.

An example of the process of extracting signals from the cocktail party problem with two speaking people (source signals) and two microphones (mixture signals).

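The separation process sketched in Fig 3 can be reproduced with scikit-learn's FastICA on synthetic stand-in signals (two "speakers" and two "microphones"; the waveforms and mixing matrix below are illustrative, not the paper's data):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic source signals standing in for the two speakers in Fig 3.
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # speaker 1: sinusoid
s2 = np.sign(np.sin(3 * t))              # speaker 2: square wave
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.5],                # unknown mixing matrix (room acoustics)
              [0.5, 1.0]])
X = S @ A.T                              # mixture signals at the two microphones

# ICA recovers the sources up to permutation and scaling.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
```

Because ICA is blind, the recovered columns may come back in either order and with arbitrary sign and scale.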

Fig 4.

An example of MLP with three input neurons, two hidden neurons, and one output neuron.

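The 3-2-1 topology in Fig 4 maps directly onto scikit-learn's `MLPRegressor`; a minimal sketch on synthetic data (the target and iteration budget here are illustrative assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # three input neurons = three features
y = X @ np.array([1.0, -2.0, 0.5])       # synthetic regression target

# One hidden layer with two neurons and a single output neuron (Fig 4).
mlp = MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0)
mlp.fit(X, y)
pred = mlp.predict(X)
```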

Fig 5.

ε-SVM regression with the ε-insensitive hinge loss, meaning there is no penalty for errors within the ε margin.

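The ε-insensitive tube in Fig 5 corresponds to the `epsilon` parameter of scikit-learn's `SVR`; a minimal sketch on synthetic data (kernel, `C`, and `epsilon` values below are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.05, size=200)

# Residuals smaller than epsilon incur no loss, so only points outside the
# tube become support vectors; larger epsilon gives a sparser model.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
svr.fit(X, y)
```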

Fig 6.

An example of the RF model.

Fig 7.

An example of the XGBoost model.

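The two tree ensembles of Figs 6 and 7 differ in how the trees are combined: RF averages independently grown trees, while boosting (the idea behind XGBoost) grows trees sequentially on the current residuals. A self-contained sketch using scikit-learn only, with `GradientBoostingRegressor` as a stand-in for the separate `xgboost` package (all hyperparameters below are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=300)

# RF (Fig 6): decorrelated trees on bootstrap samples, predictions averaged.
rf = RandomForestRegressor(n_estimators=100, max_depth=5, random_state=0)
rf.fit(X, y)

# Boosting (Fig 7): each new tree fits the residuals of the ensemble so far.
gb = GradientBoostingRegressor(n_estimators=100, max_depth=3, random_state=0)
gb.fit(X, y)
```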

Table 1.

Parameter settings for the prediction models, where #neurons is the number of neurons, #iterations is the maximum number of iterations, regularisation is the regularisation parameter, σ2 is the variance within the RBF kernel, #trees is the number of trees, and depth is the maximum depth of the tree.

Fig 8.

A flowchart of different feature extraction methods used for body fat prediction based on K-fold cross validation with N repeated experiments.

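The evaluation loop of Fig 8 — K-fold cross validation repeated N times — maps onto scikit-learn's `RepeatedKFold`; a sketch with a plain linear model and synthetic data (K = 5 and N = 3 here are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, 0.5, -1.0, 2.0])

# Each of the N repeats reshuffles the data into K folds, yielding
# K * N held-out RMSE estimates in total.
K, N = 5, 3
rkf = RepeatedKFold(n_splits=K, n_repeats=N, random_state=0)
rmses = []
for train, test in rkf.split(X):
    model = LinearRegression().fit(X[train], y[train])
    mse = mean_squared_error(y[test], model.predict(X[test]))
    rmses.append(np.sqrt(mse))
```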

Table 2.

Statistical properties of Case 1’s body fat dataset.

Fig 9.

Explained variance ratio for the StatLib dataset.

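The explained variance ratio plotted in Fig 9 is the per-component quantity scikit-learn's `PCA` exposes as `explained_variance_ratio_`; a sketch on synthetic data (the matrix shape is arbitrary):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 14))           # synthetic stand-in feature matrix

pca = PCA().fit(X)                       # keep all components
ratios = pca.explained_variance_ratio_   # fraction of total variance, per component
cumulative = np.cumsum(ratios)           # basis for choosing how many to retain
```

With all components kept, the ratios sum to 1, and the cumulative curve shows how many components are needed to reach a chosen variance threshold.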

Table 3.

Experimental results based on the StatLib dataset (best results are highlighted in bold).

Table 4.

Wilcoxon rank-sum tests for the MLP, SVM, RF, XGBoost, and the use of feature extraction, based on the StatLib dataset in terms of RMSE (p-values less than 0.05 are highlighted in bold).

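The significance tests in Table 4 compare paired sets of per-run RMSE values; `scipy.stats.ranksums` performs the Wilcoxon rank-sum test. A sketch on hypothetical RMSE samples (the values below are synthetic, not the paper's results):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical per-run RMSEs for two models across repeated experiments.
rmse_a = rng.normal(loc=4.0, scale=0.2, size=30)
rmse_b = rng.normal(loc=4.5, scale=0.2, size=30)

stat, p = ranksums(rmse_a, rmse_b)
significant = p < 0.05                   # the threshold bolded in Table 4
```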

Table 5.

Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with FA feature extraction (best results are highlighted in bold; # means the number of features).

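The FA feature extraction evaluated in Table 5 corresponds to scikit-learn's `FactorAnalysis`; a sketch on synthetic data (the matrix shape and `n_components=3` are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 14))           # synthetic stand-in feature matrix

# FA models the observed features as linear combinations of a few latent
# factors plus feature-specific noise; transform() returns factor scores.
fa = FactorAnalysis(n_components=3, random_state=0)
Z = fa.fit_transform(X)                  # reduced features fed to the models
```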

Table 6.

Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with PCA feature extraction (best results are highlighted in bold; # means the number of features).

Table 7.

Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with ICA feature extraction (best results are highlighted in bold; # means the number of features).

Fig 10.

Comparison results in terms of computation time based on FA, PCA, and ICA feature extraction for the StatLib dataset.

Table 8.

Statistical properties of Case 2’s body fat dataset.

More details can be found at https://www.cdc.gov/nchs/nhanes/index.htm.

Fig 11.

Explained variance ratio for the NHANES dataset.

Table 9.

Experimental results based on the NHANES dataset (best results are highlighted in bold).

Table 10.

Wilcoxon rank-sum tests for the MLP, SVM, RF, XGBoost, and the use of feature extraction, based on the NHANES dataset in terms of RMSE (p-values less than 0.05 are highlighted in bold).

Table 11.

Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with FA feature extraction (best results are highlighted in bold; # means the number of features).

Table 12.

Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with PCA feature extraction (best results are highlighted in bold; # means the number of features).

Table 13.

Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with ICA feature extraction (best results are highlighted in bold; # means the number of features).

Fig 12.

Comparison results in terms of computation time based on FA, PCA, and ICA feature extraction for the NHANES dataset.
