Fig 1.

Research methodology of the proposed machine learning framework.

Table 1.

Parameter settings for the 16 base machine learning models.

Table 2.

Contingency table illustrating agreement and disagreement between two classifiers.
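A pairwise contingency table of this kind tabulates, for two classifiers, how many test samples both get right, both get wrong, or exactly one gets right; a standard pairwise diversity score, the disagreement measure, follows directly from those counts. A minimal Python sketch (the function names and example labels below are illustrative, not the paper's own tooling):

```python
import numpy as np

def pairwise_contingency(pred_a, pred_b, truth):
    """2x2 contingency table of correctness for two classifiers.

    Returns (n11, n10, n01, n00): both correct, only A correct,
    only B correct, both wrong.
    """
    a_ok = np.asarray(pred_a) == np.asarray(truth)
    b_ok = np.asarray(pred_b) == np.asarray(truth)
    n11 = int(np.sum(a_ok & b_ok))
    n10 = int(np.sum(a_ok & ~b_ok))
    n01 = int(np.sum(~a_ok & b_ok))
    n00 = int(np.sum(~a_ok & ~b_ok))
    return n11, n10, n01, n00

def disagreement(pred_a, pred_b, truth):
    """Fraction of samples on which exactly one of the two classifiers is correct."""
    n11, n10, n01, n00 = pairwise_contingency(pred_a, pred_b, truth)
    return (n10 + n01) / (n11 + n10 + n01 + n00)
```

Higher disagreement between a pair of models indicates more diversity, which is what makes them attractive candidates for combination in an ensemble.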

Fig 2.

Comorbidity distribution and mortality associations in the study cohort.

Fig 3.

Correlation between selected features and the outcome (Death/Alive) in the training dataset.

Fig 4.

Performance comparison of base, boosting, and bagging machine learning algorithms using repeated 10-fold cross-validation on the training data.

Fig 5.

Disagreement metrics among classifier predictions on the test dataset.

Fig 6.

Inter-rater agreement among classifier predictions on the test dataset.

Table 3.

Diversity metrics for the eight selected sub-model sets on the test dataset.

Fig 7.

Comparison of diversity metrics across eight selected sub-model sets on the test dataset.

Table 4.

Accuracy of stacking sub-model sets using five different meta-learners on the test dataset.

Table 5.

Results of significance tests comparing the best base classifier with the stacking model in each sub-model set.

Table 6.

Performance evaluation of selected stacking models that outperform the most accurate individual algorithm in their respective combinations.

Table 7.

Statistical comparison between stacked random forest (RF) and XGBoost (XGB) utilizing a neural network (NN) meta-learner and other stacking models.

Fig 8.

Performance metrics of the selected stacked models on the training dataset.

Fig 9.

ROC curves of the best-performing stacked models on the test dataset.

Stacking NB and GBM with a GLM meta-learner (stack.GLM.GBM.NB). Stacking SVM and GBM with a GBM meta-learner (stack.GBM.SVM.GBM). Stacking RF, CART, NN, GBM, XGB, and Treebag with a Random Forest meta-learner (stack.RF.RF.CART.NN.GBM.XGB.Treebag). Stacking RF and XGB with a Neural Network meta-learner (stack.NN.RF.XGB). Stacking NB, C5.0, and GBM with a GBM meta-learner (stack.GBM.NB.C5.0.GBM).

Fig 10.

Calibration plot of the stacked Random Forest (RF) and XGBoost (XGB) model using a Neural Network (NN) meta-learner under repeated 10-fold cross-validation.
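The stacked model evaluated here (RF and XGB base learners feeding an NN meta-learner, assessed under repeated 10-fold cross-validation) can be sketched with scikit-learn. This is an illustrative reconstruction, not the authors' implementation: GradientBoostingClassifier stands in for XGBoost to keep the example dependency-free, and the synthetic dataset and all hyperparameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic binary-outcome stand-in for the clinical dataset.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           random_state=0)

# Base learners: random forest + gradient boosting (XGBoost surrogate).
# Their out-of-fold class probabilities become the inputs of a small
# neural-network meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("gbm", GradientBoostingClassifier(n_estimators=50,
                                                   random_state=0))],
    final_estimator=MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0),
    stack_method="predict_proba", cv=5)

# Repeated 10-fold cross-validation, as in the figure's evaluation setup.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=2, random_state=0)
scores = cross_val_score(stack, X, y, cv=cv, scoring="accuracy")
print(scores.mean(), scores.std())
```

Using out-of-fold probabilities (`cv=5` inside the stack) rather than refit predictions is what keeps the meta-learner from overfitting to its own base models' training error.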

Table 8.

Training and prediction times for single models and stacking ensembles.

Fig 11.

Most influential predictors contributing to “Death” outcomes in the stacked RF–XGB model with an NN meta-learner.

Fig 12.

SHAP-based interpretation of the stacked RF–XGB model using a Neural Network meta-learner.

Fig 13.

Interaction effects between age and key clinical predictors influencing mortality in the stacked RF–XGB model with an NN meta-learner.
