Fig 1.
Research methodology of the proposed machine learning framework.
Table 1.
Parameter settings for the 16 base machine learning models.
Table 2.
Contingency table illustrating agreement and disagreement between two classifiers.
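For reference, the pairwise diversity quantities reported later (Figs 5–7, Table 3) follow directly from this table's four counts. Using the standard Kuncheva–Whitaker notation (an assumption about the paper's exact symbols), with $N^{11}$ cases where both classifiers are correct, $N^{00}$ where both are wrong, and $N^{10}$, $N^{01}$ where exactly one is correct:

```latex
\[
\mathrm{Dis} = \frac{N^{01} + N^{10}}{N},
\qquad
Q = \frac{N^{11}N^{00} - N^{01}N^{10}}{N^{11}N^{00} + N^{01}N^{10}},
\qquad
N = N^{11} + N^{10} + N^{01} + N^{00}.
\]
```

Higher disagreement and lower Q both indicate greater diversity within a classifier pair.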
Fig 2.
Comorbidity distribution and mortality associations in the study cohort.
Fig 3.
Correlation between selected features and the outcome (Death/Alive) in the training dataset.
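The paper's code is not shown; a minimal Python sketch of one common way to compute such feature–outcome correlations for a binary outcome is the point-biserial coefficient. Column names and values below are hypothetical.

```python
import pandas as pd
from scipy.stats import pointbiserialr

# Hypothetical training frame: numeric features plus a binary outcome
# column "death" (1 = Death, 0 = Alive). Names are illustrative only.
df = pd.DataFrame({
    "age":        [54, 71, 63, 80, 45, 77],
    "creatinine": [0.9, 1.8, 1.1, 2.3, 0.8, 1.6],
    "death":      [0, 1, 0, 1, 0, 1],
})

# Point-biserial correlation between each feature and the binary outcome.
for col in ["age", "creatinine"]:
    r, p = pointbiserialr(df["death"], df[col])
    print(f"{col}: r = {r:.3f}, p = {p:.3f}")
```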
Fig 4.
Performance comparison of base, boosting, and bagging machine learning algorithms using repeated 10-fold cross-validation on the training data.
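The model names elsewhere in the captions (Treebag, C5.0, GBM) suggest the experiments were run in R's caret; as a language-neutral illustration of the same resampling scheme, here is a scikit-learn sketch of repeated stratified 10-fold cross-validation. The dataset and repeat count are placeholders, since neither is stated in the caption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for the study cohort; the real data are clinical records.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Repeated 10-fold cross-validation (3 repeats chosen arbitrarily here).
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```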
Fig 5.
Disagreement metrics among classifier predictions on the test dataset.
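A minimal sketch of how the pairwise disagreement in Fig 5 can be computed from two prediction vectors; the contingency-count formula above reduces to a mean of mismatches.

```python
import numpy as np

def disagreement(pred_a, pred_b):
    """Fraction of test cases on which two classifiers disagree."""
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    return float(np.mean(pred_a != pred_b))

# Illustrative predictions (1 = Death, 0 = Alive) from two classifiers.
p_rf  = [1, 0, 0, 1, 1, 0, 1, 0]
p_xgb = [1, 0, 1, 1, 0, 0, 1, 0]
print(disagreement(p_rf, p_xgb))  # 0.25
```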
Fig 6.
Inter-rater agreement among classifier predictions on the test dataset.
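Inter-rater agreement between classifier pairs is conventionally summarized by Cohen's kappa (an assumption about the exact statistic behind Fig 6), which corrects the observed agreement $p_o$ (the fraction of identical predictions) for the agreement $p_e$ expected by chance:

```latex
\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
p_e = \sum_{c \in \{\text{Death},\,\text{Alive}\}} p_A(c)\, p_B(c),
\]
```

where $p_A(c)$ and $p_B(c)$ are the two classifiers' marginal prediction frequencies for class $c$.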
Table 3.
Diversity metrics for the eight selected sub-model sets on the test dataset.
Fig 7.
Comparison of diversity metrics across eight selected sub-model sets on the test dataset.
Table 4.
Accuracy of stacking sub-model sets using five different meta-learners on the test dataset.
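The captions indicate stacking under several meta-learners (GLM, GBM, RF, NN). The study itself appears to be caret-based; as an illustrative equivalent, a scikit-learn StackingClassifier with a logistic-regression (GLM-style) meta-learner over two of the base models:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # assumes the xgboost package is installed

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners are fit on CV folds; their out-of-fold predictions feed the
# meta-learner, mirroring the paper's stack.<meta>.<bases> naming.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("xgb", XGBClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    cv=10,
)
stack.fit(X_tr, y_tr)
print(f"test accuracy: {stack.score(X_te, y_te):.3f}")
```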
Table 5.
Results of significance tests comparing the best base classifier with the stacking model in each sub-model set.
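The caption does not name the test; assuming a McNemar-style comparison of paired predictions on the same test set (a common choice for comparing two classifiers), a sketch with statsmodels:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired correctness indicators on the same test cases.
base_correct  = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
stack_correct = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1], dtype=bool)

# 2x2 table of (base correct?, stack correct?) counts.
table = [
    [np.sum(base_correct & stack_correct),  np.sum(base_correct & ~stack_correct)],
    [np.sum(~base_correct & stack_correct), np.sum(~base_correct & ~stack_correct)],
]
result = mcnemar(table, exact=True)  # exact binomial test for small counts
print(f"statistic = {result.statistic}, p = {result.pvalue:.3f}")
```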
Table 6.
Performance evaluation of selected stacking models that outperform the most accurate individual algorithm in their respective combinations.
Table 7.
Statistical comparison of the stacked random forest (RF) and XGBoost (XGB) model with a neural network (NN) meta-learner against the other stacking models.
Fig 8.
Performance metrics of the selected stacked models on the training dataset.
Fig 9.
ROC curves of the best-performing stacked models on the test dataset.
Stacking NB and GBM using the GLM meta-learner (stack.GLM.GBM.NB); stacking SVM and GBM using the GBM meta-learner (stack.GBM.SVM.GBM); stacking RF, CART, NN, GBM, XGB, and Treebag using the random forest meta-learner (stack.RF.RF.CART.NN.GBM.XGB.Treebag); stacking RF and XGB using the neural network meta-learner (stack.NN.RF.XGB); stacking NB, C5.0, and GBM using the GBM meta-learner (stack.GBM.NB.C5.0.GBM).
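For completeness, a minimal scikit-learn sketch of how such ROC curves and AUCs are produced from predicted probabilities; the model and data are placeholders.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]  # predicted P(Death) per test case

fpr, tpr, _ = roc_curve(y_te, prob)
plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_te, prob):.3f}")
plt.plot([0, 1], [0, 1], "--")  # chance line
plt.xlabel("False positive rate"); plt.ylabel("True positive rate")
plt.legend(); plt.show()
```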
Fig 10.
Calibration plot of the stacked Random Forest (RF) and XGBoost (XGB) model using a Neural Network (NN) meta-learner under repeated 10-fold cross-validation.
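A calibration plot bins predicted probabilities and compares each bin's mean prediction with the observed event rate. A sketch using scikit-learn's calibration_curve; the paper's binning choices are not stated, so 10 bins is an assumption.

```python
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
prob = (RandomForestClassifier(random_state=0)
        .fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

# Compare binned mean predicted probability with the observed event rate.
frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=10)
plt.plot(mean_pred, frac_pos, marker="o", label="model")
plt.plot([0, 1], [0, 1], "--", label="perfect calibration")
plt.xlabel("Mean predicted probability")
plt.ylabel("Observed fraction of deaths")
plt.legend(); plt.show()
```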
Table 8.
Training and prediction times for single models and stacking ensembles.
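One simple way to obtain such timings is wall-clock measurement around fit and predict; the paper's measurement protocol is not specified, so this is only a sketch.

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

t0 = time.perf_counter()
model.fit(X, y)        # training time
t1 = time.perf_counter()
model.predict(X)       # prediction time
t2 = time.perf_counter()

print(f"train: {t1 - t0:.3f}s, predict: {t2 - t1:.3f}s")
```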
Fig 11.
Most influential predictors contributing to “Death” outcomes in the stacked RF–XGB model with an NN meta-learner.
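The caption does not state how the importances were derived; one model-agnostic possibility (an assumption, not necessarily the authors' method) is permutation importance, which applies even to a stacked ensemble:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on the test set and measure the drop in accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f}")
```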
Fig 12.
SHAP-based interpretation of the stacked RF–XGB model using a Neural Network meta-learner.
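A hedged sketch of how SHAP values can be obtained for a stacked model via the model-agnostic KernelExplainer (slow, but applicable to any predict_proba; whether the authors used this or a tree-specific explainer is not stated):

```python
import shap  # assumes the shap package is installed
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic explainer over P(Death); a small background sample keeps
# the kernel estimation tractable.
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1],
                                 X_tr[:50])
shap_values = explainer.shap_values(X_te[:20])

shap.summary_plot(shap_values, X_te[:20])  # beeswarm plot as in Fig 12
```

For interaction views like the age effects in Fig 13, shap.dependence_plot over the age feature (with an interaction feature selected) is the customary companion plot.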
Fig 13.
Interaction effects between age and key clinical predictors influencing mortality in the stacked RF–XGB model with an NN meta-learner.