Fig 1.
Adaptive Cluster-Guided Simple, Fast, and Efficient (ACG-SFE) model.
Fig 2.
Binary representation of feature subset.
Fig 3.
Numerical example for determining the optimal number of clusters in Algorithm 2.
Table 1.
List of the 11 datasets and their descriptions.
Table 2.
Hyperparameter settings.
Table 3.
Test accuracy (%) across 30 runs (worst, best, mean, and standard deviation) for six models.
Fig 4.
Average test classification accuracy of six feature selection models.
Fig 5.
Convergence curves of the feature selection algorithms across the 11 datasets with the KNN classifier.
Table 4.
RMSE between train and test accuracy (%) across 30 runs (worst, best, mean, and standard deviation) for six models.
Fig 6.
Average RMSE of train and test classification accuracy of six feature selection models.
Table 5.
Number of selected features across 30 runs (worst, best, mean, and standard deviation) for six models.
Table 6.
Feature reduction rate (FRR) (%) across 30 runs (worst, best, mean, and standard deviation) for six models.
Fig 7.
Average FRR of six feature selection models.
Table 7.
F-measure (%) across 30 runs (worst, best, mean, and standard deviation) for six models.
Fig 8.
Average F-measure of six feature selection models.
Fig 9.
Overall distribution of feature selection frequencies across 30 runs for the ACG-SFE model.
Fig 10.
Frequency of stable features selected by ACG-SFE for each dataset.
Fig 11.
PCA scatter plots using stable features selected by ACG-SFE for each dataset.
Fig 12.
t-SNE plots using stable features selected by ACG-SFE for each dataset.
Table 8.
Jaccard similarity (%) across 30 runs (worst, best, mean, and standard deviation) for the five evolutionary feature selection models.
Fig 13.
Control chart of test accuracy stability across 30 runs of ACG-SFE for each dataset.
Fig 14.
Control chart of RMSE between train and test accuracy across 30 runs of ACG-SFE.
Fig 15.
Control chart of F-measure stability across 30 runs of ACG-SFE for each dataset.