Fig 1.
Methodological pipeline for multi-omics classification.
Overview of the methodological pipeline, illustrating graph construction, model training, and evaluation phases in multi-omics classification.
Table 1.
Comparison of similarity and distance metrics employed for multi-omics graph construction. Each method’s mathematical range, core capability, and tunable hyperparameters are summarized.
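As an illustration of the graph-construction step the table summarizes, the sketch below builds a sample–sample adjacency matrix by thresholding cosine similarity. This is a minimal example, not the paper's code; the function name `cosine_adjacency` and the `threshold` hyperparameter are assumptions standing in for whichever metric and cutoff a given variant uses.

```python
import numpy as np

def cosine_adjacency(X, threshold=0.5):
    """Build a binary sample-sample adjacency matrix from cosine similarity.

    X: (n_samples, n_features) omics matrix.
    An edge is kept wherever pairwise similarity exceeds `threshold`
    (the tunable hyperparameter referred to in the table).
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # guard against all-zero samples
    Xn = X / norms                     # row-normalize so dot products are cosines
    S = Xn @ Xn.T                      # pairwise cosine similarity, range [-1, 1]
    A = (S > threshold).astype(float)  # threshold into a binary adjacency
    np.fill_diagonal(A, 0.0)           # drop self-loops
    return A

X = np.random.default_rng(0).normal(size=(5, 10))
A = cosine_adjacency(X, threshold=0.2)
```

Swapping in another metric from the table only changes how `S` is computed; the thresholding step stays the same.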
Fig 2.
Class label distribution for BRCA and ROSMAP datasets.
Distribution of class labels across BRCA subtypes and ROSMAP phenotypes. Balanced sampling was applied across omics modalities.
Fig 3.
Feature preselection workflow.
(A) Total feature count before selection across all omics datasets. (B) Features retained for training after dimensionality reduction and filtering.
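A common way to get from panel (A)'s raw feature count to panel (B)'s retained set is a variance filter. The sketch below is one illustrative preselection step under assumed parameters (the function name and `k` are not from the paper), not the authors' exact filtering procedure.

```python
import numpy as np

def variance_filter(X, k=200):
    """Keep the k highest-variance features, a common preselection step.

    X: (n_samples, n_features). Returns the filtered matrix and the
    column indices that were kept.
    """
    variances = X.var(axis=0)                # per-feature variance
    keep = np.argsort(variances)[::-1][:k]   # indices of the top-k variances
    return X[:, keep], keep

X = np.random.default_rng(1).normal(size=(50, 1000))
X_sel, cols = variance_filter(X, k=200)      # 1000 features reduced to 200
```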
Fig 4.
Train–test data splitting strategy.
Data were split into 70% training and 30% testing sets, stratified to maintain class balance.
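The stratified 70/30 split described above can be sketched as follows. This is a minimal illustration (the function name and seed are assumptions, not the paper's implementation): each class contributes the same fraction of its samples to the test set, preserving the class balance shown in Fig 2.

```python
import numpy as np

def stratified_split(y, test_frac=0.3, seed=0):
    """Return train/test index arrays with per-class proportions preserved."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)          # samples of this class
        rng.shuffle(idx)
        n_test = int(round(test_frac * len(idx)))
        test_idx.extend(idx[:n_test])           # ~30% of this class to test
        train_idx.extend(idx[n_test:])          # remainder to train
    return np.array(train_idx), np.array(test_idx)

y = np.array([0] * 70 + [1] * 30)               # imbalanced toy labels
tr, te = stratified_split(y, test_frac=0.3)
```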
Table 2.
Evaluation metrics used for performance assessment. Definitions and formulas for quantifying classification quality are provided.
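For concreteness, the standard binary-classification metrics of the kind such a table defines can be computed directly from confusion counts. This is a generic sketch, not the paper's evaluation code; it assumes 0/1 labels and omits AUC, which requires continuous scores rather than hard predictions.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary (0/1) labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```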
Fig 5.
Performance metric visualizations for the ROSMAP dataset across similarity network variants.
Heatmap representation of performance metrics for each similarity network.
Fig 6.
Performance metric visualizations for the BRCA dataset across similarity network variants.
Heatmap representation of performance metrics for each similarity network.
Fig 7.
AUC variability across similarity networks for BRCA and ROSMAP datasets.
Bar chart showing standard deviation of AUC for each similarity network variant, where lower values indicate more stable performance across repeated experiments.
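The stability measure plotted in the bar chart is simply the sample standard deviation of AUC over repeated runs of each network variant, as sketched below. The AUC values here are illustrative placeholders, not results from the paper.

```python
import numpy as np

# AUC scores from repeated runs of one similarity-network variant
# (illustrative numbers only)
aucs = np.array([0.81, 0.79, 0.83, 0.80, 0.82])

mean_auc = aucs.mean()
std_auc = aucs.std(ddof=1)  # sample standard deviation across repeats;
                            # lower values mean more stable performance
```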
Table 3.
Ablation study results (Mean AUC) for GCN and VCDN components.