
Fig 1.

Steps of an ML pipeline, with considerations related to model fairness highlighted.


Table 1.

Comparison of recent ML approaches for PD detection/progression (2021–2025), including our work.


Table 2.

Error rates and predictive values used in our analysis. These metrics are central to evaluating model performance.
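Every group metric reported later in this work (TPR, TNR, FPR, FNR, FDR, FOR, PPV, NPV) derives from the four confusion-matrix counts; a minimal Python sketch of the derivations (the function name and example counts are ours, for illustration only):

    # Group metrics derived from raw confusion-matrix counts.
    def confusion_metrics(tp, fp, tn, fn):
        return {
            "TPR": tp / (tp + fn),  # sensitivity / recall
            "TNR": tn / (tn + fp),  # specificity
            "FPR": fp / (fp + tn),  # fall-out
            "FNR": fn / (fn + tp),  # miss rate
            "PPV": tp / (tp + fp),  # precision
            "NPV": tn / (tn + fn),
            "FDR": fp / (fp + tp),  # 1 - PPV
            "FOR": fn / (fn + tn),  # 1 - NPV
        }

    # Illustrative counts, not values from the paper.
    print(confusion_metrics(tp=80, fp=10, tn=95, fn=15))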


Fig 2.

Proposed framework for the detection of PD.


Fig 3.

The experimental route map of the proposed framework.


Table 3.

Overview of the PPMI database for Parkinson’s disease, including patient demographic information, laboratory results, clinical records, and motor and non-motor assessment outcomes.


Fig 4.

Differences between PD and non-PD patients in terms of (a) age, (b) gender, and (c) race.


Table 4.

Final hyperparameter choices for the machine learning models and adversarial attacks (this work).

Seeds follow seed = base.


Table 5.

MLP (DL baseline) with and without adversarial robustness (adversarial debiasing).

Entries are mean ± 95% CI over 10-fold CV. Higher is better for Accuracy/F1/AUROC; lower is better for fairness gaps (SPD, EOD).
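The "mean ± 95% CI over 10-fold CV" entries can be reproduced from per-fold scores with a Student-t interval; a minimal sketch, assuming per-fold accuracies (the fold values below are illustrative, not the paper's results):

    import numpy as np
    from scipy import stats

    def mean_ci(scores, confidence=0.95):
        """Mean and half-width of the t-based CI over CV fold scores."""
        scores = np.asarray(scores, dtype=float)
        half = stats.sem(scores) * stats.t.ppf((1 + confidence) / 2, df=len(scores) - 1)
        return scores.mean(), half

    folds = [0.91, 0.89, 0.93, 0.90, 0.92, 0.88, 0.91, 0.94, 0.90, 0.92]
    m, h = mean_ci(folds)
    print(f"Accuracy: {m:.3f} ± {h:.3f}")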


Table 6.

Difference in the DT classifier's five-fold performance with and without the enhanced preprocessing fairness mitigation technique.

Two types of fairness evaluation are provided: group metrics (TPR, TNR, FPR, FNR, FDR, FOR, PPV, and NPV) and bias measures (SPD, DI, EOD, and AAOD).
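These bias measures have standard definitions: SPD is the difference in selection rates between the unprivileged and privileged groups, DI is their ratio, EOD is the gap in TPR, and AAOD averages the absolute FPR and TPR gaps. A minimal NumPy sketch under those standard definitions (the privileged/unprivileged encoding and all names are our assumptions, not the paper's code):

    import numpy as np

    def bias_measures(y_true, y_pred, privileged):
        """SPD, DI, EOD, and AAOD for one protected attribute.

        y_true, y_pred: 0/1 arrays; privileged: boolean mask marking the
        privileged group (how race/age/gender are split is an assumption)."""
        p, u = privileged, ~privileged

        def sel(m):  # selection rate: P(y_pred = 1 | group m)
            return y_pred[m].mean()

        def tpr(m):  # true-positive rate within group m
            return y_pred[m & (y_true == 1)].mean()

        def fpr(m):  # false-positive rate within group m
            return y_pred[m & (y_true == 0)].mean()

        return {
            "SPD": sel(u) - sel(p),  # statistical parity difference
            "DI": sel(u) / sel(p),   # disparate impact (ratio)
            "EOD": tpr(u) - tpr(p),  # equal opportunity difference
            "AAOD": 0.5 * (abs(fpr(u) - fpr(p)) + abs(tpr(u) - tpr(p))),
        }

    # Toy example: first half privileged, second half unprivileged.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    priv = np.array([True, True, True, True, False, False, False, False])
    print(bias_measures(y_true, y_pred, priv))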


Fig 5.

With the improved preprocessing fairness mitigation strategy, bias disparities across the race, age, and gender features are reduced in the Decision Tree model, leading to fairer and more equitable outcomes across these protected groups.


Fig 6.

Summary of the DT classifier's group metrics (bias or performance disparities) with and without the optimized preprocessing fairness mitigation strategy, for each sensitive attribute: (a) race, (b) age, and (c) gender.


Table 7.

Difference in the RF classifier's five-fold performance with and without the improved preprocessing fairness mitigation approach.

Two categories of fairness evaluation are presented: bias measures (SPD, DI, EOD, and AAOD) and group metrics (TPR, TNR, FPR, FNR, FDR, FOR, PPV, and NPV).


Fig 7.

Comparison of bias metrics of the RF ML model with and without using the optimized preprocessing fairness mitigation technique on (a) race, (b) age, and (c) gender features.


Fig 8.

Comparison of the RF ML model's group metrics with and without the improved preprocessing fairness mitigation method on (a) race, (b) age, and (c) gender features.


Table 8.

Impact of a poison attack on fairness assessment using the DT classifier.

The table reports the difference in five-fold performance and fairness metrics with and without the improved preprocessing fairness mitigation approach. Two categories of fairness evaluation are presented: bias measures (SPD, DI, EOD, and AAOD) and group metrics (TPR, TNR, FPR, FNR, FDR, FOR, PPV, and NPV).
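The caption does not specify the poisoning mechanism; label flipping on a fraction of the training set is a common instance, sketched below on synthetic data (every parameter here is illustrative, not from the paper):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    def flip_labels(y, fraction=0.1, seed=0):
        """Randomly flip a fraction of binary labels (a simple poison attack)."""
        rng = np.random.default_rng(seed)
        y_out = y.copy()
        idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
        y_out[idx] = 1 - y_out[idx]
        return y_out

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    poisoned = DecisionTreeClassifier(random_state=0).fit(X_tr, flip_labels(y_tr, 0.1))
    print("clean test acc:   ", clean.score(X_te, y_te))
    print("poisoned test acc:", poisoned.score(X_te, y_te))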


Fig 9.

Bias measures of the poison-attacked DT model with and without the improved preprocessing fairness mitigation strategy, on (a) race, (b) age, and (c) gender features.


Fig 10.

Group metrics of the poison-attacked decision tree ML model with and without the improved preprocessing fairness mitigation method, on (a) race, (b) age, and (c) gender features.


Table 9.

Impact of a label leak attack on fairness assessment using the five-fold random forest classifier.

The table reports the difference in performance and fairness metrics with and without the improved preprocessing fairness mitigation approach. Two categories of fairness evaluation are presented: bias measures (SPD, DI, EOD, and AAOD) and group metrics (TPR, TNR, FPR, FNR, FDR, FOR, PPV, and NPV).
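A label leak arises when a training feature carries near-direct information about the target. The attack's exact construction is not detailed in this caption, so the sketch below uses a hypothetical noisy copy of the label as the leaked feature, on synthetic data:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Hypothetical leaked feature: the label itself, corrupted with 10% noise.
    rng = np.random.default_rng(0)
    leak = np.where(rng.random(len(y)) < 0.9, y, 1 - y)
    X_leaky = np.column_stack([X, leak])

    rf = RandomForestClassifier(random_state=0)
    print("baseline 5-fold acc:  ", cross_val_score(rf, X, y, cv=5).mean())
    print("with leaked label acc:", cross_val_score(rf, X_leaky, y, cv=5).mean())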


Fig 11.

Bias measures of the label-leak-attacked random forest model with and without the improved preprocessing fairness mitigation strategy, on (a) race, (b) age, and (c) gender features.


Fig 12.

Group metrics of the label-leak-attacked random forest model with and without the improved preprocessing fairness mitigation strategy, on (a) race, (b) age, and (c) gender features.


Fig 13.

Comparison of the (a) RF and (b) DT models' ML results with and without the improved preprocessing fairness mitigation strategy, and the corresponding results under adversarial attack for the (c) DT and (d) RF models.
