
Fig 1.

An example SOC managing multiple independent entities.


Fig 2.

Label inconsistencies arising from different viewpoints.

Label inconsistencies arise when entities interpret the same alert differently because of differing viewpoints or security policies.


Fig 3.

Schematic overview of federated learning in an SOC.

The process involves local training, model update, and global aggregation, highlighting (1) inconsistent labeling among different entities (ILADE) and (2) model inversion risk.
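The local-training and global-aggregation loop shown in the figure can be sketched as a FedAvg-style weighted average of client parameters. This is a minimal illustration, not the paper's implementation; the function name, the flat parameter layout, and the toy values are assumptions.

```python
def fedavg(client_weights, client_sizes):
    """Aggregate client parameter vectors by a size-weighted average,
    as in FedAvg-style global aggregation."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two entities with flat parameter vectors of length 3;
# the second entity holds three times as much data.
w1 = [1.0, 0.0, 2.0]
w2 = [3.0, 2.0, 0.0]
global_model = fedavg([w1, w2], [1, 3])  # → [2.5, 1.5, 0.5]
```

In a full round, each entity would first train locally on its own alerts, send the updated parameters to the security center, and receive the aggregated global model back.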


Fig 4.

Training and testing phases of AFL.


Fig 5.

Keyed Feature Hashing (KFH) overview.

KFH obfuscates alert data using a shared secret key among the security center and its participating entities, enabling consistent encoding across organizations and preventing index-probing enumeration that is feasible with fixed, unkeyed mappings.
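A minimal sketch of the keyed-hashing idea: tokens from alert data are mapped into a fixed-size vector through an HMAC keyed by the shared secret, so every entity holding the key produces the same encoding while an outsider cannot enumerate the index mapping. The function name, dimensionality, and sign scheme here are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import hmac

def kfh_encode(tokens, key, dim=16):
    """Hash each token into one of `dim` buckets using an HMAC keyed by
    a shared secret; the digest also supplies a +/-1 sign, as in the
    signed hashing trick."""
    vec = [0] * dim
    for tok in tokens:
        digest = hmac.new(key, tok.encode(), hashlib.sha256).digest()
        idx = int.from_bytes(digest[:4], "big") % dim
        sign = 1 if digest[4] % 2 == 0 else -1
        vec[idx] += sign
    return vec

key = b"shared-secret"
a = kfh_encode(["dst_port=445", "proto=tcp"], key)
b = kfh_encode(["dst_port=445", "proto=tcp"], key)
assert a == b  # same key → identical encoding across entities
```

Without the key, an attacker cannot reproduce the token-to-index mapping, which is what blocks the index-probing enumeration possible with fixed, unkeyed feature hashing.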


Fig 6.

AFL incorporating a filtering mechanism to address semantic label divergence.


Fig 7.

Filter generation process.

The process uses obfuscated vectors from the KFH encoder to identify clusters prone to semantic label inconsistency.
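The cluster-then-flag step can be illustrated as follows: assign obfuscated vectors to their nearest centroid and flag clusters whose member labels disagree beyond a threshold. The distance metric, disagreement measure, and threshold value are assumptions for illustration, not the paper's algorithm.

```python
import math

def nearest(v, centroids):
    """Index of the centroid closest to vector v (Euclidean distance)."""
    return min(range(len(centroids)), key=lambda c: math.dist(v, centroids[c]))

def inconsistent_clusters(vectors, labels, centroids, thresh=0.2):
    """Flag clusters where binary labels disagree: if the minority-label
    fraction inside a cluster exceeds `thresh`, similar vectors received
    conflicting labels, suggesting semantic label inconsistency."""
    by_cluster = {}
    for v, y in zip(vectors, labels):
        by_cluster.setdefault(nearest(v, centroids), []).append(y)
    flagged = set()
    for c, ys in by_cluster.items():
        frac_pos = sum(ys) / len(ys)
        if min(frac_pos, 1 - frac_pos) > thresh:
            flagged.add(c)
    return flagged

# Toy data: the cluster near the origin mixes benign (0) and malicious (1)
# labels, while the cluster near (5, 5) is purely malicious.
X = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
y = [0, 1, 1, 1]
cents = [(0.0, 0.0), (5.0, 5.0)]
inconsistent_clusters(X, y, cents)  # → {0}
```

Samples falling in flagged clusters would then be filtered or down-weighted before contributing to the global update.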


Table 1.

Dataset statistics.


Fig 8.

Comparison of representative FL optimizers.

The figure compares FedAvg, FedProx, FedAdam, and FedELC based on macro-averaged precision, recall, and F1-scores across 14 entities. FedAvg, FedProx, and FedAdam show similar performance, while FedELC is slightly lower. Given their comparable results, FedAvg was chosen as the baseline for its simplicity and stability.


Table 2.

Hyperparameter settings for AFL and FedAvg experiments.


Fig 9.

Comparison of LOCAL and FL models.

The figure evaluates performance on local (left) and global (right) test datasets. Each bar indicates the mean F1-score over three runs. (A) LOCAL models generalize poorly across entities. (B) The FL model generalizes better overall but shows a sharp drop in Entity 10 due to semantic label divergence (ILADE).


Fig 10.

t-SNE visualization of latent representations.

Benign (blue) and malicious (red) flows from Entity 10 are clustered in the same region as malicious flows (orange) from other entities, illustrating semantic label divergence (ILADE).


Fig 11.

Local test results.

Results obtained when and . (A) Per-entity F1-scores (mean ± standard deviation over three runs) on local test sets. AFL shows performance comparable to or higher than FL for several entities, while maintaining stable results under ILADE conditions. (B) Per-entity coverage (mean over three runs) on local test sets. AFL maintains consistently high coverage across institutions.


Fig 12.

Global test results.

Results obtained when and . (A) Per-entity F1-scores (mean ± standard deviation over three runs) on the global test set. AFL maintains global performance higher than or comparable to that of FL and LOCAL across entities, demonstrating stable global generalization. (B) Per-entity coverage (mean over three runs) on the global test set. AFL sustains near-complete coverage.


Table 3.

Performance summary across different sweep settings of and .

Values are reported as mean ± std over three runs.
