
Fig 1.

Architecture of the AE network.

An AE comprises three components: the encoder, the latent space, and the decoder. The encoder transforms the input data into an encoded representation, the latent space holds the compressed representation of the input data, and the decoder reconstructs the input data from its encoded form.
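The encoder–decoder structure described above can be sketched as a pair of simple maps (a minimal numpy sketch; the dimensions, random weights, and tanh activation are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: 8 input features compressed into a 3-dim latent space.
n_in, n_latent = 8, 3
W_enc = rng.standard_normal((n_in, n_latent)) * 0.1
W_dec = rng.standard_normal((n_latent, n_in)) * 0.1

def encoder(x):
    # Compress the input into its latent representation.
    return np.tanh(x @ W_enc)

def decoder(z):
    # Reconstruct the input from the latent code.
    return z @ W_dec

x = rng.standard_normal(n_in)
z = encoder(x)       # latent code, smaller than the input
x_hat = decoder(z)   # reconstruction, same shape as the input
```

The latent code having fewer dimensions than the input is what forces the network to learn a compressed representation.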


Fig 2.

Architecture of the anomaly detection model using the proposed FLAE.

When the FL process is initiated, all devices begin training simultaneously. Each device sends its weights to the global server, which aggregates the received weights using the FedAvg algorithm. The global server then sends the aggregated weights back to each device, and training continues.
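The FedAvg aggregation step on the global server is a sample-weighted average of the device weights (a minimal sketch; the device count and weight values below are hypothetical):

```python
import numpy as np

def fedavg(device_weights, sample_counts):
    """Weighted average of per-device weight arrays (FedAvg)."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(device_weights, sample_counts))

# Hypothetical weights from three devices holding 100, 50, and 50 samples.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
agg = fedavg(weights, [100, 50, 50])  # -> [2.5, 3.5]
```

The aggregated vector is what the server broadcasts back before the next local training round.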


Fig 3.

Overview of the entire process of the local AE model, from data preprocessing to anomaly detection.

The dataset is split into training data, testing data, and label data; label data are added manually by doubling their values. Once the model is trained, the MSE loss on the training data is used to set the reconstruction-error threshold for detecting anomalies: if a datapoint's loss exceeds the threshold, it is labeled as an anomaly.
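The thresholding step might look like the following (a minimal sketch; the error values are made up, and taking the threshold as the maximum training reconstruction error is one plausible rule the caption leaves unspecified):

```python
import numpy as np

# Hypothetical per-sample MSE reconstruction errors on the training data.
train_err = np.array([0.010, 0.020, 0.015])
threshold = train_err.max()  # assumed rule: largest training error

# Hypothetical reconstruction errors on the test data.
test_err = np.array([0.012, 0.050, 0.018])
anomalies = test_err > threshold  # loss above threshold -> anomaly
```

Only the second test point exceeds the threshold here, so it alone would be flagged.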


Table 1.

MAE, MSE, and RMSE values of SG filter on the power consumption datasets.
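The three error metrics reported in the table compare the filtered signal against the raw one; they can be computed as follows (the arrays are illustrative, not the paper's data):

```python
import numpy as np

def error_metrics(y_true, y_filtered):
    """MAE, MSE, and RMSE between the raw and SG-filtered signals."""
    err = y_true - y_filtered
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    return mae, mse, np.sqrt(mse)

mae, mse, rmse = error_metrics(np.array([1.0, 2.0, 4.0]),
                               np.array([1.0, 2.5, 3.5]))
```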


Fig 4.

The actual power consumption values are compared with the predicted values obtained through the proposed FLAE method.


Table 2.

Performance comparison of proposed FLAE with the state-of-the-art models on the power consumption dataset.


Fig 5.

Anomaly score of the active power and anomalies exceeding the threshold for 7000 hours.

The green line represents the anomaly score, calculated by measuring the deviation between actual and predicted values. The red dots on the graph indicate anomalies that exceed the threshold value.
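The anomaly score described here is the deviation between actual and predicted values; a minimal sketch (the values and threshold are hypothetical):

```python
import numpy as np

actual    = np.array([2.0, 2.1, 5.0, 2.2])
predicted = np.array([2.1, 2.0, 2.2, 2.1])
score = np.abs(actual - predicted)        # anomaly score (the green line)
threshold = 1.0                           # assumed threshold value
flagged = np.where(score > threshold)[0]  # indices of the red dots
```

Only the third timestep deviates enough from its prediction to be marked as an anomaly.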


Fig 6.

Comparison between FL and non-FL models across six devices.

Results indicate that the proposed FLAE model performed well across all six devices, while the non-FL models performed poorly on Device 5. This demonstrates the effectiveness of the proposed FLAE model in enabling accurate anomaly detection across a distributed network of devices.


Fig 7.

The performance of the non-FL models trained on the power consumption dataset is evaluated for different training-set sizes.

The F1-score and AUC score, as well as the training time, are reported for training sizes ranging from 20% to 100% of the full dataset.


Fig 8.

The F1-score, AUC score, and training time were evaluated on the power consumption dataset using sliding-window sizes ranging from 5 to 20.
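Sliding windows of the kind varied here can be built as overlapping slices of the series (a minimal sketch; the series is illustrative):

```python
import numpy as np

def sliding_windows(series, window):
    """Stack overlapping windows of length `window` over a 1-D series."""
    return np.stack([series[i:i + window]
                     for i in range(len(series) - window + 1)])

windows = sliding_windows(np.arange(10), 5)  # shape (6, 5)
```

Larger window sizes give each sample more temporal context but yield fewer windows and longer training times.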
