Fig 1.
Adversarial training process where catastrophic overfitting occurs.
Fig 2.
Adversarial training process of traditional FGSM adversarial training.
Table 1.
Attack success rates of different single-step attack methods.
Table 2.
Average total perturbation magnitudes generated by different single-step attack methods.
Table 3.
Maximum total perturbation magnitudes generated by different single-step attack methods.
Fig 3.
Comparison of adversarial samples generated by different single-step attack methods.
Fig 4.
Flowchart of fast adversarial training method based on adaptive similarity step size.
Fig 5.
Illustrative examples of Euclidean distance similarity and cosine similarity.
Fig 6.
Adversarial training process of ATSS on the ResNet18 model.
Fig 7.
Adversarial training process of ATSS on the VGG19 model.
Fig 8.
Adversarial training process of ATSS on the CIFAR-100 dataset.
Fig 9.
Adversarial training process of ATSS on the Tiny ImageNet dataset.
Fig 10.
Performance comparison during training among ATSS, FGSM-RS, and N-FGSM.
Table 4.
Classification accuracy and training time under adversarial attacks for different adversarial training methods.
Fig 11.
Comparison of ATSS and multi-step adversarial training methods in terms of robust accuracy and training time.
Table 5.
Classification accuracy against PGD-10 attacks on different datasets.
Table 6.
Test results under different similarity calculation strategies.
Table 7.
Test results under different influence coefficients.
Table 8.
Test results under different standard step sizes.