
Fig 1.

EFEN-YOLOv8 architecture.

Heat maps demonstrate the effectiveness of each improvement component. The enhanced model captures defect information at shallow layers while maintaining focus on defect features through multi-scale fusion and attention mechanisms.

Fig 2.

SAConv module architecture and computational flow.

The module employs multi-scale kernel operations followed by adaptive pooling and attention mechanisms for enhanced shallow feature extraction.
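The adaptive-pooling-and-attention stage described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: it assumes an SE-style gate in which adaptive (global) average pooling produces one descriptor per channel, a channel-mixing matrix `w` (a hypothetical parameter) and a sigmoid produce per-channel gates, and the gates re-weight the multi-scale feature map.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    """Gate a (C, H, W) feature map with adaptive-pooling attention.

    feat : (C, H, W) map, e.g. the fused multi-scale branches (assumed layout).
    w    : (C, C) channel-mixing weights (hypothetical parameter).
    """
    # Adaptive (global) average pooling: one descriptor per channel.
    desc = feat.mean(axis=(1, 2))            # shape (C,)
    # Channel mixing + sigmoid yields per-channel gates in (0, 1).
    gates = sigmoid(w @ desc)                # shape (C,)
    # Re-weight each channel of the original map.
    return feat * gates[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
w = rng.standard_normal((8, 8)) * 0.1
out = channel_attention(feat, w)
```

Because every gate lies in (0, 1), the attended output never exceeds the input feature magnitudes; the gating only suppresses or preserves channels.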

Fig 3.

Geometric configuration for -FEIoU computation.

The predicted bounding box (blue), ground truth box (red), and minimum enclosing rectangle (yellow) define the spatial relationships used in loss calculation. Parameters b, w, and h represent box centers, widths, and heights respectively.
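The geometric quantities named in the caption can be computed as below. This sketch does not reproduce the FEIoU formula itself (which is not given here); it only derives the terms the figure defines: the IoU of the predicted and ground-truth boxes, the center-distance term normalized by the minimum enclosing rectangle's diagonal, and EIoU-style width/height terms normalized by the enclosing rectangle's sides. Corner-format boxes are an assumption for illustration.

```python
def box_terms(pred, gt):
    """pred, gt: (x1, y1, x2, y2) corner-format boxes (assumed convention)."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt

    # Intersection-over-union of the two boxes.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter)

    # Minimum enclosing rectangle (the yellow box in the figure).
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)

    # Squared center distance, normalized by the enclosing diagonal.
    d2 = ((px1 + px2) / 2 - (gx1 + gx2) / 2) ** 2 + \
         ((py1 + py2) / 2 - (gy1 + gy2) / 2) ** 2
    center_term = d2 / (cw ** 2 + ch ** 2)

    # Width/height gaps, normalized by the enclosing box (EIoU-style terms).
    wh_term = ((px2 - px1) - (gx2 - gx1)) ** 2 / cw ** 2 + \
              ((py2 - py1) - (gy2 - gy1)) ** 2 / ch ** 2
    return iou, center_term, wh_term
```

For coincident boxes the IoU is 1 and both penalty terms vanish, so any loss built from these quantities reaches its minimum at a perfect match.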

Fig 4.

LSKA module architecture and computational flow.

The module processes input features through cascaded depth-wise convolutions: standard DW-Conv followed by dilated DW-D-Conv operations. Results are concatenated with the original feature map after convolution to produce the final attended output. Parameters: C denotes input channels, H and W represent spatial dimensions, d controls dilation rate, and k defines maximum receptive field extent.
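The maximum receptive field k of the cascade follows from composing the two stride-1 depth-wise convolutions. As a sketch under assumed kernel sizes (`k1` for the standard DW-Conv, `k2` for the dilated DW-D-Conv, names not from the paper): a dilated kernel of size `k2` with dilation `d` spans `d*(k2 - 1) + 1` positions, and stacking two stride-1 convolutions adds their spans minus the shared center pixel.

```python
def effective_rf(k1, k2, d):
    """Effective receptive field of a standard DW-Conv (kernel k1, stride 1)
    followed by a dilated DW-D-Conv (kernel k2, dilation d, stride 1)."""
    span_dilated = d * (k2 - 1) + 1   # positions covered by the dilated kernel
    return k1 + span_dilated - 1      # compose the two stride-1 convolutions

# e.g. a 5x5 DW-Conv followed by a 7x7 DW-D-Conv with dilation 3
k = effective_rf(5, 7, 3)
```

This is how a small pair of depth-wise kernels can emulate a single large kernel: the dilation rate d stretches the second kernel's span, so k grows linearly in d without adding parameters.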

Fig 5.

WASPP module architecture and multi-scale feature integration.

The module employs parallel convolutional branches with varying receptive fields, followed by adaptive weighting mechanisms and feature concatenation. Each pathway contributes scale-specific information that is selectively emphasized through sigmoid-based attention before final fusion.
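The sigmoid-based weighting and fusion step can be sketched as follows. This is an illustrative NumPy sketch, not the module's actual implementation: it assumes one scalar attention score per branch (in practice the module would derive these from the features), gates each parallel path through a sigmoid, and concatenates the re-weighted paths along the channel axis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weighted_fusion(branches, scores):
    """Fuse parallel multi-scale branches with sigmoid attention.

    branches : list of (C, H, W) feature maps from the parallel paths.
    scores   : one scalar score per branch (assumed form for illustration).
    """
    gates = sigmoid(np.asarray(scores, dtype=float))   # each gate in (0, 1)
    weighted = [g * b for g, b in zip(gates, branches)]
    # Final fusion: concatenate the re-weighted paths along channels.
    return np.concatenate(weighted, axis=0)

rng = np.random.default_rng(1)
branches = [rng.standard_normal((4, 8, 8)) for _ in range(3)]
fused = weighted_fusion(branches, scores=[0.5, -1.0, 2.0])
```

A branch with a large positive score passes nearly unchanged, while a strongly negative score suppresses its pathway, which is how scale-specific information is "selectively emphasized" before fusion.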

Fig 6.

Representative defect categories in NEU-DET dataset.

Each class exhibits distinct morphological characteristics and varying degrees of visual complexity, with irregular spatial distributions that challenge detection algorithms.

Fig 7.

Defect category distribution in GC10-DET dataset.

The ten defect classes represent diverse steel surface anomalies with varying scales, textures, and morphological characteristics.

Table 1.

mAP values under different loss functions with a training-to-testing ratio of 9:1.

Table 2.

mAP values under different loss functions with a training-to-testing ratio of 8:2.

Table 3.

Effects of different LSKA convolution kernel sizes on the NEU-DET dataset.

Table 4.

Effects of different LSKA convolution kernel sizes on the GC10-DET dataset.

Table 5.

Ablation results for each module.

Fig 8.

Comparative feature extraction visualization through HiResCam analysis.

Heat maps demonstrate superior defect localization capabilities of our proposed architecture compared to baseline YOLOv8n across representative defect categories, revealing enhanced sensitivity to subtle anomalies and improved spatial feature extraction.

Table 6.

Performance comparison with state-of-the-art detection methods on NEU-DET dataset.

Table 7.

Generalization performance comparison across different detection architectures.

Fig 9.

Comparative visualization of detection performance across different methods on industrial defect samples.

Table 8.

Statistical significance analysis of ablation components across 5 random splits on NEU-DET dataset.

The evaluation indicators include the mean, standard deviation, 95% confidence interval, improvement over baseline, and p-value of the mAP scores.

Table 9.

Statistical analysis of experimental results across 5 random splits.

The evaluation indicators include the mean, standard deviation, and 95% confidence interval of the mAP scores.

Fig 10.

Confusion matrices demonstrating classification performance under different training-testing data splits.
