
Fig 1. IR UAV Detection Trilemma.

Table 1. Related datasets.

Table 2. YOLO-Based Visual UAV Detection Methods.

Fig 2. Infrared UAV Dataset (AUVD-Seg300) and Segmentation Annotations.

Fig 3. Detailed technical architecture of YOLO11-AU-IR.

Table 3. Module replacement computational cost analysis.

Table 4. Hardware and software platform configurations.

Table 5. (a) Comparative results of lightweight models (≤15 MB) on the AUVD-Seg300 dataset. (b) Comparative results of medium and heavyweight models (>15 MB) on the AUVD-Seg300 dataset.

Fig 4. Simplified flow diagram highlighting the integration of our key contributions (EADown, HSAN, ATFL) within the detection pipeline.

Fig 5. Workflow of YOLO11-AU-IR model development and testing.

Fig 6. Input images and segmentation results of various models on the AUVD-Seg300 dataset.

Fig 7. Model efficiency and performance trade-off analysis.

(a) Relationship between mAP@0.5 and inference time; (b) relationship between mAP@0.5:0.95 and inference time; (c) relationship between mAP@0.5 and GFLOPs; (d) relationship between mAP@0.5 and the number of parameters.

Fig 8. Comparison of training performance of different models on the AUVD-Seg300 dataset.

(a) Accuracy curve; (b) recall curve; (c) mAP@0.50 curve; (d) mAP@0.50:0.95 curve.

Table 6. Comparison of experimental results for multi-scale object detection.

Table 7. Three-fold cross-validation performance comparison.

Fig 9. Normalized confusion matrix for YOLO11-AU-IR showing classification performance across UAV categories.

Fig 10. Error-focused Grad-CAM comparison.

Table 8. Ablation study results.

Fig 11. Bar chart visualization of ablation study results showing progressive performance improvements with each module integration.

Fig 12. Training performance comparison across ablation configurations.

(a) Baseline, (b) Baseline+EADown, (c) Baseline+HSAN, (d) Baseline+EADown+HSAN, (e) YOLO11-AU-IR. Each subplot shows training/validation loss curves and four key metrics: precision, recall, mAP@0.50, and mAP@0.50:0.95.

Fig 13. Inference testing on the NVIDIA Jetson TX2 platform.

Fig 14. Computational performance comparison among Jetson edge devices.

Table 9. Model performance comparison for ONNX (INT8) deployment.

Table 10. Model performance comparison for TorchScript (optimized, INT8) deployment.

Table 11. Module-wise resource consumption analysis on NVIDIA Jetson TX2.