Fig 1.

General workflow diagram.

Fig 2.

Sample images for each class ((a) Anthracnose, (b) Bacterial Canker, (c) Cutting Weevil, (d) Die Back, (e) Gall Midge, (f) Powdery Mildew, (g) Sooty Mould, (h) Healthy).

Table 1.

Hardware specifications for training and inference.

Table 2.

Comparative model performance metrics (5-fold cross-validation).

Fig 3.

Deep learning model cross-validation results.

Table 3.

10-fold cross-validation performance summary for CNN models (GPU).

Fig 4.

Deep learning model cross-validation results.

Table 4.

Paired t-test results between CNN models.

Fig 5.

Confusion matrix of the MobileNet model (initial).

Fig 6.

Grad-CAM illustrating the model’s focus for healthy and diseased leaves.

Table 5.

Comparative analysis of features for healthy and diseased leaves (based on Grad-CAM insights).

Fig 7.

Grad-CAM visualization for Anthracnose.

Fig 8.

Grad-CAM visualization for Bacterial Canker.

Fig 9.

Grad-CAM visualization for Cutting Weevil.

Fig 10.

Grad-CAM visualization for Die Back.

Fig 11.

Grad-CAM visualization for Gall Midge.

Fig 12.

Grad-CAM visualization for Powdery Mildew.

Fig 13.

Grad-CAM visualization for Sooty Mould.

Fig 14.

Machine learning algorithm performance (cross-validation).

Table 6.

10-fold cross-validation and test performance for classical ML models.

Fig 15.

Confusion matrices for the RandomForest, SVC, and logistic regression models.

Table 7.

Paired t-test results between ML models.

Fig 16.

Hybrid model workflow diagram.

Table 8.

Comparative analysis of preprocessing time and iterations.

Fig 17.

Model performance (controlled use-case).

Table 9.

5-fold cross-validation results on the test dataset (controlled use case, GPU).

Table 10.

Paired t-test results comparing MobileNet and hybrid model.

Table 11.

Performance comparison for the random use case across devices.

Table 12.

Per-image inference time and time reduction for the hybrid model vs. MobileNet.

Fig 18.

Confusion matrix of the MobileNet model (test data).

Fig 19.

Confusion matrix of the MobileNet model (test data).

Table 13.

Performance of different devices at varying levels.

Table 14.

Performance comparison between hybrid model and MobileNet (test set).

Table 15.

Performance comparison between hybrid model and MobileNet (without Rembg and CLAHE).

Fig 20.

Confusion matrix of the MobileNet model (test data).

Fig 21.

Confusion matrix of the hybrid model (test data).

Fig 22.

Prototype interface of the application used for real-time disease detection in mango leaves.
