
Fig 1.

Sample of grapevine leaf from the dataset.

A: RGB image; B: RGNIR image; C: normalized and cropped NIR image; D: companion skeleton. In the binarized skeleton image, white identifies the leaf profile and veins, while black identifies the other parts of the leaf and the background.


Fig 2.

Illustration of the ResVAE architecture (training phase).


Fig 3.

Illustration of the Pix2Pix framework (training).


Fig 4.

L2L workflow illustration.

A random input vector is drawn from the ResVAE latent space representation and fed to the trained ResVAE decoder. The latter outputs a synthetic leaf skeleton, which is in turn fed into the trained Pix2Pix generator and translated into a corresponding colorized leaf.
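The data flow of this sampling step can be sketched as follows. Both networks are stand-ins (random placeholder functions), and the latent dimension and image resolution are assumed values for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 128   # assumed latent-space size
IMG = 64           # assumed image resolution

def resvae_decoder(z):
    """Stand-in for the trained ResVAE decoder: latent vector -> binary skeleton."""
    x = np.tanh(z @ rng.standard_normal((LATENT_DIM, IMG * IMG)))
    return (x.reshape(IMG, IMG) > 0).astype(np.uint8)

def pix2pix_generator(skeleton):
    """Stand-in for the trained Pix2Pix generator: skeleton -> colorized leaf."""
    return np.repeat(skeleton[..., None] * 255, 3, axis=-1).astype(np.uint8)

# L2L sampling: draw a random latent vector, decode it into a synthetic
# skeleton, then translate the skeleton into a colorized leaf.
z = rng.standard_normal(LATENT_DIM)
skeleton = resvae_decoder(z)
leaf_rgb = pix2pix_generator(skeleton)
```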


Fig 5.

Translation from unseen real companion skeleton.

The binarized companion skeleton of a real leaf not belonging to the training set is passed through the Pix2Pix generator to check the quality of the translation. A: companion skeleton; B: synthetic colorized blade; C: real image.


Fig 6.

Full L2L translation results.

Examples of synthetic colorized leaves along with the corresponding synthetic companion skeletons.


Fig 7.

L2L-RGNIR translation results.

Examples of synthetic leaves colorized in the RGNIR channels along with the corresponding synthetic companion skeletons.


Fig 8.

Refinement algorithm.

The generative procedure sometimes produces artifacts, i.e., leaf-like regions that appear outside the leaf blade. These artifacts are corrected by procedurally finding the contours of all the objects in the image and removing the objects that lie outside the leaf contour. A: first leaf in Fig 6, presenting artifacts; B: inset magnifying the artifacts; C: cleaned leaf.
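A minimal sketch of this cleanup step, using connected-component labeling from `scipy.ndimage` as a proxy for the contour-based procedure the figure describes (function names and image sizes here are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def remove_artifacts(binary_leaf):
    """Keep only the largest connected foreground object (assumed to be the
    leaf blade); smaller disconnected blobs are treated as artifacts."""
    labels, n = ndimage.label(binary_leaf)
    if n == 0:
        return binary_leaf
    sizes = ndimage.sum(binary_leaf, labels, range(1, n + 1))
    keep = 1 + int(np.argmax(sizes))          # label of the largest object
    return (labels == keep).astype(binary_leaf.dtype)

# Toy image: a large "leaf" blob plus a stray pixel outside the blade.
img = np.zeros((10, 10), np.uint8)
img[2:8, 2:8] = 1        # leaf blade (36 pixels)
img[0, 9] = 1            # artifact outside the blade
clean = remove_artifacts(img)
```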


Fig 9.

AE for anomaly detection.

The AE is trained on images of real leaves to act as the identity operator on its input. A synthetic leaf with a low level of similarity to real leaves is recognized as an anomaly when it is fed into the trained AE and its anomaly score s_x is high.
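A toy illustration of the reconstruction-based anomaly score: the AE here is a stand-in that always outputs a fixed "real-looking" template, and the mean-squared-error form of s_x is an assumption, since the caption does not specify the metric:

```python
import numpy as np

def anomaly_score(x, autoencoder):
    """s_x: reconstruction error of the input. The AE is trained to act as the
    identity on real leaves, so real inputs score low and anomalies high."""
    x_hat = autoencoder(x)
    return float(np.mean((x - x_hat) ** 2))

rng = np.random.default_rng(1)
template = rng.random((64, 64))      # what the toy AE "learned" real leaves look like
ae = lambda x: template              # toy AE: always reconstructs a real-looking leaf

real = template + 0.01 * rng.standard_normal((64, 64))   # close to the training data
anomaly = 1.0 - template                                 # far from the training data
s_real = anomaly_score(real, ae)     # small reconstruction error
s_anom = anomaly_score(anomaly, ae)  # large reconstruction error
```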


Fig 10.

Quantification of anomaly via ROC curve and AUC index.

A point on the ROC curve represents, for a given threshold on the anomaly score, the FPR vs. the TPR as defined in (6). We found AUC = 0.25, which means that a synthetic image is classified as synthetic in 25% of cases and is considered real in the remaining 75%. The dotted line represents the result one would obtain by tossing a coin to decide whether an image is artificial or real.
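The AUC can be computed with the rank (Mann-Whitney) formulation. The labels and scores below are illustrative, chosen so that synthetic images (label 1) mostly receive lower anomaly scores than real ones, reproducing an AUC below chance like the reported 0.25:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive (synthetic image)
    scores higher than a randomly chosen negative (real image), with ties
    counted as one half."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic images (1) tend to score lower than real ones (0) -> AUC < 0.5.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.1, 0.2, 0.6, 0.65, 0.4, 0.5, 0.7, 0.8]
print(roc_auc(labels, scores))  # -> 0.25
```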


Fig 11.

Real and synthetic images of cucumber leaves.

Real images were acquired with the procedure described in the Material and Methods section, while synthetic images were generated via the L2L algorithm. Diseased leaves are affected by powdery mildew at different severity levels: the whitish spots on the leaf blade are signs of early-to-mid powdery mildew infection. Notice that while advanced signs of powdery mildew are easily recognizable, early-stage signs are much more elusive.


Fig 12.

Segmentation masks.

The masks, which are produced by a U-net architecture trained on a mix of real and synthetic leaves, denote the diseased spots. The masks are empty for the healthy leaves (a-b), while they indicate disease spots with an accuracy above 94%, according to the segmentation of a human expert, for leaves affected by powdery mildew (c-d).
