
Fig 1.

Schematic of tissue processing methodology.

The standard pipeline for processing and staining tissue in a clinical setting is compared with the rapid, real-time DRAQ5&Eosin staining fluorescence microscopy method of Elfer (2016) [9]. The individual steps and the time each requires are shown for both methods. Information about the standard clinical method was retrieved from Suvarna et al. (2018) [15], with further detail on the slicing step from Buesa et al. (2010) [16]. Time units: ms, milliseconds; s, seconds; m, minutes; h, hours. Created in BioRender. Papachristos, M. G. (2026) https://BioRender.com/li1anrz.


Fig 2.

Schematic of datasets.

The Commercial TMAs consisted of 49 needle-biopsy prostate cores (the samples in the two TMAs are the same, but the slices are not sequential). The WCB TMA consisted of 103 cores. Commercial TMAs A and B, together with the WCB TMA, were used as the training/validation datasets (with 5-fold cross-validation across the TMA cores). The WCB Tissue Slide Cores served as the external testing and standardization dataset. Created in BioRender. Papachristos, M. (2026) https://BioRender.com/efd1l3q.
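The core-level 5-fold split described in the caption can be sketched as follows. This is a minimal illustration only: the helper `core_level_folds`, the core IDs, and the crop counts are hypothetical, not taken from the paper.

```python
# Sketch: 5-fold cross-validation at the TMA-core level, so that every image
# crop from a given core lands in exactly one validation fold (no core leaks
# between training and validation). Core IDs and counts are illustrative.

def core_level_folds(core_ids, n_splits=5):
    """Assign each unique core to one of n_splits validation folds,
    then return, per fold, the indices of the crops from its cores."""
    unique_cores = sorted(set(core_ids))
    fold_of_core = {c: i % n_splits for i, c in enumerate(unique_cores)}
    folds = [[] for _ in range(n_splits)]
    for idx, core in enumerate(core_ids):
        folds[fold_of_core[core]].append(idx)  # crop index -> its core's fold
    return folds

# 49 cores with a few crops each, echoing the Commercial TMA core count
crops = [core for core in range(49) for _ in range(4)]
folds = core_level_folds(crops)
assert sum(len(f) for f in folds) == len(crops)  # every crop used once
```

Splitting by core (rather than by crop) is what makes the validation estimate honest here: crops from the same core are highly correlated, so letting them straddle the split would inflate performance.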


Fig 3.

Schematic of research pipeline.

The overall pipeline involves: (1) Staining of prostate tissue with DRAQ5 and Eosin, and imaging on the ZEISS LSM 880 AxioObserver fluorescence microscope. (2) Epithelial segmentation of the DRAQ5&Eosin images with a U-Net trained on data provided by Bulten et al. [24]. Image adapted from Bulten et al. [24] (Creative Commons license: http://creativecommons.org/licenses/by/4.0/). (3) Development of AI models on the PANDA dataset [25] for transfer learning. Image adapted from Bulten et al. [25] (Creative Commons license: http://creativecommons.org/licenses/by/4.0/). (4) DRAQ5&Eosin preprocessing, Gleason annotation by expert pathologists, and image cropping. (5) Development of binary image-crop classifiers (Healthy vs Low Grade Cancer, Healthy vs High Grade Cancer, Low Grade vs High Grade Cancer) and pixel-level cancer epithelium segmentation AI algorithms. (6) Standardization analysis of AI robustness on images acquired with variable microscopy configurations (focus, noise, sampling density, lens). Created in BioRender. Papachristos, M. (2026) https://BioRender.com/1s34cb8.


Table 1.

Time required to image one tile (1200×1200 pixels) as a function of the lens and zoom sampling density used, with confocal averaging of 8.


Fig 4.

DRAQ5 and Eosin fluorescent images and corresponding synthetic H&E image.

Top row: full images. Bottom row: crops of the corresponding images above.


Fig 5.

DRAQ5 images under different microscopy configurations.

These include a z-stack for different levels of focus (−5, −2.5, 0, 2.5, 5 µm), confocal averaging for different levels of noise (×8, ×4, ×2, ×1), multiple zooms for different sampling densities (pixel area sizes of 0.69×0.69, 0.52×0.52, and 0.35×0.35 µm²), and two different lenses (20X/0.8 M27, 10X/0.45 M27).


Fig 6.

Eosin images under different microscopy configurations.

These include a z-stack for different levels of focus (−5, −2.5, 0, 2.5, 5 µm), confocal averaging for different levels of noise (×8, ×4, ×2, ×1), multiple zooms for different sampling densities (pixel area sizes of 0.69×0.69, 0.52×0.52, and 0.35×0.35 µm²), and two different lenses (20X/0.8 M27, 10X/0.45 M27).


Fig 7.

Epithelium mask and Gleason grade mask of a DRAQ5&Eosin image.

Top row: full image. Bottom row: zoomed region of the image.


Table 2.

Total number of Healthy, Low Grade (Gleason pattern 3), and High Grade (Gleason pattern 4 or 5) image crops from the Training/Validation Datasets and the External Testing and Standardization Dataset.


Fig 8.

Training/validation data machine learning results.

The figure shows accuracies, Cohen's kappa scores, ROC curves, and AUCs of the classification models for each fold, together with the loss curve of the most predictive of the five folds for each classification model. It also shows DICE scores for background, healthy, and cancerous tissue for each fold, and the loss curve of the most predictive of the five folds for the segmentation models.
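The two agreement metrics reported in the caption (Cohen's kappa for the classifiers, the DICE coefficient for the segmentations) can be sketched in plain Python. This is an illustrative sketch on toy arrays, not the paper's evaluation code.

```python
# Illustrative implementations of the two agreement metrics in the figure.

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    expected = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels
    )
    return (observed - expected) / (1 - expected)

def dice(mask_a, mask_b):
    """DICE coefficient: overlap between two binary segmentation masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * intersection / (sum(mask_a) + sum(mask_b))

print(cohen_kappa([0, 1, 1, 0], [0, 1, 0, 0]))  # → 0.5
print(dice([1, 1, 0, 0], [1, 0, 0, 0]))          # → 0.666...
```

Kappa is reported alongside raw accuracy because it discounts agreement expected by chance, which matters when the class balance is skewed, as it typically is between healthy and cancerous crops.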


Fig 9.

External testing and standardization dataset results.

The figure shows accuracies, Cohen's kappa scores, and AUCs for the classification models across different microscopy conditions (sharpness rank, confocal averaging, sampling-density pixel area, microscope lens type) on the standardization dataset. It also shows DICE scores for background, healthy, and cancerous tissue for the segmentation model across the same conditions.


Fig 10.

Examples of true positive, true negative, false positive, and false negative image segmentation of DRAQ5&Eosin image regions (after histogram equalization).


Table 3.

Accuracies, Cohen's kappa scores, and AUCs of the classification models trained using PANDA, compared with those of the classification models trained on ImageNet.
