Fig 1.

Sample retinal OCT images.

Each column presents a pair of retinal OCT images from a distinct class: (a) normal, (b) drusen, and (c) CNV (choroidal neovascularization).


Table 1.

A brief review of deep learning-based methods and their results in 2023.


Table 2.

A brief review of hybrid and ensemble methods and their results in 2023.


Fig 2.

Structure of the (a) ECB and (b) LTB blocks. Both blocks comprise LFFN and MHCA layers; the LTB block additionally begins with an ESA layer. The inner structure of the LFFN is shown within the ECB block. ‘Conv 1x1’ and ‘DWC 3x3’ denote a convolution with kernel size 1 and a depthwise convolution with kernel size 3, respectively. ‘Seq2Img’ reshapes a sequence of inputs into a two-dimensional output; ‘Img2Seq’ is its inverse operation.
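The ‘Seq2Img’ and ‘Img2Seq’ operations described above can be sketched as simple reshape/transpose helpers — a minimal NumPy sketch, assuming tokens are stored as (batch, length, channels) and feature maps as (batch, channels, height, width); the function names and layouts here are illustrative, not taken from the MedViT implementation.

```python
import numpy as np

def seq2img(x, h, w):
    # Reshape a token sequence (batch, h*w, channels) into a 2-D
    # feature map (batch, channels, h, w) so convolutions can operate on it.
    n, length, c = x.shape
    assert length == h * w, "sequence length must equal h * w"
    return x.reshape(n, h, w, c).transpose(0, 3, 1, 2)

def img2seq(x):
    # Inverse of seq2img: flatten a (batch, channels, h, w) feature map
    # back into a token sequence (batch, h*w, channels).
    n, c, h, w = x.shape
    return x.transpose(0, 2, 3, 1).reshape(n, h * w, c)

tokens = np.arange(2 * 16 * 8, dtype=np.float32).reshape(2, 16, 8)
img = seq2img(tokens, 4, 4)   # shape (2, 8, 4, 4)
back = img2seq(img)           # shape (2, 16, 8)
print(np.allclose(tokens, back))  # True: the two operations are inverses
```

The round trip confirms that the two functions are exact inverses, which is what allows the block to alternate between attention (sequence) and convolution (image) layers without losing information.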


Table 3.

Full architectural specifications of micro and tiny MedViTs.


Fig 3.

Paired stitching and unpaired stitching strategies.

This visualization demonstrates how paired and unpaired stitching strategies select appropriate layers for stitching with anchors of the same length and different lengths, respectively.


Fig 4.

Utilizing stitchable neural networks as an architecture search method based on pre-trained MedViTs on UCSD.

Utilizing stitchable neural networks results in an efficient model for classifying retinal OCT images from the NEH dataset.


Fig 5.

Utilizing stitchable neural networks as an architecture search method based on pre-trained MedViTs on NEH.

Utilizing stitchable neural networks results in a model with superior performance compared to the state-of-the-art model for classifying retinal OCT images from the NEH dataset.


Fig 6.

Class distribution percentages within the NEH and UCSD datasets.


Table 4.

Transformations performed for data augmentation.


Table 5.

Pre-training and stitching hyperparameters.


Table 6.

Results of the pre-trained micro and tiny MedViT models on the NEH dataset.


Table 7.

Comparing the five-fold cross-validation results of the presented method with previous studies on the NEH dataset.
