
Fig 1.

Color fundus image and its different RGB channels.

(a) RGB image, (b) red channel, (c) green channel and (d) blue channel.


Fig 2.

Preprocessing and vessel enhancement steps.


Fig 3.

Feature extraction and classifier training/testing steps.


Fig 4.

(a) An image from the DRIVE test set with its manual vessel segmentations by the (b) first observer and (c) second observer.


Fig 5.

(a) An image from the STARE dataset with its manual vessel segmentations by the (b) first observer and (c) second observer.


Fig 6.

(a) An image from the CHASE_DB1 dataset with its manual vessel segmentations by the (b) first observer and (c) second observer.


Fig 7.

An example of a symmetric B-COSFIRE filter configured to detect vertical bars, with the support center indicated by the cross marker [17].

The numbered dots along the concentric circles represent the positions with the strongest DoG responses.


Fig 8.

An example of an asymmetric B-COSFIRE filter configured to detect vertical bar endings, with the support center indicated by the cross marker [17].

The numbered dots along the concentric circles represent the positions with the strongest DoG responses.

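The B-COSFIRE filters in Figs 7 and 8 combine Difference-of-Gaussians (DoG) responses sampled along concentric circles. As a rough illustration of the DoG building block only (not a full B-COSFIRE implementation), a vertical bar's DoG response can be computed with SciPy; the sigma values and bar geometry are illustrative assumptions:

```python
# Hedged sketch: a Difference-of-Gaussians (DoG) response, the building
# block that B-COSFIRE filters sample along concentric circles.
# The sigmas and test pattern are illustrative, not the paper's settings.
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.zeros((32, 32))
img[:, 15:17] = 1.0                     # a bright vertical bar

sigma = 2.0
# DoG = narrow Gaussian blur minus wide Gaussian blur
dog = gaussian_filter(img, sigma) - gaussian_filter(img, 2 * sigma)

print(dog.shape)                        # same spatial size as the input
```

A B-COSFIRE filter would then combine such DoG responses at the numbered circle positions (e.g. via a weighted geometric mean); that combination step is omitted here.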

Fig 9.

Preprocessing and vessel enhancement steps on an example image from the DRIVE dataset.

(a) color fundus image, (b) manual segmentation (first observer), (c) green channel, (d) FOV mask, (e) image after applying CLAHE, (f) image after applying Retinex, (g) image after applying morphological Top-hat and Bottom-hat, (h) computed background, (i) image after subtracting the background, (j) image after applying morphological Top-hat, (k) image after applying B-COSFIRE filter and (m) final preprocessed image after applying Frangi filter.

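Several stages of the Fig 9 pipeline can be sketched with off-the-shelf scikit-image operations. This is a simplified approximation, not the paper's exact configuration: the Retinex, background-subtraction, and B-COSFIRE stages are omitted, and the CLAHE clip limit and structuring-element size are illustrative assumptions.

```python
# Hedged sketch of part of the Fig 9 pipeline: green-channel extraction,
# CLAHE, morphological top-hat, and Frangi vesselness, using scikit-image.
# Parameters are illustrative assumptions, not the paper's values.
import numpy as np
from skimage import exposure, filters, morphology

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))           # stand-in for an RGB fundus image

green = img[:, :, 1]                                          # step (c)
clahe = exposure.equalize_adapthist(green, clip_limit=0.02)   # step (e)
tophat = morphology.white_tophat(clahe, morphology.disk(3))   # step (j)
vesselness = filters.frangi(tophat)                           # step (m)

print(vesselness.shape)                 # same spatial size as the input
```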

Table 1.

Effects of varying the feature extraction window size on classifier accuracy.


Table 2.

Effects of varying GLCM distance on classifier accuracy computed on a 5×5 window.


Fig 10.

Retinal vessel segmentation accuracy of different classifiers using 5-fold cross-validation on sample data.

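A classifier comparison like the one in Fig 10 can be sketched with scikit-learn's 5-fold cross-validation; the synthetic features and the specific classifier set below are stand-ins, not the paper's data or models.

```python
# Hedged sketch: comparing per-pixel classifiers with 5-fold
# cross-validation. Features are synthetic; the classifier list
# is illustrative, not the paper's exact set.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

for name, clf in [("AdaBoost", AdaBoostClassifier(random_state=0)),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f}")
```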

Table 3.

Different combinations of features and their effect on vessel segmentation accuracy.

Note that each feature combination also includes all the features listed before it.


Fig 11.

Generalization error for AdaBoost classifier using 5-fold cross-validation on sample data.


Table 4.

A comparison between different retinal vessel segmentation methods evaluated using the DRIVE and STARE datasets.


Table 5.

A comparison between different retinal vessel segmentation methods evaluated using the CHASE_DB1 dataset.


Fig 12.

ROC curve of the proposed classifier.

(a) DRIVE, (b) STARE and (c) CHASE_DB1 (CHASE) datasets.

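An ROC curve like those in Fig 12 is computed from the classifier's soft scores per pixel. A minimal scikit-learn sketch on synthetic labels and scores:

```python
# Hedged sketch: ROC curve and AUC for per-pixel vessel scores.
# Labels and scores are synthetic stand-ins for a real classifier's output.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)            # 1 = vessel pixel
scores = y_true * 0.6 + rng.random(200) * 0.4    # higher for vessels

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
print(f"AUC = {auc:.3f}")
```

Plotting `tpr` against `fpr` yields the curves shown for the DRIVE, STARE, and CHASE_DB1 evaluations.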

Table 6.

The segmentation performance of the proposed method under cross-training/testing.

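Cross-training/testing (Tables 6 and 7) means fitting the classifier on one dataset and evaluating it on another. A minimal sketch, with two synthetic sets standing in for, e.g., DRIVE and STARE:

```python
# Hedged sketch: cross-training/testing — fit on dataset A, score on
# dataset B. Both sets are synthetic stand-ins for DRIVE/STARE pixels.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X_a, y_a = make_classification(n_samples=200, n_features=10, random_state=1)
X_b, y_b = make_classification(n_samples=200, n_features=10, random_state=2)

clf = AdaBoostClassifier(random_state=0).fit(X_a, y_a)   # train on set A
acc = clf.score(X_b, y_b)                                # test on set B
print(f"cross-test accuracy: {acc:.3f}")
```

The gap between this score and same-dataset accuracy indicates how well the learned features generalize across acquisition conditions.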

Table 7.

A comparison between the average accuracy of different segmentation methods with cross-training/testing.


Table 8.

A comparison between the average processing time of different segmentation methods per image.


Fig 13.

A visual comparison between different retinal vessel segmentation methods on a sample image from the DRIVE dataset.

(a) color fundus image, (b) manual segmentation by second observer, (c) manual segmentation by first observer, (d) proposed segmentation, (e) Wang et al. [28], (f) Marín et al. [24], (g) Aslani et al. [35], (h) Han et al. [74], (i) Maharjan et al. [76].


Fig 14.

A visual comparison between different retinal vessel segmentation methods on a sample image from the STARE dataset.

(a) color fundus image, (b) manual segmentation by first observer, (c) proposed segmentation, (d) Peng et al. [75], (e) Hoover et al. [36] and (f) Soares et al. [15].


Fig 15.

A visual comparison between different retinal vessel segmentation methods on sample pathological images from the STARE dataset.

(a) color fundus image, (b) manual segmentation by first observer, (c) proposed segmentation, (d) Rodrigues et al. [77], (e) Aslani et al. [35], (f) Wang et al. [28], (g) Han et al. [74].
