Fig 1.
A color fundus image and its individual RGB channels.
(a) RGB image, (b) red channel, (c) green channel and (d) blue channel.
Fig 2.
Preprocessing and vessel enhancement steps.
Fig 3.
Feature extraction and classifier training/testing steps.
Fig 4.
(a) An image from the DRIVE test set and its manual vessel segmentations by (b) the first observer and (c) the second observer.
Fig 5.
(a) An image from the STARE dataset and its manual vessel segmentations by (b) the first observer and (c) the second observer.
Fig 6.
(a) An image from the CHASE_DB1 dataset and its manual vessel segmentations by (b) the first observer and (c) the second observer.
Fig 7.
An example of a symmetric B-COSFIRE filter configured to detect vertical bars; the support center is indicated by the cross marker [17].
The numbered dots along the concentric circles represent the positions with the strongest DoG responses.
Fig 8.
An example of an asymmetric B-COSFIRE filter configured to detect vertical bar endings; the support center is indicated by the cross marker [17].
The numbered dots along the concentric circles represent the positions with the strongest DoG responses.
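Both configurations in Figs 7 and 8 are built on Difference-of-Gaussians (DoG) responses; the B-COSFIRE output is then obtained by combining blurred and shifted versions of these responses with a weighted geometric mean [17]. The sketch below covers only the DoG stage; the 0.5 inner/outer scale ratio and the half-wave rectification are common conventions assumed here, not parameters taken from this work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma):
    """Rectified Difference-of-Gaussians response map.

    The inner scale of 0.5 * sigma is a common convention and an
    assumption here, not a value reported in the paper.
    """
    image = image.astype(float)
    inner = gaussian_filter(image, 0.5 * sigma)
    outer = gaussian_filter(image, sigma)
    return np.maximum(inner - outer, 0.0)  # half-wave rectification
```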
Fig 9.
Preprocessing and vessel enhancement steps on an example image from the DRIVE dataset.
(a) color fundus image, (b) manual segmentation (first observer), (c) green channel, (d) FOV mask, (e) image after applying CLAHE, (f) image after applying Retinex, (g) image after applying the morphological top-hat and bottom-hat transforms, (h) computed background, (i) image after subtracting the background, (j) image after applying the morphological top-hat, (k) image after applying the B-COSFIRE filter and (l) final preprocessed image after applying the Frangi filter.
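A minimal sketch of this pipeline in Python, assuming OpenCV and scikit-image. All parameters (CLAHE clip limit, structuring-element size, the Gaussian surround scale used for a single-scale Retinex, and the median-filter background estimate) are illustrative rather than the paper's values, and the B-COSFIRE stage is omitted since it has no standard Python implementation.

```python
import cv2
import numpy as np
from skimage.filters import frangi

def preprocess(rgb, fov_mask, se_size=15):
    """Sketch of the Fig 9 pipeline; all parameters are illustrative."""
    green = rgb[:, :, 1]                                   # (c) green channel

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)                          # (e) CLAHE

    # (f) single-scale Retinex: log(image) - log(Gaussian surround)
    surround = cv2.GaussianBlur(enhanced.astype(np.float32) + 1.0, (0, 0), 30)
    retinex = np.log(enhanced.astype(np.float32) + 1.0) - np.log(surround)
    retinex = cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # (g) top-hat + bottom-hat to stretch vessel/background contrast
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    tophat = cv2.morphologyEx(retinex, cv2.MORPH_TOPHAT, se)
    bottomhat = cv2.morphologyEx(retinex, cv2.MORPH_BLACKHAT, se)
    contrasted = cv2.add(cv2.subtract(retinex, bottomhat), tophat)

    # (h)-(i) estimate a smooth background and subtract it;
    # vessels are darker than the background, so they come out bright
    background = cv2.medianBlur(contrasted, 25)
    flattened = cv2.subtract(background, contrasted)

    # (l) Frangi vesselness as the final enhancement
    # (the B-COSFIRE stage of the paper is skipped in this sketch)
    vesselness = frangi(flattened.astype(np.float32), black_ridges=False)
    return vesselness * (fov_mask > 0)
```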
Table 1.
Effects of varying the feature extraction window size on classifier accuracy.
Table 2.
Effects of varying the GLCM distance on classifier accuracy, computed on a 5×5 window.
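The window-size and distance sweeps of Tables 1 and 2 can be reproduced with scikit-image's gray-level co-occurrence matrix routines. In this sketch the 5×5 uint8 window and the distance argument mirror Table 2, while the particular Haralick properties are an assumed feature set, not the paper's exact list.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, distance=1):
    """Haralick features from one grayscale window (uint8, values 0..255).

    The property set below is a common choice and an assumption here,
    not necessarily the paper's exact feature list.
    """
    glcm = graycomatrix(
        window,
        distances=[distance],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ("contrast", "homogeneity", "energy", "correlation")
    # Average each property over the four angles for rotation robustness.
    return np.array([graycoprops(glcm, p).mean() for p in props])

# e.g. glcm_features(image[r:r + 5, c:c + 5], distance=1) per pixel window
```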
Fig 10.
Retinal vessel segmentation accuracy of different classifiers using 5-fold cross-validation on sample data.
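A comparison along these lines is straightforward with scikit-learn. The sketch below uses synthetic data as a stand-in for the per-pixel feature samples, and the candidate classifiers listed are illustrative assumptions, not necessarily the set evaluated in Fig 10.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the per-pixel feature matrix and vessel labels.
X, y = make_classification(n_samples=2000, n_features=14, random_state=0)

classifiers = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "SVM (RBF)": SVC(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
```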
Table 3.
Different combinations of features and their effect on vessel segmentation accuracy.
Note that each feature combination also includes all the features listed above it.
Fig 11.
Generalization error of the AdaBoost classifier using 5-fold cross-validation on sample data.
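One way to trace a generalization-error curve like Fig 11's is through AdaBoost's staged predictions in scikit-learn; this sketch uses synthetic data and a simple hold-out split rather than the paper's 5-fold protocol, purely for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the pixel-level feature samples.
X, y = make_classification(n_samples=5000, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Test-set error after each boosting round, via staged predictions.
errors = [np.mean(pred != y_te) for pred in clf.staged_predict(X_te)]
print(f"error after round 1: {errors[0]:.4f}, after round 200: {errors[-1]:.4f}")
```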
Table 4.
A comparison of different retinal vessel segmentation methods evaluated on the DRIVE and STARE datasets.
Table 5.
A comparison of different retinal vessel segmentation methods evaluated on the CHASE_DB1 dataset.
Fig 12.
ROC curves of the proposed classifier on the (a) DRIVE, (b) STARE and (c) CHASE_DB1 (CHASE) datasets.
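Pixel-level ROC curves like these can be generated with scikit-learn; in this sketch the labels and scores are synthetic placeholders for one dataset's ground-truth vessel labels and the classifier's soft outputs.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import auc, roc_curve

def plot_roc(y_true, scores, label):
    """y_true: 0/1 vessel labels; scores: classifier soft outputs."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {auc(fpr, tpr):.4f})")

# Synthetic stand-in for one dataset's pixel labels and scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 10_000)
scores = np.clip(y_true * 0.3 + rng.normal(0.35, 0.2, y_true.size), 0, 1)
plot_roc(y_true, scores, "DRIVE (synthetic stand-in)")

plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```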
Table 6.
The segmentation performance of the proposed method under cross-training/testing.
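The cross-training/testing protocol amounts to fitting on one dataset's pixel features and scoring on another's; a minimal sketch, where the per-dataset (X, y) tuples are assumed to be extracted beforehand and AdaBoost stands in for the trained classifier.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

def cross_evaluate(train_set, test_set):
    """Fit on one dataset's pixel features, score on another's.

    train_set, test_set: (X, y) tuples of per-pixel feature matrices
    and vessel labels, assumed to be extracted beforehand.
    """
    X_train, y_train = train_set
    X_test, y_test = test_set
    clf = AdaBoostClassifier(random_state=0).fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

# e.g. cross_evaluate(drive, stare)  # train on DRIVE, test on STARE
```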
Table 7.
A comparison of the average accuracy of different segmentation methods under cross-training/testing.
Table 8.
A comparison of the average per-image processing time of different segmentation methods.
Fig 13.
A visual comparison of different retinal vessel segmentation methods on a sample image from the DRIVE dataset.
(a) color fundus image, (b) manual segmentation by second observer, (c) manual segmentation by first observer, (d) proposed segmentation, (e) Wang et al. [28], (f) Marín et al. [24], (g) Aslani et al. [35], (h) Han et al. [74], (i) Maharjan et al. [76].
Fig 14.
A visual comparison of different retinal vessel segmentation methods on a sample image from the STARE dataset.
(a) color fundus image, (b) manual segmentation by first observer, (c) proposed segmentation, (d) Peng et al. [75], (e) Hoover et al. [36] and (f) Soares et al. [15].
Fig 15.
A visual comparison of different retinal vessel segmentation methods on sample pathological images from the STARE dataset.
(a) color fundus image, (b) manual segmentation by first observer, (c) proposed segmentation, (d) Rodrigues et al. [77], (e) Aslani et al. [35], (f) Wang et al. [28], (g) Han et al. [74].