
Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography

  • Takahiro Sogawa,

    Roles Writing – original draft

    Affiliation Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan

  • Hitoshi Tabuchi,

    Roles Conceptualization, Supervision

    Affiliations Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan, Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan

  • Daisuke Nagasato,

    Roles Project administration, Writing – original draft, Writing – review & editing

    d.nagasato@tsukazaki-eye.net

    Affiliations Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan, Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan

  • Hiroki Masumoto,

    Roles Data curation, Formal analysis, Validation

    Affiliations Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan, Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan

  • Yasushi Ikuno,

    Roles Conceptualization

    Affiliation Ikuno Eye Center, Osaka, Japan

  • Hideharu Ohsugi,

    Roles Methodology

    Affiliation Ohsugi Eye Clinic, Kobe, Japan

  • Naofumi Ishitobi,

    Roles Investigation

    Affiliation Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan

  • Yoshinori Mitamura

    Roles Writing – review & editing

    Affiliation Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan

Abstract

This study examined and compared the performance of deep learning (DL) in identifying swept-source optical coherence tomography (SS-OCT) images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and OCT images with myopic macular lesions [e.g., myopic choroidal neovascularization (mCNV) and retinoschisis (RS)]. A total of 910 SS-OCT images (nHM, 146 images; HM, 531 images; mCNV, 122 images; RS, 111 images) were included in the study and analyzed by k-fold cross-validation (k = 5) using a well-known DL model, Visual Geometry Group-16 (VGG16). The binary classification of OCT images with or without myopic macular lesions, the binary classification of HM images and images with myopic macular lesions (i.e., mCNV and RS images), and the ternary classification of HM, mCNV, and RS images were examined. Sensitivity, specificity, and the area under the curve (AUC) were evaluated for the binary classifications, along with the correct answer rate for the ternary classification.

The classification results for OCT images with or without myopic macular lesions were as follows: AUC, 0.970; sensitivity, 90.6%; specificity, 94.2%. The classification results for HM images versus images with myopic macular lesions were as follows: AUC, 1.000; sensitivity, 100.0%; specificity, 100.0%. The correct answer rates in the ternary classification of HM, mCNV, and RS images were as follows: HM, 96.5%; mCNV, 77.9%; RS, 67.6%; mean, 88.9%. Using noninvasive, easy-to-obtain swept-source OCT images, the DL model was able to classify OCT images without myopic macular lesions and OCT images with myopic macular lesions such as mCNV and RS with high accuracy. These results suggest the possibility of conducting highly accurate screening of ocular diseases using artificial intelligence, which may help prevent blindness and reduce workloads for ophthalmologists.

Introduction

Myopia is a refractive error in which the image is formed in front of the retina owing to increases in axial length and refractive power, regardless of the magnitude of the error or the age of onset [1]. Myopia is associated with macular complications such as myopic choroidal neovascularization (mCNV), retinoschisis (RS), and myopic chorioretinal atrophy, which can lead to blindness. The prevalence of myopia has been increasing annually around the world, especially in East Asia, and vision loss caused by myopia is considered a global social problem [2–6].

Traditionally, the evaluation of the retina has largely been conducted with an ophthalmoscope. However, this device only allows direct observation of the retina, making an objective evaluation difficult. Optical coherence tomography (OCT) has recently made it possible to obtain detailed tomographic images of the retina noninvasively and in a short time. Swept-source OCT (SS-OCT), in particular, can capture high-quality images owing to a light source with deep tissue penetration, image averaging (arithmetic mean of multiple scans), and a fundus tracking function [7–9]. With the advancement of OCT technology, research on myopic macular diseases such as RS and mCNV, which are directly related to decreased visual function, has progressed dramatically. Studies using OCT have shown that early surgical intervention is important for the maintenance of long-term visual function in RS [10–15] and that early administration of anti–vascular endothelial growth factor drugs can help to maintain long-term visual function in mCNV [16–20]. Therefore, early detection and treatment of macular lesions associated with myopia are crucial to maintaining good vision. However, administering screening tests to all people with myopia is not realistic from a human resource or economic perspective [21].

In recent years, artificial intelligence (AI) technologies, including deep learning (DL), have made remarkable progress, and various applications in diagnostic imaging have been reported in the medical field [22]. In the field of ophthalmology, many researchers, including the authors, have already reported applications of DL to image analysis using OCT, OCT angiography, and ultrawide-field fundus ophthalmoscopy [23–33].

To the best of our knowledge, however, no studies have been performed on the automatic diagnosis of myopic macular disease using DL technology on SS-OCT images. If AI using DL can establish diagnoses as accurately as ophthalmologists, this would significantly contribute to the early detection of myopia-related complications and may help decrease the number of patients who suffer loss of vision.

In light of the above, this study sought to examine and compare DL's classification performance using OCT images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and OCT images with myopic macular lesions (e.g., mCNV and RS).

Materials and methods

Image dataset

A total of 910 SS-OCT images of eyes with or without high myopia and with or without myopic macular lesions were included in our study; images with reduced ocular media clarity due to severe cataract and/or severe vitreous hemorrhage were excluded. All images were taken using SS-OCT (Topcon DRI OCT-1 Atlantis; Topcon Corp., Tokyo, Japan). Horizontal scans (12 mm) through the fovea were performed by trained certified orthoptists. nHM was defined as an axial length of less than 26 mm and HM as an axial length of 26 mm or more, in both cases without other obvious ocular diseases. Because the purpose of this study was to evaluate the DL model's ability to detect a single condition, images of eyes with mCNV or RS that also showed complications of other retinal diseases (e.g., diabetic retinopathy, retinal vein occlusion) were excluded. The SS-OCT images were classified into nHM, HM, mCNV, and RS by retinal specialists. Some nHM and HM images included comorbidities (mild cataract, chorioretinal atrophy, epiretinal membrane, macular hole, and so on), and some RS images included retinoschisis with retinal detachment. Representative images of each class are presented in Fig 1.

Fig 1. Representative horizontal scans of SS-OCT.

Normal OCT image without HM (A), OCT image with HM and no macular lesions (B), and OCT images with mCNV (C) and RS (D) of the left eye using SS-OCT.

https://doi.org/10.1371/journal.pone.0227240.g001

The obtained images were used for training and validation with k-fold cross-validation (k = 5). With this approach, the image data were split into k groups; (k − 1) groups were used as training data, while the remaining group was used for validation [34,35]. The process was repeated k times so that each of the k groups served as the validation dataset exactly once.
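
As an illustration of this split, a minimal sketch is shown below, assuming scikit-learn's StratifiedKFold utility (the paper does not name the splitting tool) and a hypothetical integer label encoding of the four classes.

import numpy as np
from sklearn.model_selection import StratifiedKFold

def five_fold_splits(image_paths, labels, k=5, seed=0):
    """Yield (train, validation) index pairs so that each of the k groups
    serves as the validation set exactly once."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    for train_idx, val_idx in skf.split(image_paths, labels):
        yield train_idx, val_idx

# Usage (hypothetical encoding: 0 = nHM, 1 = HM, 2 = mCNV, 3 = RS):
# for train_idx, val_idx in five_fold_splits(paths, labels):
#     train_and_validate(paths[train_idx], labels[train_idx],
#                        paths[val_idx], labels[val_idx])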

Data augmentation techniques, including brightness adjustment, gamma correction, histogram equalization, noise addition, and inversion, were applied to the images in the training dataset to increase the amount of training data sixfold. Deep neural network (DNN) models were then constructed and trained using the preprocessed image data.
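
A sketch of these five augmentations (brightness shift, gamma correction, histogram equalization, noise addition, and flipping as "inversion") is given below; the specific parameter values and the use of OpenCV are assumptions, not taken from the paper.

import cv2
import numpy as np

def augment_sixfold(img):
    """img: uint8 BGR image; returns the original plus five augmented copies."""
    bright = cv2.convertScaleAbs(img, alpha=1.0, beta=30)                      # brightness shift (assumed +30)
    gamma = np.clip(((img / 255.0) ** 0.8) * 255.0, 0, 255).astype(np.uint8)   # gamma correction (assumed gamma = 0.8)
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])                          # histogram equalization on luminance
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    noisy = np.clip(img + np.random.normal(0, 10, img.shape), 0, 255).astype(np.uint8)  # Gaussian noise (assumed sigma = 10)
    flipped = cv2.flip(img, 1)                                                 # horizontal flip ("inversion")
    return [img, bright, gamma, equalized, noisy, flipped]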

Because of the retrospective and observational nature of the study, the need for written informed consent was waived by the ethics committee. The data acquired in the course of the data analysis were anonymized before we accessed them. This study was conducted in compliance with the principles of the Declaration of Helsinki and was approved by the local ethics committee of Tsukazaki Hospital.

Deep-learning model and training

In this study, the following nine DNN models were constructed and trained: Visual Geometry Group-16 (VGG16), Visual Geometry Group-19 (VGG19), Residual Network-50 (ResNet50), InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, and DenseNet201. After the training, the performance of each model was evaluated using test data [3638].

A convolutional DNN automatically learns local features of images and classifies images based on this information [39–41]. Among the nine DNN models used in this study, the network architecture of the well-known VGG16 model is explained here (Fig 2). The original SS-OCT image size was 1,038 × 802 pixels, but the images were resized to 256 × 192 pixels to shorten the analysis time. The images were read as color images, so the size of the input tensor was 256 × 192 × 3. Because each pixel value was in the range of 0 to 255, the values were first divided by 255 to normalize them to the range of 0 to 1.
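
The preprocessing just described can be summarized in a short sketch (OpenCV is assumed here for image loading and resizing; any equivalent library would do):

import cv2
import numpy as np

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_COLOR)   # read as a color image
    img = cv2.resize(img, (256, 192))          # resize from 1,038 x 802 to 256 x 192 (width x height)
    return img.astype(np.float32) / 255.0      # normalize pixel values from [0, 255] to [0, 1]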

Fig 2. Overall architecture of the VGG16 model.

The image data were converted to a resolution of 256 × 192 pixels and set as the input tensor. The data passed through convolution layers (Conv 1, 2, and 3) with ReLU activation, pooling layers (MP 1 and 2) placed after Conv 1 and Conv 3, and a dropout layer (drop rate: 0.25), followed by two fully connected layers (FC 1 and 2). In the final output layer, classification was performed using a softmax function over the classes.

https://doi.org/10.1371/journal.pone.0227240.g002

The VGG16 model consists of five convolutional blocks and several fully connected layers. Each block comprises convolutional layers and a max pooling layer. A convolutional layer captures local features in images. Because the stride was set to 1, the images were not downsized in the convolutional layers. The Rectified Linear Unit (ReLU) activation function was used to address the vanishing gradient problem [42]. The max pooling layer's stride was set to 2, so the images were downsized to compress the information [43].

Next, after passing through the five blocks, the data passed through a flatten layer and two fully connected layers. The flatten layer removes position information from the tensor representing the features extracted by the convolutional blocks, while the fully connected layers compress the information received from the previous layers and pass it on to the next layer. The softmax function, which produces the probability of each class, provided the final output.
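
The layer types described above can be illustrated with a simplified, hypothetical two-block sketch in Keras; it is not the full 16-layer VGG16, and the filter counts and dense-layer sizes are assumptions for illustration only.

from tensorflow.keras import layers, models

def vgg_style_block(x, filters):
    # two 3 x 3 convolutions with stride 1 (no downsizing) and ReLU activation
    x = layers.Conv2D(filters, (3, 3), strides=1, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, (3, 3), strides=1, padding="same", activation="relu")(x)
    # max pooling with stride 2 downsizes the feature map and compresses information
    return layers.MaxPooling2D(pool_size=(2, 2), strides=2)(x)

inputs = layers.Input(shape=(192, 256, 3))
x = vgg_style_block(inputs, 64)
x = vgg_style_block(x, 128)
x = layers.Flatten()(x)                              # position information is discarded here
x = layers.Dense(256, activation="relu")(x)          # fully connected layer 1 (size assumed)
x = layers.Dense(256, activation="relu")(x)          # fully connected layer 2 (size assumed)
outputs = layers.Dense(3, activation="softmax")(x)   # e.g., three classes for HM / mCNV / RS
model = models.Model(inputs, outputs)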

Fine-tuning was applied to increase the learning speed and to achieve high performance with limited data [44]. The parameters obtained by training on ImageNet were used as the initial parameters of the convolutional blocks.

For the weights and biases, stochastic gradient descent (learning rate = 0.0001, momentum term = 0.9) was used as the optimizer to update the parameters [45,46]. The code used to perform the above is provided in S1 File.
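
A minimal sketch of this fine-tuning setup is shown below, assuming the Keras VGG16 application with ImageNet weights as the initialization and SGD (learning rate 0.0001, momentum 0.9) as the optimizer; the head layer sizes and dropout rate are assumptions, and the authors' full training code is in S1 File.

from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

# Convolutional blocks initialized with parameters learned on ImageNet
base = VGG16(weights="imagenet", include_top=False, input_shape=(192, 256, 3))

x = layers.Flatten()(base.output)
x = layers.Dense(256, activation="relu")(x)          # FC 1 (size assumed)
x = layers.Dropout(0.25)(x)
x = layers.Dense(256, activation="relu")(x)          # FC 2 (size assumed)
outputs = layers.Dense(2, activation="softmax")(x)   # two classes for the binary tasks
model = models.Model(base.input, outputs)

model.compile(optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])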

An ensemble model was constructed by averaging the outputs of any subset of the nine network types; thus, 2^9 − 1 (= 511) types of ensemble models were constructed. Among them, the classification performance of the model with the highest AUC for each binary classification and that of the model with the highest overall correct answer rate for the ternary classification were evaluated and compared with the performance of human ophthalmologists, as described later.
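
A sketch of this ensembling is shown below: the softmax outputs of the selected member networks are simply averaged, and all 2^9 − 1 non-empty subsets of the nine networks can be enumerated with itertools (function names here are illustrative).

import itertools
import numpy as np

def ensemble_predict(member_probs):
    """member_probs: list of arrays, each of shape (n_images, n_classes)."""
    return np.mean(member_probs, axis=0)   # average the softmax outputs

def all_ensembles(probs_by_model):
    """probs_by_model: dict mapping model name -> probability array."""
    names = list(probs_by_model)
    for r in range(1, len(names) + 1):                 # subset sizes 1..9
        for subset in itertools.combinations(names, r):
            yield subset, ensemble_predict([probs_by_model[n] for n in subset])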

The models were constructed and evaluated using Python Keras (https://keras.io/ja/) with a TensorFlow backend (https://www.tensorflow.org/). For development and validation, the following computing setup was used: an Intel Core i7-7700K (Intel, Santa Clara, CA, USA) central processing unit and an NVIDIA GeForce GTX 1080 (NVIDIA, Santa Clara, CA, USA) graphics processing unit.

Comparison with ophthalmologists

For the binary classification of OCT images with or without myopic macular lesions, 46 OCT images without myopic macular lesions (nHM: 23 images; HM: 23 images) and 46 OCT images with myopic macular lesions (mCNV: 23 images; RS: 23 images) were included. For the binary classification of HM images versus images with myopic macular lesions (mCNV and RS images), 44 HM images and 44 images of myopic macular disease (mCNV: 22 images; RS: 22 images) were included. For the ternary classification of HM, mCNV, and RS images, 23 images of each class were included. The task of classifying these images was given to three human ophthalmologists, and their results were compared with those of the neural networks. The metrics used to evaluate the classification performance of the neural networks and the ophthalmologists were the AUC for the binary classifications and overall accuracy for the ternary classification.

Outcome

Performance was examined for the binary classification of OCT images with or without myopic macular lesions, the binary classification of HM images versus images with myopic macular lesions (mCNV and RS images), and the ternary classification of HM, mCNV, and RS images. Outcome measures for the binary classifications were AUC, sensitivity, and specificity, obtained from the receiver operating characteristic (ROC) curve. The ROC curve was obtained by varying the threshold applied to the probability output by the neural network for the abnormal group to judge whether an image showed myopic macular disease. For the ternary classification, the class with the maximum value among the three probabilities output by the neural network was used as the diagnosis. The overall accuracy and the accuracy within each group were obtained by comparing the diagnosis given by the network with the actual diagnosis made by the ophthalmologists.
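
A sketch of these outcome computations is shown below, assuming scikit-learn's ROC utilities (not named in the paper): the ROC curve is traced by varying the threshold on the network's abnormal-class probability, and the ternary diagnosis is the class with the maximum predicted probability.

import numpy as np
from sklearn.metrics import roc_curve, auc

def binary_outcomes(y_true, p_abnormal, threshold=0.5):
    """y_true: array of 0/1 labels; p_abnormal: network probability of the abnormal class."""
    fpr, tpr, _ = roc_curve(y_true, p_abnormal)
    area = auc(fpr, tpr)
    y_pred = (p_abnormal >= threshold).astype(int)
    sensitivity = np.mean(y_pred[y_true == 1] == 1)
    specificity = np.mean(y_pred[y_true == 0] == 0)
    return area, sensitivity, specificity

def ternary_diagnosis(probs):
    """probs: (n_images, 3) softmax outputs; returns the index of the maximum value."""
    return np.argmax(probs, axis=1)   # hypothetical order: 0 = HM, 1 = mCNV, 2 = RS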

Heatmap

A gradient-weighted class activation mapping (Grad-CAM) method [47] was used to create heatmap images indicating where the DNN focused. As an example, heatmaps for the VGG16 network are shown in Fig 3. The output of the second convolutional layer of the second convolutional block was maximized, and the Grad-CAM method was used. The ReLU function was employed during backpropagation. This process was performed with Python keras-vis (https://raghakot.github.io/keras-vis/).
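
The authors used the keras-vis package for this step; as an illustration only, the sketch below reimplements the same Grad-CAM idea with tf.GradientTape. The layer name "block2_conv2" is an assumption matching "the second convolutional layer of the second convolutional block" in the standard Keras VGG16 naming.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, class_idx, conv_layer_name="block2_conv2"):
    """image: preprocessed array of shape (192, 256, 3); returns a heatmap in [0, 1]."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output, model.output])
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x)
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)               # d(class score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))               # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                                   # keep positive contributions (ReLU)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()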

Fig 3. Representative horizontal scans of SS-OCT and corresponding heatmaps.

Presented are a normal SS-OCT image with nHM (A), and its corresponding superimposed heatmap (B); OCT image with HM and no macular lesions (C) and its corresponding superimposed heatmap (D); OCT image with myopic choroidal neovascularization (E) and its corresponding superimposed heatmap (F); and OCT image with myopic retinoschisis (G) and its corresponding superimposed heatmap (H). For all of them, the convolutional DNN focused on the macular area (red color) on the SS-OCT images (B, D, F, and H). In particular, the DNN focused on the lesion area of the SS-OCT images in the images with retinoschisis and myopic choroidal neovascularization.

https://doi.org/10.1371/journal.pone.0227240.g003

Statistical analysis

In the comparison of subjects' demographic data, an analysis of variance was used for age and axial length, and the chi-squared test was used for categorical variables (sex ratio and right:left eye ratio). The 95% confidence interval (CI) of the AUC was calculated as AUC ± 1.96 × SE(AUC), assuming a normal distribution, with the standard error SE(AUC) given by the formula of Hanley and McNeil [48]:

SE(AUC) = sqrt{ [AUC(1 − AUC) + (nP − 1)(Q1 − AUC²) + (nN − 1)(Q2 − AUC²)] / (nP · nN) }

where Q1 = AUC / (2 − AUC), Q2 = 2AUC² / (1 + AUC), nP is the number of images in the disease group [(1) 563; (2) 456], and nN is the number of images in the normal group [(1) 233; (2) 233].

Sensitivity and specificity were obtained with the classification threshold set to 0.5. The 95% CIs for sensitivity and specificity were calculated using the Clopper–Pearson method, and the 95% CI for the correct answer rate in the ternary classification was obtained in the same way.

A difference was considered significant when the p-value was less than 0.05 (p < 0.05). These statistical analyses were performed using Python SciPy (https://www.scipy.org/), Python statsmodels (http://www.statsmodels.org/stable/index.html), and R pROC (https://cran.r-project.org/web/packages/pROC/pROC.pdf).
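
As a small sketch of the interval estimation above, the Clopper–Pearson (exact binomial) CIs for sensitivity, specificity, and the correct answer rate can be computed with statsmodels (method="beta" selects the Clopper–Pearson interval); the example counts below are illustrative only.

from statsmodels.stats.proportion import proportion_confint

def clopper_pearson(n_correct, n_total, alpha=0.05):
    """Exact binomial (Clopper-Pearson) confidence interval for a proportion."""
    lower, upper = proportion_confint(n_correct, n_total, alpha=alpha, method="beta")
    return lower, upper

# Illustrative usage (hypothetical counts):
# print(clopper_pearson(211, 233))   # e.g., 211 of 233 lesion images correctly detected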

Results

Table 1 shows demographic data of the 910 subjects from whom the 910 study images were obtained. There was no significant difference between the four groups in terms of the ratio of left and right eyes (p = 0.6585, chi-squared test); however, significant differences were found in age, sex, and axial length (p < 0.001, p < 0.005, and p < 0.001, respectively; chi-squared test and analysis of variance).

Neural network performance

For the binary classification of OCT images with or without myopic macular lesions, an ensemble model of VGG16, VGG19, DenseNet121, InceptionV3 and ResNet50 showed the best performance as follows: AUC, 0.970; sensitivity, 90.6%; and specificity, 94.2%.

For the binary classification of HM images and images with myopic macular lesions (mCNV and RS), VGG16 showed the best performance as follows: AUC, 1.000; sensitivity, 100.0%; and specificity, 100.0% (Table 2).

Finally, for the ternary classification of HM images, mCNV images, and RS images, VGG16 and DenseNet121 showed the best performance as follows: HM, 96.5%; mCNV, 77.9%; and RS, 67.6%. The overall correct answer rate was 88.9% (Table 3).

Comparison of neural network and ophthalmologist outcomes

For the binary classification of a total of 92 OCT images with or without myopic macular lesions, the neural networks' performance was AUC: 0.837, whereas the ophthalmologists' performance was AUC: 0.877 (p = 0.86).

For the binary classification of a total of 88 HM images and images with myopic macular lesions (mCNV and RS), the neural networks' performance was AUC: 1.000, whereas the ophthalmologists' performance was AUC: 0.875 (p = 0.48).

Finally, for the ternary classification of a total of 69 images (HM, mCNV, and RS), the neural networks' performance for overall accuracy was 79.7%, whereas the ophthalmologists' performance for the same was 86.0% (p = 0.76).

In all three classifications, no significant difference was found between the results of neural networks and those of the ophthalmologists (Table 4).

Table 4. Results of the comparison between outcomes of neural networks and humans.

https://doi.org/10.1371/journal.pone.0227240.t004

Heatmap

The corresponding heatmaps of the representative SS-OCT images of nHM, HM, mCNV, and RS are shown in Fig 3. In the heatmaps, red indicates the strength of the deep convolutional neural network's focus. Increases in color intensity were observed around the macula in nHM and HM images, in the area highlighted by choroidal neovascularization at the macula in mCNV images, and in the retinoschisis area at the macula in RS images.

Discussion

In this study, a combination of nine DNN models (VGG16, VGG19, ResNet50, InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, and DenseNet201) was used to classify myopic macular diseases (mCNV and RS) and the absence of myopic macular disease on SS-OCT images. The results showed that our DL models were able to classify both the absence of myopic macular disease and myopic macular diseases with high accuracy. The combination of DNN models provided a correct answer rate equivalent to that of the ophthalmologists for each classification. To our knowledge, this study is the first to report highly accurate classification of RS and mCNV by DL using SS-OCT images.

A few recent studies considering AI's detection ability using OCT images have been conducted on age-related macular degeneration (AMD). Treder et al. [49] developed and evaluated a DL program to detect AMD on spectral-domain OCT (SD-OCT). Their approach was tested using 100 OCT images (AMD: 50; healthy control: 50) and yielded correct answer rates of 0.997 in the AMD group and 0.920 in the healthy control group, with a high level of significance (p < 0.001). Yoo et al. [50] also evaluated the automated detection of AMD in both OCT and fundus images using a DL program. The DL model with OCT images alone showed an AUC of 0.906 and a correct answer rate of 82.6%, the DL model with fundus images alone showed an AUC of 0.914 and a correct answer rate of 83.5%, and the DL model with a combination of OCT and fundus images showed an AUC of 0.969 and a correct answer rate of 90.5%. Similar diagnostic performance of AI on OCT images has been reported for other diseases. Our results concerning AI's diagnostic performance on images with myopic macular diseases showed sensitivity and AUC outcomes similar to those in previous reports. A neural network can devise and construct an optimal structure to learn and detect local features of complex image data with individual differences [39,41,51].

In our study, we achieved diagnostic accuracy comparable to that of human ophthalmologists by using the ensemble method, which combines various DL models, as the AI algorithm. In AI-directed classification, the lesion sites that the AI actually uses for its findings often differ from the essential lesion sites that ophthalmologists examine. In this study, however, heatmaps were used to show where the neural network focused, revealing increases in color intensity at the following sites: around the macula in nHM and HM SS-OCT images, at the RS site in RS images, and at the mCNV site in mCNV images. These focus sites match the sites on which ophthalmologists focus during diagnosis, indicating that the DNN accurately identified the locations of RS and mCNV lesions and distinguished normal images from images of myopic macular diseases based on the features of the lesions. Strictly speaking, however, it is difficult to compare diagnostic performance between humans and AI. Liu et al. [52] conducted a systematic review and meta-analysis to compare the diagnostic accuracy of DL algorithms with that of health care providers using medical images. Among the studies they examined, 14 compared DL models and health care providers using the same sample data and met other criteria, including the publication of raw data. Aggregating the performance data of these 14 studies, they found a mean sensitivity of 87.0% (95% CI: 83.0–90.2) and a mean specificity of 92.5% (95% CI: 85.1–96.4) for the DL models, and a mean sensitivity of 86.4% (95% CI: 79.9–91.0) and a mean specificity of 90.5% (95% CI: 80.6–95.7) for health care providers. Their study therefore suggested that the diagnostic performance of DL models is equivalent to that of health care providers. However, they also pointed out the lack of high-quality studies comparing AI and medical professionals, with no established comparison method currently available. In the present study, the DL model obtained a correct answer rate comparable to that of the ophthalmologists on the same sample data. Nevertheless, there is still room for improvement in this type of research, including increasing the number of images for training, further improving the AI algorithms, and combining OCT and fundus images.

At present, the early detection of myopic macular diseases requires an examination by an ophthalmologist, but there are not enough ophthalmologists worldwide to pursue this. Our study found no significant difference in classification performance between the neural networks and the ophthalmologists in the binary classification of OCT images with or without myopic macular lesions; in the binary classification of HM images and images with myopic macular lesions; or in the ternary classification of HM, mCNV, and RS images. This suggests that automated diagnosis by AI using SS-OCT image data, which can be acquired noninvasively and easily, may be very useful in screening for myopic macular disease.

Our study has a few limitations that should be considered. First, imaging diagnosis by AI would not be possible in patients with reduced ocular media clarity due to severe cataract or severe vitreous hemorrhage, or in patients for whom detailed imaging cannot be obtained due to severely poor fixation; such SS-OCT images were therefore excluded from this study. Second, the demographic data varied between groups. Myopia is significantly more common in females than in males, and the prevalence of myopic macular diseases is significantly higher in older populations; therefore, the influence of such demographic differences seems unavoidable [53–55]. Third, the AI algorithms created and tested herein might not be generalizable to other commercially available imaging devices, because we investigated images only from the Topcon DRI OCT-1. Fourth, to shorten the analysis time, the original SS-OCT images of 1,038 × 802 pixels were resized to 256 × 192 pixels. Finally, mCNV and retinal hemorrhage show similar findings on SS-OCT images; therefore, data from sources other than OCT images are required to distinguish these conditions [56,57].

Conclusion

Using SS-OCT images, the DL model was able to distinguish myopic macular diseases (mCNV and RS) from the absence of myopic macular disease with high accuracy. These findings suggest that DL could help reduce ophthalmologists' screening workloads and prevent vision loss in patients with myopic macular disease.

Acknowledgments

We thank Masayuki Miki and the orthoptists at Tsukazaki Hospital for support in collecting the data.

References

  1. McBrien NA, Adams DW. A longitudinal investigation of adult-onset and adult-progression of myopia in an occupational group. Refractive and biometric findings. Invest Ophthalmol Vis Sci. 1997;38: 321–333. pmid:9040464
  2. Holden BA, Fricke TR, Wilson DA, Jong M, Naidoo KS, Sankaridurg P, et al. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050. Ophthalmology. 2016;123: 1036–1042. pmid:26875007
  3. Hsu WM, Cheng CY, Liu JH, Tsai SY, Chou P. Prevalence and causes of visual impairment in an elderly Chinese population in Taiwan: the Shihpai Eye Study. Ophthalmology. 2004;111: 62–69. pmid:14711715
  4. Iwase A, Araie M, Tomidokoro A, Yamamoto T, Shimizu H, Kitazawa Y; Tajimi Study Group. Prevalence and causes of low vision and blindness in a Japanese adult population: the Tajimi Study. Ophthalmology. 2006;113: 1354–1362. pmid:16877074
  5. Yamada M, Hiratsuka Y, Roberts CB, Pezzullo ML, Yates K, Takano S, et al. Prevalence of visual impairment in the adult Japanese population by cause and severity and future projections. Ophthalmic Epidemiol. 2010;17: 50–57. pmid:20100100
  6. Bourne RR, Stevens GA, White RA, Smith JL, Flaxman SR, Price H, et al. Causes of vision loss worldwide, 1990–2010: a systematic analysis. Lancet Glob Health. 2013;1: e339–349. pmid:25104599
  7. Tan CS, Chan JC, Cheong KX, Ngo WK, Sadda SR. Comparison of retinal thicknesses measured using swept-source and spectral-domain optical coherence tomography devices. Ophthalmic Surg Lasers Imaging Retina. 2015;46: 172–179. pmid:25707041
  8. Mrejen S, Spaide RF. Optical coherence tomography: imaging of the choroid and beyond. Surv Ophthalmol. 2013;58: 387–429. pmid:23916620
  9. Yasuno Y, Hong Y, Makita S, Yamanari M, Akiba M, Miura M, et al. In vivo high-contrast imaging of deep posterior eye by 1-μm swept source optical coherence tomography and scattering optical coherence angiography. Optics Express. 2007;15: 6121–6139. pmid:19546917
  10. Gaucher D, Haouchine B, Tadayoni R, Massin P, Erginay A, Benhamou N, et al. Long-term follow-up of high myopic foveoschisis: natural course and surgical outcome. Am J Ophthalmol. 2007;143: 455–462. pmid:17222382
  11. Gao X, Ikuno Y, Fujimoto S, Nishida K. Risk factors for development of full-thickness macular holes after pars plana vitrectomy for myopic foveoschisis. Am J Ophthalmol. 2013;155: 1021–1027. pmid:23522356
  12. Fujimoto S, Ikuno Y, Nishida K. Postoperative optical coherence tomographic appearance and relation to visual acuity after vitrectomy for myopic foveoschisis. Am J Ophthalmol. 2013;156: 968–973. pmid:23938124
  13. Lehmann M, Devin F, Rothschild PR, Gaucher D, Morin B, Philippakis E, et al. Preoperative factors influencing visual recovery after vitrectomy for myopic foveoschisis. Retina. 2019;39: 594–600. pmid:29200098
  14. Hattori K, Kataoka K, Takeuchi J, Ito Y, Terasaki H. Predictive factors of surgical outcomes in vitrectomy for myopic traction maculopathy. Retina. 2018;38 Suppl 1: S23–S30.
  15. Sun Z, Gao H, Wang M, Chang Q, Xu G. Rapid progression of foveomacular retinoschisis in young myopics. Retina. 2019;39: 1278–1288. pmid:29746412
  16. Ikuno Y, Sayanagi K, Soga K, Sawa M, Tsujikawa M, Gomi F, et al. Intravitreal bevacizumab for choroidal neovascularization attributable to pathological myopia: one-year results. Am J Ophthalmol. 2009;147: 94–100. pmid:18774550
  17. Wu TT, Kung YH. Five-year outcomes of intravitreal injection of ranibizumab for the treatment of myopic choroidal neovascularization. Retina. 2017;37: 2056–2061. pmid:28590318
  18. Ohno-Matsui K, Ikuno Y, Lai TYY, Gemmy Cheung CM. Diagnosis and treatment guideline for myopic choroidal neovascularization due to pathologic myopia. Prog Retin Eye Res. 2018;63: 92–106. pmid:29111299
  19. Tan NW, Ohno-Matsui K, Koh HJ, Nagai Y, Pedros M, Freitas RL, et al. Long-term outcomes of ranibizumab treatment of myopic choroidal neovascularization in East-Asian patients from the RADIANCE study. Retina. 2018;38: 2228–2238. pmid:28961671
  20. Onishi Y, Yokoi T, Kasahara K, Yoshida T, Nagaoka N, Shinohara K, et al. Five-year outcomes of intravitreal ranibizumab for choroidal neovascularization in patients with pathologic myopia. Retina. 2019;39: 1289–1298. pmid:29746414
  21. Mrsnik M. Global Aging 2013: Rising to the challenge. Standard & Poor's Rating Services; 2013. https://www.nact.org/resources/2013_NACT_Global_Aging.pdf
  22. Todoroki K, Nakano T, Ishii Y, et al. Automatic analyzer for highly polar carboxylic acids based on fluorescence derivatization-liquid chromatography. Biomed Chromatogr. 2015;29: 445–451. pmid:25082081
  23. Nagasato D, Tabuchi H, Masumoto H, Goto K, Tomita R, Fujioka T, et al. Automated detection of a nonperfusion area caused by retinal vein occlusion in optical coherence tomography angiography images using deep learning. PLoS One. 2019;14: e0223965. pmid:31697697
  24. Masumoto H, Tabuchi H, Nakakura S, et al. Accuracy of a deep convolutional neural network in detection of retinitis pigmentosa on ultrawide-field images. PeerJ. 2019;7: e6900. pmid:31119087
  25. Nagasawa T, Tabuchi H, Masumoto H, Ohsugi H, Enno H, Ishitobi N, et al. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy. Int Ophthalmol. 2019;39: 2153–2159. pmid:30798455
  26. Ohsugi H, Tabuchi H, Enno H, Ishitobi N. Accuracy of deep learning, a machine-learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment. Sci Rep. 2017;7: 9425. pmid:28842613
  27. Nagasato D, Tabuchi H, Ohsugi H, et al. Deep neural network-based method for detecting central retinal vein occlusion using ultrawide-field fundus ophthalmoscopy. J Ophthalmol. 2018: 1875431. pmid:30515316
  28. Nagasawa T, Tabuchi H, Masumoto H, Enno H, Ishitobi N, et al. Accuracy of deep learning, a machine-learning technology, using ultra-widefield fundus ophthalmoscopy for detecting idiopathic macular holes. PeerJ. 2018;6: e5696.
  29. Sonobe T, Tabuchi H, Ohsugi H, Masumoto H, Ishitobi N, Morita S, et al. Comparison between support vector machine and deep learning, machine-learning technologies for detecting epiretinal membrane using 3D-OCT. Int Ophthalmol. 2019;39: 1871–1877. pmid:30218173
  30. Matsuba S, Tabuchi H, Ohsugi H, Enno H, Ishitobi N, Masumoto H, et al. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration. Int Ophthalmol. 2019;39: 1269–1275. pmid:29744763
  31. Masumoto H, Tabuchi H, Nakakura S, Ishitobi N, Miki M, Enno H. Deep-learning classifier with an ultrawide-field scanning laser ophthalmoscope detects glaucoma visual field severity. J Glaucoma. 2018;27: 647–652. pmid:29781835
  32. Grewal PS, Oloumi F, Rubin U, Tennant MTS. Deep learning in ophthalmology: a review. Can J Ophthalmol. 2018;53: 309–313. pmid:30119782
  33. De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24: 1342–1350. pmid:30104768
  34. Mosteller F, Tukey JW. Data analysis, including statistics. In: Lindzey G, Aronson E, editors. Handbook of social psychology. Reading, MA: Addison–Wesley; 1968. pp. 80–203.
  35. Kohavi R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proceedings of the 14th International Joint Conference on Artificial Intelligence. Montreal, Quebec, Canada: Morgan Kaufmann Publishers Inc.; 1995. pp. 1137–1143.
  36. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. https://arxiv.org/pdf/1409.1556.pdf
  37. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016; 2818–2826.
  38. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, Inception-ResNet and the impact of residual connections on learning. AAAI. 2017;4: 12.
  39. Deng J, Dong W, Socher R, Li L, Kai L, Li F-F. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL: IEEE; 2009. pp. 248–255.
  40. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int J Comput Vision. 2015;115: 211–252.
  41. Lee CY, Xie S, Gallagher P, Zhang Z, Tu Z. Deeply-supervised nets. In: Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS). San Diego, CA, USA: Journal of Machine Learning Research Workshop and Conference Proceedings; 2015. pp. 562–570.
  42. Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks. In: Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, FL: PMLR; 2011. pp. 315–323.
  43. Scherer D, Müller A, Behnke S. Evaluation of pooling operations in convolutional architectures for object recognition. In: Diamantaras K, Duch W, Iliadis LS, editors. Artificial neural networks–ICANN 2010. Berlin, Heidelberg: Springer; 2010. pp. 92–101.
  44. Agrawal P, Girshick R, Malik J. Analyzing the performance of multilayer neural networks for object recognition. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer vision–ECCV 2014. Cham: Springer International Publishing; 2014. pp. 329–344.
  45. Qian N. On the momentum term in gradient descent learning algorithms. Neural Networks. 1999;12: 145–151. pmid:12662723
  46. Nesterov Y. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN USSR. 1983;269: 543–547.
  47. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: IEEE International Conference on Computer Vision (ICCV). 2017; 618–626.
  48. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143: 29–36. pmid:7063747
  49. Treder M, Lauermann JL, Eter N. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning. Graefes Arch Clin Exp Ophthalmol. 2018;256: 259–265. pmid:29159541
  50. Yoo TK, Choi JY, Seo JG, Ramasubramanian B, Selvaperumal S, Kim DW. The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment. Med Biol Eng Comput. 2018;57: 677–687. pmid:30349958
  51. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. arXiv preprint arXiv:1409.0575; 2014.
  52. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digital Health. 2019;1: e271–e297.
  53. Asakuma T, Yasuda M, Ninomiya T, Noda Y, Arakawa S, Hashimoto S, et al. Prevalence and risk factors for myopic retinopathy in a Japanese population: the Hisayama Study. Ophthalmology. 2012;119: 1760–1765. pmid:22578442
  54. Vongphanit J, Mitchell P, Wang JJ. Prevalence and progression of myopic retinopathy in an older population. Ophthalmology. 2002;109: 704–711. pmid:11927427
  55. Liu HH, Xu L, Wang YX, Wang S, You QS, Jonas JB. Prevalence and progression of myopic retinopathy in Chinese adults: the Beijing Eye Study. Ophthalmology. 2010;117: 1763–1768. pmid:20447693
  56. Mi L, Zuo C, Zhang X, Liu B, Peng Y, Wen F. Fluorescein leakage within recent subretinal hemorrhage in pathologic myopia: suggestive of CNV? J Ophthalmol. 2018: 4707832. pmid:30186627
  57. Liu B, Zhang X, Mi L, Chen L, Wen F. Long-term natural outcomes of simple hemorrhage associated with lacquer crack in high myopia: a risk factor for myopic CNV? J Ophthalmol. 2018: 3150923. pmid:29619253