Predicting demographic characteristics from anterior segment OCT images with deep learning: A study protocol

Abstract

Introduction

Anterior segment optical coherence tomography (AS-OCT) is a non-contact, rapid, and high-resolution in vivo modality for imaging of the eyeball’s anterior segment structures. Because progressive anterior segment deformation is a hallmark of certain eye diseases such as angle-closure glaucoma, identification of AS-OCT structural changes over time is fundamental to their diagnosis and monitoring. Detection of pathologic damage, however, relies on the ability to differentiate it from normal, age-related structural changes.

Methods and analysis

This proposed large-scale, retrospective cross-sectional study will determine whether demographic characteristics including age can be predicted from deep learning analysis of AS-OCT images; it will also assess the importance of specific anterior segment areas of the eyeball to the prediction. We plan to extract, from SUPREME®, a clinical data warehouse (CDW) of Seoul National University Hospital (SNUH; Seoul, South Korea), a list of patients (at least 2,000) who underwent AS-OCT imaging between 2008 and 2020. AS-OCT images as well as demographic characteristics including age, gender, height, weight and body mass index (BMI) will be collected from electronic medical records (EMRs). The dataset of horizontal AS-OCT images will be split into training (80%), validation (10%), and test (10%) datasets, and a Vision Transformer (ViT) model will be built to predict demographics. Gradient-weighted Class Activation Mapping (Grad-CAM) will be used to visualize the regions of AS-OCT images that contributed to the model’s decisions. The accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC) will be used to evaluate model performance.

Conclusion

This paper presents a study protocol for prediction of demographic characteristics from AS-OCT images of the eyeball using a deep learning model. The results of this study will aid clinicians in understanding and identifying age-related structural changes and other demographics-based structural differences.

Trial registration

Registration ID with the Open Science Framework: 10.17605/OSF.IO/FQ46X.

Introduction

Anterior segment optical coherence tomography (AS-OCT) is an imaging modality that provides non-contact, rapid, and high-resolution in vivo imaging of anterior segment structures [1]. With its technological advancement, AS-OCT is becoming more clinically useful for diagnosis, monitoring, and treatment of various ocular diseases, and also for preoperative evaluation (e.g. before refractive surgeries) [2–13]. For accurate diagnosis and disease-progression determination, however, it is crucial to establish the normal ocular structure, which is possible only based on a thorough understanding of structural changes and differences related to demographic characteristics.

Previous AS-OCT imaging studies have reported anatomic differences of anterior segment structures including the cornea, anterior chamber, iris, iridocorneal angle, and trabecular meshwork that correspond to demographic variables such as age, gender, height, or weight [14–26]. Meanwhile, deep learning, a form of representation learning in which multiple processing layers automatically learn data representations with multiple levels of abstraction [27–29], has recently been introduced to medicine for the analysis of large amounts of imaging data and has shown potential added value for the diagnosis and treatment of ocular diseases [30–37].

Although several studies have related structural differences of the anterior segment to demographic characteristics [14–26], each analysis was limited to just a few specific structures, and the results, which were not verified, are difficult to apply to clinical practice. Furthermore, only a few studies have analyzed multiple demographic variables, necessitating investigation of structural differences across the whole anterior segment with diverse demographic parameters. The aim of our study, accordingly, is to predict demographics from AS-OCT images of normal eyes using deep learning. Our findings will help clinicians to understand how anterior segment structures differ according to demographics and to identify the key structures that contribute to such differences.

Materials and methods

Study design and setting

We will conduct a retrospective cross-sectional study including a large number of subjects (at least 2,000) from Seoul National University Hospital (SNUH) in South Korea, and the protocol has been registered in the Open Science Framework (https://osf.io/fq46x). A flow chart of our study is shown in Fig 1.

Fig 1. Study flow chart.

Abbreviations: AS-OCT, anterior segment optical coherence tomography; CDW, clinical data warehouse; EMR, electronic medical record; Grad-CAM, Gradient-weighted Class Activation Mapping; SNUH, Seoul National University Hospital.

https://doi.org/10.1371/journal.pone.0270493.g001

Participants and data source

A list of eligible participants meeting the search conditions (i.e., our study’s inclusion criteria, noted below) will be obtained from SUPREME®, a clinical data warehouse (CDW) of SNUH. AS-OCT (Visante; Carl Zeiss Meditec, Dublin, CA, USA) images and demographic data including age, gender, height, weight and body mass index (BMI) will be semi-automatically collected from the electronic medical record (EMR) database of BESTCare (ezCaretech, Seoul, Korea), the hospital information system of SNUH. The demographic data will be obtained through SUPREME®, which matches each patient’s data to his or her identification number.

Inclusion criteria

Patients who underwent AS-OCT imaging at SNUH between 2008 and 2020 and for whom height and weight measurements taken within 6 months before or after the imaging are available will be included.

Exclusion criteria

The exclusion criteria will be as follows: history of prior ocular surgery, laser treatment or trauma; any ocular or systemic diseases that could affect anterior segment structures, including corneal disease (e.g., corneal opacity, corneal dystrophy, keratoconus), iridocorneal angle abnormality (e.g., angle-closure glaucoma), or ocular inflammatory disease (e.g., uveitis); history of contact lens wear; any medications that could affect anterior segment structures such as the cornea, iridocorneal angle, or iris; any anterior segment abnormalities precluding visualization of the anterior segment structure (e.g., significant corneal opacity); any systemic diseases or therapies affecting height (e.g., Marfan syndrome) or weight (e.g., metabolic disease, cancer); AS-OCT images of poor quality or with artifacts.

Sample size calculation

We expect that at least 10,000 AS-OCT images will be obtained from at least 2,000 patients. Because the sample size required to train a deep learning model cannot be reliably estimated in advance, we will adjust it during the study based on the model’s performance.

Study procedure

Data preparation.

Each horizontal AS-OCT image will be split in half along the vertical midline, and each half will be resized to 384 × 384 pixels using bicubic interpolation. The entire dataset will be split into training, validation, and test datasets at a ratio of 8:1:1; within the training dataset, the right-side half-images will be flipped horizontally to align with the left-side half-images for data augmentation. A sketch of this preprocessing step is given below.
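As a rough illustration of this preprocessing, the following Python sketch (using OpenCV, as listed under software specifications; the file path and function name are hypothetical) splits a scan at the vertical midline, resizes both halves to 384 × 384 with bicubic interpolation, and optionally mirrors the right half for training-set alignment:

```python
import cv2

def preprocess_asoct(path, flip_right=True):
    """Split a horizontal AS-OCT scan at the vertical midline and resize.

    Returns the left and right halves, each 384 x 384; per the protocol,
    right halves in the training set are mirrored so that their anatomy
    is aligned with the left halves (data augmentation).
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)       # raw horizontal scan
    h, w = img.shape[:2]
    left, right = img[:, : w // 2], img[:, w // 2:]    # split at the vertical midline

    # Bicubic interpolation to the 384 x 384 input size expected by ViT-B/16.
    left = cv2.resize(left, (384, 384), interpolation=cv2.INTER_CUBIC)
    right = cv2.resize(right, (384, 384), interpolation=cv2.INTER_CUBIC)

    if flip_right:
        right = cv2.flip(right, 1)                     # 1 = horizontal flip
    return left, right
```

The 8:1:1 split into training, validation, and test sets would then be performed on the resulting half-images.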

Deep learning model development.

For demographics prediction, we will utilize a Vision Transformer (ViT) model (available at https://github.com/lukemelas/PyTorch-Pretrained-ViT), an architecture that was recently applied to image classification and achieved state-of-the-art top-1 accuracy on ImageNet [38]. Among the several ViT variants, we will use the ViT-Base model (ViT-B/16), pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k. An overview of the ViT model is illustrated in Fig 2.
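As a minimal sketch of model instantiation, the snippet below uses the pytorch_pretrained_vit package from the repository linked above; the weight name 'B_16_imagenet1k' and the num_classes / image_size keyword arguments follow our reading of that repository's documentation and should be treated as assumptions rather than the final implementation:

```python
import torch
from pytorch_pretrained_vit import ViT  # pip install pytorch_pretrained_vit (linked repository)

# ViT-B/16 pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k at 384 x 384,
# with the classification head resized here for a binary demographic label.
model = ViT('B_16_imagenet1k', pretrained=True, num_classes=2, image_size=384)
model.eval()

# Grayscale AS-OCT halves would be replicated to 3 channels to match the
# pre-trained patch embedding (an assumption; not specified in the protocol).
dummy = torch.randn(1, 3, 384, 384)
with torch.no_grad():
    logits = model(dummy)    # class scores from the MLP head, shape (1, 2)
print(logits.shape)
```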

Fig 2. Vision transformer model.

Abbreviations: BMI, body mass index; MLP, Multi-layer Perceptron.

https://doi.org/10.1371/journal.pone.0270493.g002

The 384 × 384 images will be fed into the network as 576 non-overlapping 16 × 16 patches, each flattened to a one-dimensional vector as input to the ViT encoder. The encoder consists of alternating layers of multi-head self-attention and Multi-layer Perceptron (MLP) blocks, with Layernorm (LN) applied before and residual connections applied after every block [38]. The output of the transformer encoder will be passed through the MLP head to obtain a score value for each class.
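For reference, the patch count quoted above follows directly from the image and patch sizes; with C denoting the number of input channels, each flattened patch is then linearly projected to the 768-dimensional token embedding used by ViT-B/16:

\[
N = \left(\frac{384}{16}\right)^{2} = 24 \times 24 = 576,
\qquad
\text{flattened patch length} = 16 \times 16 \times C.
\]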

For each demographic characteristic, we will use the following cut-offs: ≤ 75 vs. > 75 years, and ≤ 60, 60–75, and ≥ 75 years for age; ≤ 170 vs. > 170 cm for male height; ≤ 155 vs. > 155 cm for female height; < 70 vs. ≥ 70 kg for male weight; < 55 vs. ≥ 55 kg for female weight; and < 23, 23–25, and ≥ 25 kg/m2 for BMI. Also, to adjust for age, the other demographic characteristics (sex, height, weight, and BMI) will be analyzed in age-matched groups; likewise, to adjust for sex, the analyses of age, height, and weight will be stratified by sex.
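As an illustration of how these cut-offs could be applied to the collected EMR variables, the following sketch bins hypothetical age and BMI columns with pandas; the DataFrame, column names, and the handling of boundary values are assumptions that would need to match the final analysis plan:

```python
import pandas as pd

# Hypothetical per-patient demographic table pulled from the EMR.
df = pd.DataFrame({
    "age": [52, 68, 77],
    "sex": ["F", "M", "F"],
    "bmi": [21.5, 24.2, 27.8],
})

# Binary age classes: <= 75 vs. > 75 years.
df["age_2class"] = (df["age"] > 75).astype(int)

# Three age classes: <= 60, 60-75, >= 75 years (boundary handling per protocol).
df["age_3class"] = pd.cut(
    df["age"], bins=[-float("inf"), 60, 75, float("inf")],
    labels=["<=60", "60-75", ">=75"])

# Three BMI classes: < 23, 23-25, >= 25 kg/m2 (right-open intervals).
df["bmi_3class"] = pd.cut(
    df["bmi"], bins=[-float("inf"), 23, 25, float("inf")],
    right=False, labels=["<23", "23-25", ">=25"])

print(df)
```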

To visualize the regions of the AS-OCT images that contributed to the model’s decisions, Gradient-weighted Class Activation Mapping (Grad-CAM) [39] will be extracted from the first LN of the last block of the transformer encoder.
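One possible implementation uses the pytorch-grad-cam library with a reshape transform that folds the ViT's patch tokens back into a spatial grid; the target-layer attribute path shown here is hypothetical and depends on how the chosen ViT implementation names its modules, and `model` refers to the network from the earlier sketch:

```python
import torch
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

def reshape_transform(tokens, height=24, width=24):
    # Drop the class token and fold the 576 patch tokens into a 24 x 24 grid.
    spatial = tokens[:, 1:, :].reshape(tokens.size(0), height, width, tokens.size(2))
    return spatial.permute(0, 3, 1, 2)   # (B, C, H, W) layout expected by Grad-CAM

# Hypothetical attribute path to the first LayerNorm of the last encoder block.
target_layers = [model.transformer.blocks[-1].norm1]

cam = GradCAM(model=model, target_layers=target_layers,
              reshape_transform=reshape_transform)

input_tensor = torch.randn(1, 3, 384, 384)                  # one preprocessed half-image
heatmap = cam(input_tensor=input_tensor,
              targets=[ClassifierOutputTarget(1)])[0]        # map for class index 1
```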

Hardware specifications.

CPU: Intel(R) Xeon(R) Gold 5120 CPU @ 2.20 GHz.

GPU: Tesla V100 32GB x2.

Software specifications.

Preprocessing: OpenCV 3.4.2.

Deep learning libraries: PyTorch 1.7.1, Python 3.7.

Statistical methods and performance evaluation

To evaluate the performance of the deep learning model, the accuracy, sensitivity, and specificity will be calculated using the standard definitions given below. In addition, the area under the receiver operating characteristic (ROC) curve (AUC), which is obtained by thresholding the network output after normalization to the range 0–1 with the softmax function, will be calculated.
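With TP, TN, FP, and FN denoting the numbers of true positives, true negatives, false positives, and false negatives, respectively:

\[
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}.
\]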

Ethics and dissemination

This study was approved by the Institutional Review Board of SNUH (IRB No. H-2104-085-1212), and the collection and analysis of the data were permitted by the Big data Review Board (BRB) of SNUH. The study protocol followed the tenets of the Declaration of Helsinki. Informed consent will be waived due to the retrospective nature of the study. Study findings will be disseminated through publication in a peer-reviewed journal and presented at relevant conferences.

Results

To evaluate the feasibility of the deep learning model, we performed a pilot study. For prediction of age, a total of 2,615 AS-OCT images were used in the analysis: 2,102 images (360 and 1,742 for age ≤ 65 and > 65 years, respectively) as the training dataset; 261 images (54 and 207, respectively) as the validation dataset; and 252 images (36 and 216, respectively) as the test dataset.

For classifying age ≤ 65 vs. > 65 years, the ViT model without pre-training achieved an AUC of 0.816, which was lower than that of a DenseNet121 convolutional neural network (CNN; AUC 0.843). With pre-training, however, the ViT model outperformed the DenseNet121 CNN, achieving an AUC of 0.885 (Table 1).

Table 1. Performance of deep learning models for prediction of age ≤ 65 vs. > 65 years.

https://doi.org/10.1371/journal.pone.0270493.t001

Discussion

AS-OCT, on the strength of its technical advancement, is becoming an increasingly powerful imaging modality for evaluation of various ocular diseases in the field of ophthalmology. In addition, the ViT, a recent extension of the Transformer to computer vision inspired by its success in natural language processing, has attained excellent results in image classification and has shown its usefulness in ophthalmology as well [40, 41]. Indeed, our pilot study demonstrated the promising potential of ViT for prediction of age from AS-OCT images; it also showed that the pre-trained ViT outperformed a CNN, a result also supported by Dosovitskiy et al.’s study [38]. Combining the advantages of deep learning with those of AS-OCT, we expect that our study protocol will help to enhance the utilization of AS-OCT in clinical practice. Furthermore, our study is of great value in that it will enable hidden information on systemic factors to be obtained from ocular imaging, as was also demonstrated in earlier studies [42–44].

The main strength of our study is its planned use of a deep learning method that will enable automated and fast analysis of massive, high-resolution AS-OCT imaging data from a large population. The limitations of the proposed research include potential selection bias due to its retrospective design and its limited evaluation of structural differences among ethnic groups. Further studies of prospective design that include diverse ethnic groups will further expand our knowledge and understanding of normal anterior segment structures.

Conclusion

This paper presents a study protocol for prediction of demographic characteristics based on AS-OCT images of the eyeball using a deep learning model. The results of this study will help clinicians to better understand anterior segment structural changes and differences according to demographic variables in normal eyes, which should ultimately aid in evaluation and management of ocular diseases in clinical practice.

References

  1. Ang M, Baskaran M, Werkmeister RM, Chua J, Schmidl D, Aranha Dos Santos V, et al. Anterior segment optical coherence tomography. Prog Retin Eye Res. 2018;66:132–56. pmid:29635068.
  2. Abou Shousha M, Karp CL, Perez VL, Hoffmann R, Ventura R, Chang V, et al. Diagnosis and management of conjunctival and corneal intraepithelial neoplasia using ultra high-resolution optical coherence tomography. Ophthalmology. 2011;118(8):1531–7. pmid:21507486.
  3. Vajzovic LM, Karp CL, Haft P, Shousha MA, Dubovy SR, Hurmeric V, et al. Ultra high-resolution anterior segment optical coherence tomography in the evaluation of anterior corneal dystrophies and degenerations. Ophthalmology. 2011;118(7):1291–6. pmid:21420175.
  4. Gumus K, Crockett CH, Pflugfelder SC. Anterior segment optical coherence tomography: a diagnostic instrument for conjunctivochalasis. Am J Ophthalmol. 2010;150(6):798–806. pmid:20869039.
  5. Nanji AA, Sayyad FE, Galor A, Dubovy S, Karp CL. High-Resolution Optical Coherence Tomography as an Adjunctive Tool in the Diagnosis of Corneal and Conjunctival Pathology. Ocul Surf. 2015;13(3):226–35. pmid:26045235.
  6. Nolan WP, See JL, Chew PTK, Friedman DS, Smith SD, Radhakrishnan S, et al. Detection of primary angle closure using anterior segment optical coherence tomography in Asian eyes. Ophthalmology. 2007;114(1):33–9. pmid:17070597.
  7. Doors M, Tahzib NG, Eggink FA, Berendschot TTJM, Webers CAB, Nuijts RMMA. Use of Anterior Segment Optical Coherence Tomography to Study Corneal Changes After Collagen Cross-linking. Am J Ophthalmol. 2009;148(6):844–51. pmid:19781685.
  8. Nolan WP, See JL, Aung T, Friedman DS, Chan YH, Smith SD, et al. Changes in angle configuration after phacoemulsification measured by anterior segment optical coherence tomography. J Glaucoma. 2008;17(6):455–9. pmid:18794679.
  9. Qian CX, Hassanaly S, Harissi-Dagher M. Anterior segment optical coherence tomography in the long-term follow-up and detection of glaucoma in Boston type I keratoprosthesis. Ophthalmology. 2015;122(2):317–25. pmid:25264027.
  10. Tarnawska D, Wylegala E. Monitoring cornea and graft morphometric dynamics after descemet stripping and endothelial keratoplasty with anterior segment optical coherence tomography. Cornea. 2010;29(3):272–7. pmid:20098306.
  11. Ramakrishnan R, Mitra A, Kader MA, Das S. To study the efficacy of laser peripheral iridoplasty in the treatment of eyes with primary angle closure and plateau iris syndrome, unresponsive to laser peripheral iridotomy, using anterior-segment OCT as a tool. J Glaucoma. 2016;25(5):440–6. pmid:26372154.
  12. Singh M, Chew PTK, Friedman DS, Nolan WP, See JL, Smith SD, et al. Imaging of trabeculectomy blebs using anterior segment optical coherence tomography. Ophthalmology. 2007;114(1):47–53. pmid:17070581.
  13. Kang EM, Ryu IH, Lee G, Kim JK, Lee IS, Jeon GH, et al. Development of a Web-Based Ensemble Machine Learning Application to Select the Optimal Size of Posterior Chamber Phakic Intraocular Lens. Transl Vis Sci Technol. 2021;10(6):5. pmid:34111253.
  14. Cheon MH, Sung KR, Choi EH, Kang SY, Cho JW, Lee S, et al. Effect of age on anterior chamber angle configuration in Asians determined by anterior segment optical coherence tomography; clinic-based study. Acta Ophthalmol. 2010;88(6):e205–10. pmid:20670345.
  15. Gold ME, Kansara S, Nagi KS, Bell NP, Blieden LS, Chuang AZ, et al. Age-related changes in trabecular meshwork imaging. Biomed Res Int. 2013;2013:295204. pmid:24163814.
  16. Invernizzi A, Giardini P, Cigada M, Viola F, Staurenghi G. Three-Dimensional Morphometric Analysis of the Iris by Swept-Source Anterior Segment Optical Coherence Tomography in a Caucasian Population. Invest Ophthalmol Vis Sci. 2015;56(8):4796–801. pmid:26218907.
  17. Jonuscheit S, Doughty MJ, Martin R, Rio-Cristobal A. Relationship between Corneal Thickness and Radius to Body Height. Optom Vis Sci. 2017;94(3):380–6. pmid:27984505.
  18. Kim BJ, Ryu IH, Kim SW. Age-related differences in corneal epithelial thickness measurements with anterior segment optical coherence tomography. Jpn J Ophthalmol. 2016;60(5):357–64. pmid:27324656.
  19. Li Q, Zong Y, Wen H, Yu J, Zhou C, Jiang C, et al. Measurement of Iris Thickness at Different Regions in Healthy Chinese Adults. J Ophthalmol. 2021;2021:2653564. pmid:34055394.
  20. Peterson JR, Blieden LS, Chuang AZ, Baker LA, Rigi M, Feldman RM, et al. Establishing Age-Adjusted Reference Ranges for Iris-Related Parameters in Open Angle Eyes with Anterior Segment Optical Coherence Tomography. PLoS One. 2016;11(1):e0147760. pmid:26815917.
  21. Rigi M, Blieden LS, Nguyen D, Chuang AZ, Baker LA, Bell NP, et al. Trabecular-iris circumference volume in open angle eyes using swept-source fourier domain anterior segment optical coherence tomography. J Ophthalmol. 2014;2014:590978. pmid:25210623.
  22. Xie X, Corradetti G, Song A, Pardeshi A, Sultan W, Lee JY, et al. Age- and refraction-related changes in anterior segment anatomical structures measured by swept-source anterior segment OCT. PLoS One. 2020;15(10):e0240110. pmid:33095821.
  23. Xu L, Cao WF, Wang YX, Chen CX, Jonas JB. Anterior chamber depth and chamber angle and their associations with ocular and general parameters: the Beijing Eye Study. Am J Ophthalmol. 2008;145(5):929–36. pmid:18336789.
  24. Yang Y, Hong J, Deng SX, Xu J. Age-related changes in human corneal epithelial thickness measured with anterior segment optical coherence tomography. Invest Ophthalmol Vis Sci. 2014;55(8):5032–8. pmid:25052994.
  25. Yuen LH, He M, Aung T, Htoon HM, Tan DT, Mehta JS. Biometry of the cornea and anterior chamber in chinese eyes: an anterior segment optical coherence tomography study. Invest Ophthalmol Vis Sci. 2010;51(7):3433–40. pmid:20130280.
  26. Sihota R, Vashisht P, Sharma A, Chakraborty S, Gupta V, Pandey RM. Anterior segment optical coherence tomography characteristics in an Asian population. J Glaucoma. 2012;21(3):180–5. pmid:21430553.
  27. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44. pmid:26017442.
  28. Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform. 2018;19(6):1236–46. pmid:28481991.
  29. Chan HP, Samala RK, Hadjiiski LM, Zhou C. Deep Learning in Medical Image Analysis. Adv Exp Med Biol. 2020;1213:3–21. pmid:32030660.
  30. Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunovic H. Artificial intelligence in retina. Prog Retin Eye Res. 2018;67:1–29. pmid:30076935.
  31. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316(22):2402–10. pmid:27898976.
  32. Ting DSW, Cheung CYL, Lim G, Tan GSW, Quang ND, Gan A, et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes. JAMA. 2017;318(22):2211–23. pmid:29234807.
  33. Takahashi H, Tampo H, Arai Y, Inoue Y, Kawashima H. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PLoS One. 2017;12(6):e0179790. pmid:28640840.
  34. Chen X, Xu Y, Wong DWK, Wong TY, Liu J. Glaucoma detection based on deep convolutional neural network. 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2015. pp. 715–8.
  35. Asaoka R, Murata H, Hirasawa K, Fujino Y, Matsuura M, Miki A, et al. Using Deep Learning and Transfer Learning to Accurately Diagnose Early-Onset Glaucoma From Macular Optical Coherence Tomography Images. Am J Ophthalmol. 2019;198:136–45. pmid:30316669.
  36. Schlegl T, Waldstein SM, Bogunovic H, Endstrasser F, Sadeghipour A, Philip AM, et al. Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning. Ophthalmology. 2018;125(4):549–58. pmid:29224926.
  37. Prahs P, Radeck V, Mayer C, Cvetkov Y, Cvetkova N, Helbig H, et al. OCT-based deep learning algorithm for the evaluation of treatment indication with anti-vascular endothelial growth factor medications. Graefes Arch Clin Exp Ophthalmol. 2018;256(1):91–8. pmid:29127485.
  38. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. 2020.
  39. Chefer H, Gur S, Wolf L. Transformer Interpretability Beyond Attention Visualization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021. pp. 782–91.
  40. Wu J, Hu R, Xiao Z, Chen J, Liu J. Vision Transformer-based recognition of diabetic retinopathy grade. Med Phys. 2021;48(12):7850–63. pmid:34693536.
  41. Sun R, Li Y, Zhang T, Mao Z, Wu F, Zhang Y. Lesion-Aware Transformers for Diabetic Retinopathy Grading. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021. pp. 10938–47.
  42. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2(3):158–64. pmid:31015713.
  43. Rim TH, Lee CJ, Tham YC, Cheung N, Yu M, Lee G, et al. Deep-learning-based cardiovascular risk stratification using coronary artery calcium scores predicted from retinal photographs. Lancet Digit Health. 2021;3(5):e306–e16. pmid:33890578.
  44. Xiao W, Huang X, Wang JH, Lin DR, Zhu Y, Chen C, et al. Screening and identifying hepatobiliary diseases through deep learning using ocular images: a prospective, multicentre study. Lancet Digit Health. 2021;3(2):e88–e97. pmid:33509389.