Prediction of hypertension, hyperglycemia and dyslipidemia from retinal fundus photographs via deep learning: A cross-sectional study of chronic diseases in central China

  • Li Zhang,

    Roles Data curation, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Mengya Yuan,

    Roles Formal analysis, Investigation, Methodology, Software, Validation

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Zhen An,

    Roles Conceptualization, Investigation, Resources, Software

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Xiangmei Zhao,

    Roles Data curation, Investigation, Methodology, Visualization

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Hui Wu,

    Roles Data curation, Investigation, Resources

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Haibin Li,

    Roles Data curation, Investigation, Methodology, Resources, Supervision

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Ya Wang,

    Roles Data curation, Formal analysis, Methodology, Software, Validation

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Beibei Sun,

    Roles Data curation, Formal analysis, Investigation, Methodology, Software, Validation

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Huijun Li,

    Roles Investigation, Methodology, Project administration

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Shibin Ding,

    Roles Investigation, Methodology

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Xiang Zeng,

    Roles Formal analysis, Investigation

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Ling Chao,

    Roles Investigation, Methodology

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Pan Li ,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Software, Supervision, Writing – original draft, Writing – review & editing

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Weidong Wu

    Roles Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing

    Affiliation School of Public Health, Xinxiang Medical University, Xinxiang, Henan Province, China


Abstract

Retinal fundus photography provides a non-invasive approach for identifying early microcirculatory alterations of chronic diseases prior to the onset of overt clinical complications. Here, we developed neural network models to predict hypertension, hyperglycemia, dyslipidemia, and a range of risk factors from retinal fundus images obtained from a cross-sectional study of chronic diseases in rural areas of Xinxiang County, Henan, in central China. In total, 1222 high-quality retinal images and over 50 anthropometric and biochemical measurements were generated from 625 subjects. The models in this study achieved an area under the ROC curve (AUC) of 0.880 in predicting hyperglycemia, of 0.766 in predicting hypertension, and of 0.703 in predicting dyslipidemia. In addition, these models predicted, with AUC > 0.7, several blood erythrocyte parameters, including hematocrit (HCT) and mean corpuscular hemoglobin concentration (MCHC), as well as a cluster of cardiovascular disease (CVD) risk factors. Taken together, deep learning approaches are feasible for predicting hypertension, dyslipidemia, diabetes, and risks of other chronic diseases.


Introduction

Hypertension, hyperglycemia, and dyslipidemia are disorders defined by direct measures of blood pressure, fasting plasma glucose, and triglyceride levels, respectively. These disorders frequently co-occur and are among the primary risk factors for cardiovascular disease (CVD), the leading cause of morbidity and mortality worldwide [1]. As China's population ages, changes in lifestyle and longer life expectancy have led to an increase in CVD events, and CVD now accounts for more than 40% of deaths from all causes [2, 3]. A rise in CVD on this scale is not only a serious public health problem but also a substantial burden on healthcare systems and budgets. Thus, measures to prevent and control CVD in China are urgently needed.

Over the past few years, advances in the field of digital retinal photography and imaging techniques have made it possible to characterize subtle changes in retinal blood vessels precisely. From retinal fundus images, early microcirculation changes in chronic diseases prior to the onset of obvious clinical complications can be detected directly and non-invasively [4].

Changes in the retina have been used by physicians to assess a patient's risk of a number of cardiovascular conditions, including diabetes and hypertension [4–7]; that is, features of the eye may reflect the condition of the cardiovascular system. Poplin et al. [8] showed that retinal images alone were sufficient to predict several CVD risk factors, such as age, gender, smoking status, blood pressure, and body mass index (BMI). In this study, we used deep learning approaches to predict hypertension, hyperglycemia, dyslipidemia, and a collection of other risk factors from retinal fundus photographs in a cross-sectional study of chronic diseases in central China. The subjects in this study were mainly from rural areas of Xinxiang County, Henan Province, China.

Deep learning is a family of machine learning algorithms based on learning data representations: a machine is fed raw data and automatically discovers the representations needed for detection or classification [9, 10]. In recent years, deep learning algorithms such as convolutional neural networks (CNNs) have been widely applied to medical image analysis [11–15]. Transfer learning with CNNs is a technique in which learning a new task (e.g., classifying medical images) builds on previously learned tasks (e.g., ImageNet, a dataset of millions of common everyday objects), making the learning process faster, more accurate, and less demanding of training data [12]. Transfer learning has become integral to many applications, especially in medical imaging [12–18]. Many medical imaging applications have demonstrated promising results and reached expert-level diagnostic accuracy, including classification of Alzheimer's disease stages using 3D MRI scans [16], detection and quantification of macular fluid in OCT images [17], breast-mass identification in mammography scans [18], diagnosis of pediatric pneumonia from chest X-ray images [12], and detection of diabetic retinopathy in retinal fundus photographs [14].

The aim of the present study was to develop automated artificial intelligence models, applicable to large-scale population screening, which could be used to predict hypertension, hyperglycemia, dyslipidemia, and other risk factors for CVD based on retinal fundus images [8, 19]. Large-scale detection and early treatment of hypertension, hyperglycemia, and dyslipidemia enabled by this technology, especially in rural areas, may reduce both cardiovascular events and the economic burden on national health care systems.

Materials and methods

Study population

The dataset in this study was generated from April to June, 2017 through recruiting 625 participants, aged 24–83 years, across several rural villages of Xinxiang County, Henan province in central China to assess the relationships between retinal vascular profiles and chronic diseases. The protocol of this study was reviewed and approved by the Ethics Committee of Xinxiang Medical University for Human Studies (IRB registration number XY-HS04). Each subject signed an informed consent form and went through a series of health measurements and questionnaires. Blood samples of each subject were collected to assess biochemical alterations from April 20 to June 6, 2017.

Trained physicians collected the subjects' blood samples in the morning after overnight fasting using standard methods. Trained and certified medical students measured resting blood pressure with an automated OMRON HEM-7071 professional portable blood pressure monitor while the participant was seated. Height, waist circumference, and hip circumference were each measured twice with a tape, and body weight was obtained with an automated weight monitor following the manufacturer's instructions. Body weight and the averaged height, waist, and hip measurements were used to calculate the BMI and waist-hip ratio (WHR).
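The two derived indices are simple ratios of the averaged measurements; a minimal sketch (the function names and example values are illustrative, not from the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist-hip ratio from the averaged tape measurements."""
    return waist_cm / hip_cm

# Example: hypothetical averages of two measurements for one subject
print(round(bmi(70.0, 1.75), 1))              # 22.9
print(round(waist_hip_ratio(80.0, 95.0), 2))  # 0.84
```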

Smoking, alcohol drinking, and salt intake statuses were obtained using a questionnaire. For smoking and drinking, the participants were asked to self-identify as a current drinker (drinking more than 12 times in the past year) or smoker (having smoking habits in the past six months), a former drinker or smoker, or a non-drinker or non-smoker. Those with a drinking or smoking history were then asked for additional details. For the purpose of this study, the population was binarized into those who were current drinkers or smokers and those who were not. For salt intake status, the participants were asked to self-identify whether their eating habits were salty, choosing from four categories (light, general, salty, very salty); the subjects were then classified into two groups, a salty and a non-salty intake population.

Paired color retinal fundus photographs of the participants were taken using the Canon CR-2 Digital Non-Mydriatic Retinal Camera. For each subject, we selected one image from each eye; for 28 subjects, only one image could be selected because the other was missing or of low quality. In total, we obtained 1222 images from 625 subjects. Fundus images in this dataset are consistently sized (2736 × 1824 pixels).

Risk factors selected to develop classification models

The primary application of the deep neural network models was the detection of hypertension, hyperglycemia, and dyslipidemia from retinal fundus images. In addition, we aimed to train deep neural network models to predict a variety of risk factors related to the development of these disorders, including age, BMI, WHR, lifestyle data (drinking, smoking, and salty taste status), and biochemical parameters from blood samples (hematocrit (HCT), total bilirubin (T-BIL), direct bilirubin (D-BIL), mean corpuscular hemoglobin concentration (MCHC), total cholesterol (TC), and low-density lipoprotein cholesterol (LDL-C)). For each of the three disorders and each risk factor, the subjects and their corresponding retinal images were divided into two classes according to the corresponding classification criterion. For example, for the variable 'hypertension', subjects with systolic BP ≥ 130 mmHg, diastolic BP ≥ 85 mmHg, or treatment of previously diagnosed hypertension were classified into the abnormal group, and all other subjects into the normal group; for the variable 'smoking', subjects were divided into smoking and non-smoking groups based on their self-reported information. When training the model for each factor, only the subjects with the corresponding risk factor outcome information and their fundus images were selected. The classification criteria for each risk factor are available in S1 Table.
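The hypertension criterion described above can be expressed as a small labeling function; this is a sketch of the binarization rule as stated in the text (the function name and example values are illustrative):

```python
def hypertension_group(sbp_mmhg: float, dbp_mmhg: float, treated: bool) -> str:
    """Binarize per the stated criterion: SBP >= 130 mmHg, DBP >= 85 mmHg,
    or treatment of previously diagnosed hypertension -> 'abnormal'."""
    if sbp_mmhg >= 130 or dbp_mmhg >= 85 or treated:
        return "abnormal"
    return "normal"

print(hypertension_group(128, 82, False))  # normal
print(hypertension_group(124, 80, True))   # abnormal (on treatment)
```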

Model development

In this study, we used a transfer learning strategy to process retinal images and to develop models for accurate diagnosis of hypertension, hyperglycemia, dyslipidemia, and related risk factors. For training and testing, we used the open-source machine learning platform TensorFlow [20]. All experiments were run on a machine learning workstation with an Intel i7-6850K CPU @ 3.60 GHz, 16 GB of RAM, and four NVIDIA GeForce Titan Xp GPUs with 12 GB of memory each.

The training process of transfer learning consists of loading a pre-trained convolutional neural network model with its pre-trained weights and then retraining the parameters of the fully-connected and softmax layers to classify images [21]. The pre-trained model used in this study was the Inception-v3 image recognition neural network, which was trained on a dataset of 1000 classes and more than a million images of common everyday objects from the original ImageNet database [22, 23]. Although Inception-v3 was not developed for medical image recognition, it has been successfully used to classify medical images via transfer learning [12, 24], including retinal fundus images [8, 14, 25]. In this study, the convolutional layers of Inception-v3 were frozen and used as fixed feature extractors: images were first passed through the Inception-v3 network, which extracts general features and converts the image data into feature vectors, and a classification head with fully-connected and softmax layers was then trained to classify the images and output the predicted labels.
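The essence of this setup is that only the classification head is trained on fixed feature vectors. The sketch below illustrates that step with plain numpy, substituting random 2048-dimensional vectors with a small class-dependent shift for the real (frozen) Inception-v3 features; the learning rate and weight decay are assumed values, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for Inception-v3 feature vectors (the real pool layer is 2048-D);
# here we draw random vectors and give one class a small mean shift.
n, d = 200, 2048
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1] += 0.05

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Retrain only the fully-connected + softmax head by gradient descent,
# with an L2 penalty (weight decay 1e-4 is an assumed value).
W = np.zeros((d, 2))
Y = np.eye(2)[y]  # one-hot labels
for _ in range(200):
    P = softmax(X @ W)
    grad = X.T @ (P - Y) / n + 1e-4 * W
    W -= 0.5 * grad

train_acc = ((X @ W).argmax(axis=1) == y).mean()
```

Because the convolutional layers stay frozen, each image only needs one forward pass through the feature extractor, which is what makes training fast on a small dataset.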

We trained a separate model for each selected risk factor. For each model, the whole dataset consisted of retinal images labeled into two classes based on the subjects' corresponding outcome information and the factor's classification criterion. The dataset was then randomly divided into three portions: a training dataset (80%), a tuning validation dataset (10%), and a test dataset (10%). The training and tuning validation datasets were used to develop the model, and the test dataset was used to validate the performance of the final model. During training, a backpropagation algorithm was used to optimize the network's internal parameters [22], and an L2 regularization technique was used to avoid overfitting [26, 27].
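The 80/10/10 split can be sketched as follows for the 1222 images in this dataset (the seed is arbitrary; rounding of the split points is an assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
n_images = 1222
idx = rng.permutation(n_images)  # shuffle image indices

# 80% training, 10% tuning validation, 10% test
n_train = int(0.8 * n_images)
n_val = int(0.1 * n_images)
train, val, test = np.split(idx, [n_train, n_train + n_val])

print(len(train), len(val), len(test))  # 977 122 123
```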

Image preprocessing and augmentation

All images were resized to a consistent size (800 × 800 pixels) before training. To correct uneven illumination and brightness and to adjust contrast variations in the retinal images, we pre-processed all images using a subtractive normalization approach (S1 Fig). The normalization formula is I' = α·I + β·I_Gaussian + γ, where I is the original image, I_Gaussian is the image processed by a Gaussian filter, α = 4, β = −4, and γ = −128.
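A minimal numpy sketch of this subtractive normalization, using a hand-rolled separable Gaussian filter (the filter radius and sigma are assumed values, and a small grayscale test image stands in for a fundus photograph):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter (a simple stand-in for a library call)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)
    return out

def subtractive_normalize(img, alpha=4, beta=-4, gamma=-128, sigma=10):
    """I' = alpha*I + beta*Gaussian(I) + gamma, clipped to the 8-bit range."""
    out = alpha * img + beta * gaussian_blur(img, sigma) + gamma
    return np.clip(out, 0, 255)

img = np.full((64, 64), 128.0)  # flat mid-gray test image
out = subtractive_normalize(img)
```

Subtracting the Gaussian-blurred image removes the slowly varying illumination component, so only local structure (vessels, lesions) survives at high contrast.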

Training deep neural networks on imbalanced datasets, in which the majority of instances belong to one class and far fewer belong to the others, is an important problem, as imbalanced datasets are widespread in the real world [28, 29]. Classifiers trained on imbalanced data are often biased towards the majority class and therefore misclassify the minority class at higher rates [28]. To overcome this challenge, only hypertension, hyperglycemia, dyslipidemia, and the 13 related risk factors whose two classes had a ratio below 4:1 were used to train classification models in this study (S1 Table). The minority class of each variable was oversampled using an augmentation approach until the two classes were equal in size. Data augmentation was conducted with Augmentor, an image augmentation library designed to aid the artificial generation of image data for machine learning [30].
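The oversampling step can be illustrated with a single simple augmentation (a horizontal flip; Augmentor offers far richer transforms such as rotations and distortions). The class sizes and image shapes below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced set: 300 majority-class vs 100 minority-class images
majority = rng.random((300, 32, 32, 3))
minority = rng.random((100, 32, 32, 3))

# Oversample the minority class until the two classes are equal in size
n_needed = len(majority) - len(minority)
extra_idx = rng.choice(len(minority), size=n_needed, replace=True)
extra = minority[extra_idx][:, :, ::-1, :]  # flip sampled images left-right
minority_balanced = np.concatenate([minority, extra])

print(len(majority), len(minority_balanced))  # 300 300
```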

Statistical analysis

The output of each prediction model is a pair of continuous numbers between 0 and 1 that sum to 1, each representing the probability of one diagnostic label. For example, the hypertension prediction model might output 'hypertension: 0.897 and non-hypertension: 0.103'; the final prediction is the label with the higher probability, which in this example is hypertension. For each risk factor, the accuracy of the prediction model was measured by dividing the number of correctly labeled images by the total number of images available for that factor. ROC curves were used to plot the false positive rate against the true positive rate of the model on the test images, and the AUC was used to evaluate model performance for the classification of each binary risk factor.
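The label rule and the accuracy measure described above amount to two one-line functions (the example probabilities are the ones quoted in the text):

```python
def predict_label(probs: dict) -> str:
    """Return the diagnostic label with the higher predicted probability."""
    return max(probs, key=probs.get)

def accuracy(predicted, actual) -> float:
    """Fraction of correctly labeled images."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

output = {"hypertension": 0.897, "non-hypertension": 0.103}
print(predict_label(output))  # hypertension
```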


Results

Study subjects and characteristics used to develop classification models

We obtained 1222 retinal fundus images from 625 subjects in the cross-sectional study of chronic diseases in Henan Province in central China. The mean age of the subjects was 54.70 ± 11.67 years, and 55.86% self-identified as having at least one 'chronic disease diagnosed by a doctor,' such as hypertension, hyperlipidemia, diabetes, or coronary disease. All characteristics of the subjects are shown in Table 1, which includes 50 variables from blood tests and self-reported questionnaires. In this study, to overcome the machine learning problems caused by an imbalanced dataset, only hypertension, hyperglycemia, dyslipidemia, and the 13 related risk factors whose two classes had a ratio below 4:1 were used to train classification models (S1 Table).

Table 1. Characteristics of 625 subjects in our chronic disease cohort dataset.

Model performance in detecting hypertension, hyperglycemia and dyslipidemia

We evaluated the models' performance in detecting hypertension, hyperglycemia, and dyslipidemia from retinal fundus images by assessing prediction accuracy and generating receiver operating characteristic (ROC) curves. During training, we used an L2 regularization technique to prevent overfitting and stopped training when neither accuracy nor cross-entropy could be improved further. The accuracy and cross-entropy for the three disorders are shown in S2 Fig, and the ROC curves are shown in Fig 1. We achieved an accuracy of 78.7% in detecting hyperglycemia, with an area under the ROC curve (AUC) of 0.880; an accuracy of 68.8% in detecting hypertension, with an AUC of 0.766; and an accuracy of 66.7% in detecting dyslipidemia, with an AUC of 0.703.
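The AUC values above can be computed directly from predicted probabilities via the rank statistic (the probability that a randomly chosen positive scores higher than a randomly chosen negative, with ties counted half). A minimal numpy sketch with made-up labels and scores:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as a rank statistic: P(score of a random positive >
    score of a random negative), ties counted as 0.5."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
auc = roc_auc(y, s)  # 8 of the 9 positive-negative pairs are ranked correctly
```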

Fig 1. ROC curves for predicted models in detecting three disorders.

Model performance in classification of cardiovascular disease risk factors

Although hypertension, hyperglycemia, and dyslipidemia can be discriminated from one another by levels of fasting plasma glucose (FPG), systolic blood pressure (SBP) or diastolic blood pressure (DBP), and triglyceride (TG), the diagnosis and prevention of these disorders remain challenging in clinical practice. Their underlying causes include genetics, physical inactivity, aging, a proinflammatory state, and hormonal changes. Obesity, age, smoking, and a collection of CVD risk factors can lead to the development of hypertension, hyperglycemia, and dyslipidemia [31, 32]; these risk factors should be considered when managing cardiovascular disease prevention in clinical practice.

We further selected a number of other parameters that appeared to be related to CVD and trained the deep learning models to classify each parameter via retinal fundus images. Our models achieved an AUC >0.7 in predicting age, gender, drinking status, smoking status, salty taste, BMI, WHR, and HCT (Table 2).

Table 2. Classification criterion and the model performance of each variable.


Discussion

Changes in the retinal vasculature are associated with cardiovascular disorders such as hypertension, metabolic syndrome, diabetes, and stroke [4, 5, 7]. Long duration of diabetes and hypertension is a main contributor to the onset of eye diseases such as diabetic retinopathy (DR) and hypertensive retinopathy (HR) [4]. In recent years, deep learning methods have been increasingly used to improve clinical practice through medical images, including retinal fundus images [9, 12, 15]. These automated models can be as accurate as, and in some cases superior to, human experts in diagnosing diseases [14, 15, 25, 33, 34]. Triwijoyo et al. [33] developed a model for predicting HR that achieved a prediction accuracy of 0.986, and several studies of DR detection achieved good performance with AUC > 0.989 [14, 25, 33, 34]. However, these studies focused on the ocular complications of cardiovascular diseases; for DR, Tapp et al. [35] showed that its prevalence is less than 10% in those with a diabetes duration of less than 5 years. In our study, we generated a retinal fundus image dataset from a population in rural areas of central China and demonstrated that deep learning models can predict hypertension (AUC = 0.766), hyperglycemia (AUC = 0.880), and dyslipidemia (AUC = 0.703) from retinal fundus images alone. This accuracy is higher than that of a recently published study by Dai et al. [36], which also used a Chinese population and showed that hypertension could be predicted from fundus images with an accuracy of 0.609. These results suggest that early microcirculatory changes may reflect disorders of cardiovascular risk factors before the onset of clinical cardiovascular disease or ocular complications. Moreover, our study is not limited to predicting these three disorders. Consistent with the study by Poplin et al. [8] in a mainly Caucasian and Hispanic population, we found that cardiovascular risk factors such as age, gender, smoking status, and BMI can be predicted directly from retinal fundus images of a rural population in central China. Since most cardiovascular risk factors are reflected in retinal fundus images alone, our deep learning methods may therefore offer a novel, noninvasive measurement of early changes in the vasculature and allow the identification of people at risk of cardiovascular diseases. Importantly, our results show that applying deep learning to retinal fundus images can also predict blood erythrocyte parameters, including HCT and MCHC (Table 2). Previous studies have confirmed that erythrocyte parameters are associated with cardiometabolic disorders such as metabolic syndrome [37] and that elevated blood erythrocyte parameters can adversely affect retinal vessel calibers [38].

CVD, the major cause of death in China, has become a major public health concern [3, 39, 40]. The increasing prevalence of CVD in China is closely linked to a number of risk factors, including hypertension, dyslipidemia, diabetes, smoking, obesity, and metabolic syndrome [31, 39]. An efficient and accurate identification of these risk factors is essential to ensure the prevention and control of CVD. For example, stroke can be reduced by 50% by controlling hypertension [3]. In this study, we applied a deep learning algorithm to analyze retinal fundus images to develop models that, without anthropometry and biochemical data, predicted many cardiovascular risk factors. This technology, coupled with informed policy and intervention strategies, offers a potentially automated approach to preventing and controlling CVD in large populations, especially in rural areas of China.

Despite the good performance of our models, our study has several limitations. Our dataset is relatively small, although transfer learning algorithms can achieve highly accurate models with relatively small training datasets [11–13]. A larger population with more cardiovascular events would allow deep learning models to be trained and evaluated with higher accuracy and confidence. In addition, validating our models on datasets from other sources would strengthen all of these predictions. Overcoming these limitations with such datasets would also provide an opportunity to iteratively re-train the deep learning algorithms and improve model performance.

In conclusion, we show that applying deep learning to retinal fundus images is useful for predicting the important CVD risk factors of hypertension, dyslipidemia, and diabetes. More importantly, it makes cardiovascular risk assessment of a large population both technically and economically feasible. Our work also suggests that deep learning analysis of retinal fundus images may be useful for diagnosing widespread systemic vascular diseases.

Supporting information

S1 Checklist. STROBE statement—checklist of items that should be included in reports of observational studies.


S1 Fig. Image pre-processing.

A: Original image, B: Image after pre-processing.


S2 Fig. Plots showing the model performance in the training and validation datasets.

The accuracy for the three disorders is shown in A (Hyperglycemia), B (Hypertension), and C (Dyslipidemia); the cross-entropy for the three disorders is shown in D–F. Training dataset: orange; Validation dataset: blue.


S1 Table. Group members of subjects and their corresponding retinal images in each variable and their image number ratio of each two groups.



Acknowledgments

We greatly appreciate Dr. David C. Henke at the School of Medicine, University of North Carolina at Chapel Hill, USA, for revision of this manuscript.


References

1. Koene RJ, Prizment AE, Blaes A, Konety SH. Shared risk factors in cardiovascular disease and cancer. Circulation. 2016;133(11): 1104–14. pmid:26976915
2. Bundy JD, He J. Hypertension and related cardiovascular disease burden in China. Annals of global health. 2016;82(2): 227–33. pmid:27372527
3. Weiwei C, Runlin G, Lisheng L, Manlu Z, Wen W, Yongjun W, et al. Outline of the report on cardiovascular diseases in China, 2014. European heart journal supplements: journal of the European Society of Cardiology. 2016;18: F2–F11.
4. Nguyen TT, Wong TY. Retinal vascular changes and diabetic retinopathy. Current diabetes reports. 2009;9(4): 277–83. pmid:19640340
5. Kaushik S, Kifley A, Mitchell P, Wang JJ. Age, blood pressure, and retinal vessel diameter: separate effects and interaction of blood pressure and age. Investigative ophthalmology & visual science. 2007;48(2): 557–61.
6. Kifley A, Liew G, Wang JJ, Kaushik S, Smith W, Wong TY, et al. Long-term effects of smoking on retinal microvascular caliber. American journal of epidemiology. 2007;166(11): 1288–97. pmid:17934202
7. Wong TY, Duncan BB, Golden SH, Klein R, Couper DJ, Klein BE, et al. Associations between the metabolic syndrome and retinal microvascular signs: The Atherosclerosis Risk In Communities study. Investigative ophthalmology & visual science. 2004;45(9): 2949–54.
8. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering. 2018;2(3): 158–64. pmid:31015713
9. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553): 436–44. pmid:26017442
10. Schmidhuber J. Deep learning in neural networks: an overview. Neural networks: the official journal of the International Neural Network Society. 2015;61: 85–117.
11. Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, et al. A guide to deep learning in healthcare. Nature medicine. 2019;25(1): 24–9. pmid:30617335
12. Kermany DS, Goldbaum M, Cai W, Valentim CCS, Liang H, Baxter SL, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5): 1122–31. pmid:29474911
13. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nature Biomedical Engineering. 2018;2(10): 719–31. pmid:31015651
14. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22): 2402–10. pmid:27898976
15. Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, et al. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA: a cancer journal for clinicians. 2019;69(2): 127–57.
16. Maqsood M, Nazir F, Khan U, Aadil F, Jamal H, Mehmood I, et al. Transfer learning assisted classification and detection of Alzheimer's disease stages using 3D MRI scans. Sensors (Basel). 2019;19(11): 2645.
17. Schlegl T, Waldstein SM, Bogunovic H, Endstrasser F, Sadeghipour A, Philip AM, et al. Fully automated detection and quantification of macular fluid in OCT using deep learning. Ophthalmology. 2018;125: 549–558. pmid:29224926
18. Samala RK, Chan HP, Hadjiiski L, Helvie MA, Wei J, Cha K. Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography. Medical physics. 2016;43: 6654. pmid:27908154
19. Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunovic H. Artificial intelligence in retina. Progress in retinal and eye research. 2018;67: 1–29. pmid:30076935
20. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al. TensorFlow: A system for large-scale machine learning. 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI '16); 2016.
21. Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging. 2016;35(5): 1285–98. pmid:26886976
22. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems; 2012.
23. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception architecture for computer vision. Proceedings of the IEEE conference on computer vision and pattern recognition; 2016.
24. Chang J, Yu J, Han T, Chang H-j, Park E. A method for classifying medical images using transfer learning: A pilot study on histopathology of breast cancer. IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom); 2017.
25. Li Z, Keel S, Liu C, He Y, Meng W, Scheetz J, et al. An automated grading system for detection of vision-threatening referable diabetic retinopathy on the basis of color fundus photographs. Diabetes Care. 2018;41: 2509–2516. pmid:30275284
26. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15(1): 1929–58.
27. Han S, Pool J, Tran J, Dally W. Learning both weights and connections for efficient neural network. Advances in neural information processing systems; 2015.
28. Lopez V, Fernandez A, Garcia S, Palade V, Herrera F. An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Information Sciences. 2013;250: 113–41.
29. Yan Y, Chen M, Shyu M, Chen S. Deep learning for imbalanced multimedia data classification. 2015 IEEE International Symposium on Multimedia (ISM); 2015.
30. Bloice MD, Stocker C, Holzinger A. Augmentor: an image augmentation library for machine learning. 2017; arXiv:1708.04680.
31. Bays H, Abate N, Chandalia M. Adiposopathy: sick fat causes high blood sugar, high blood pressure and dyslipidemia. Future cardiology. 2005;1(1): 39–59. pmid:19804060
32. Halpern A, Mancini MC, Magalhaes ME, Fisberg M, Radominski R, Bertolami MC, et al. Metabolic syndrome, dyslipidemia, hypertension and type 2 diabetes in youth: from diagnosis to treatment. Diabetology & metabolic syndrome. 2010;2: 55.
33. Triwijoyo BK, Budiharto W, Abdurachman E. The classification of hypertensive retinopathy using convolutional neural network. Procedia Computer Science. 2017;116: 166–173.
34. Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology. 2017;124: 962–969. pmid:28359545
35. Tapp RJ, Shaw JE, Harper CA, de Courten MP, Balkau B, McCarty DJ, et al. The prevalence of and factors associated with diabetic retinopathy in the Australian population. Diabetes Care. 2003;26: 1731–1737. pmid:12766102
36. Dai G, He W, Xu L, Pazo EE, Lin T, Liu S, et al. Exploring the effect of hypertension on retinal microvasculature using deep learning on East Asian population. PLOS ONE. 2020;15: e0230111. pmid:32134976
37. Wu S, Lin H, Zhang C, Zhang Q, Zhang D, Zhang Y, et al. Association between erythrocyte parameters and metabolic syndrome in urban Han Chinese: a longitudinal cohort study. BMC public health. 2013;13: 989. pmid:24144016
38. Liew G, Wang JJ, Rochtchina E, Wong TY, Mitchell P. Complete blood count and retinal vessel calibers. PLOS ONE. 2014;9(7): e102230. pmid:25036459
39. Hu SS, Kong LZ, Gao RL, Zhu ML, Wang W, Wang YJ, et al. Outline of the report on cardiovascular disease in China, 2010. Biomedical and environmental sciences: BES. 2012;25(3): 251–6. pmid:22840574
40. Li H, Ge J. Cardiovascular diseases in China: Current status and future perspectives. International journal of cardiology Heart & vasculature. 2015;6: 25–31.