
Prediction of pathological complete response to neoadjuvant chemotherapy in locally advanced breast cancer by using a deep learning model with 18F-FDG PET/CT

Gülcan Bulut, Hasan Ikbal Atilgan, Gökalp Çınarer, Kazım Kılıç, Deniz Yıkar, and Tuba Parlar contributed equally to this work.

  • Gülcan Bulut

    Roles Writing – review & editing

    gulcanbulut07@gmail.com

    Affiliation Division of Medical Oncology, International Medicana Hospital, Izmir, Turkey

  • Hasan Ikbal Atilgan

    Roles Data curation

    Affiliation Department of Nuclear Medicine, Mustafa Kemal University Medical School, Hatay, Turkey

  • Gökalp Çınarer

    Roles Formal analysis

    Affiliation Department of Computer Engineering, Faculty of Engineering and Architecture, Bozok University, Yozgat, Turkey

  • Kazım Kılıç

    Roles Investigation

    Affiliation Department of Computer Programming, Yozgat Vocational High School, Bozok University, Yozgat, Turkey

  • Deniz Yıkar

    Roles Data curation

    Affiliation Division of Nuclear Medicine, Hatay Training and Research Hospital, Hatay, Turkey

  • Tuba Parlar

    Roles Validation, Writing – original draft

    Affiliation Department of Computer Technologies, Mustafa Kemal University, Hatay, Türkiye

Abstract

Objectives

The aim of this study was to determine whether 18F-FDG PET/CT imaging analyzed with a deep learning method is predictive of pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) in locally advanced breast cancer (LABC).

Introduction

NAC is the standard treatment for locally advanced breast cancer (LABC). Pathological complete response (pCR) after NAC is considered a good predictor of disease-free survival (DFS) and overall survival (OS). Therefore, there is a need to develop methods that can predict pCR at the time of diagnosis.

Methods

This study was designed as a retrospective chart review. For the convolutional neural network model, a total of 355 PET/CT images from 31 patients were used. All patients underwent primary breast surgery after completing NAC.

Results

Pathological complete response was achieved in a total of 9 patients. The results show that the proposed deep convolutional neural network model achieved remarkable performance, predicting pathological complete response with an accuracy of 84.79%.

Conclusion

It was concluded that deep learning methods can help predict the response of breast cancer to neoadjuvant treatment.

1. Introduction

Breast cancer is the most common form of cancer and the second most common cause of cancer death among women. Neoadjuvant chemotherapy (NAC) is the standard treatment for locally advanced breast cancer (LABC) and is used for tumor downstaging to enable breast-conserving surgery [1]. Pathological complete response (pCR) after NAC is considered a good predictive marker for disease-free survival (DFS) and overall survival (OS), particularly in patients with more aggressive subtypes such as triple-negative or HER2-positive breast cancer [2, 3]. Therefore, numerous studies have examined clinicopathological features that can be used to predict pCR in patients receiving NAC [4]. Clinical tumor size (cT) and tumor grade are among the clinicopathological features used to predict pCR [4, 5]. Apart from molecular subtypes, no other biomarker, including the Ki-67 value and residual cancer burden, has so far been validated as a predictive marker for pCR after NAC [6].

Early detection of the response to NAC is important to avoid the toxicity of ineffective chemotherapy [7]. Conventional imaging tools, including fluorine-18 (18F) fluorodeoxyglucose (FDG) PET/CT and magnetic resonance imaging (MRI), have been used to evaluate responders after NAC [8]. PET/CT provides reliable information not only about changes in tumor size but also about tumor response [9]. 18F-FDG PET/CT, which is widely used in cancer imaging, reflects the glucose metabolism of cancer cells and has been used for staging and restaging of cancers and for assessing therapy response. 18F-FDG PET/CT can provide early detection of the response to chemotherapy in locally advanced and metastatic breast cancer [10] and is a valuable imaging tool for the early assessment of response to NAC in breast cancer [7]. Some researchers have studied the role of radiomic features from 18F-FDG PET/CT images in predicting pathological complete response to neoadjuvant chemotherapy in locally advanced breast cancer patients [11]. 18F-FDG PET/CT can allow early detection of pathological complete response after NAC in triple-negative breast cancer [12], and changes in the SUVmax value after NAC are associated with pathological complete response [13].

In recent years, deep neural networks have been used effectively in the medical image processing field to classify radiological images [14]. The most popular architecture is the convolutional neural network (CNN). CNN models minimize data pre-processing and learn features from large amounts of data using convolution and pooling operations. Numerous studies have investigated the use of CNNs for diagnostic purposes in radiological imaging of the breast [15–17]. However, few studies have reported their use for predicting pathological complete response to neoadjuvant chemotherapy from 18F-FDG PET/CT images [18, 19]. 18F-FDG PET/CT has high sensitivity and specificity for predicting breast cancer metastasis [20]. 18F-FDG PET/CT images have been studied with CNNs for diagnosis, staging, prognosis prediction, and evaluation of treatment response in clinical oncology, including lung cancer, head and neck cancer, prostate cancer, cervical cancer, and sarcomas [21].

To the best of our knowledge, no study has yet attempted to predict the pathological complete response of locally advanced breast cancer to neoadjuvant chemotherapy using a residual deep neural network model on 18F-FDG PET/CT images. The primary aim of this study was to examine whether clinicopathological features and 18F-FDG PET/CT imaging analyzed with a deep learning method are predictive of pCR after NAC in locally advanced breast cancer.

2. Material and method

2.1. Patients

The study included 31 patients with a mean age of 54.26±10.71 years (min: 33, max: 82), as can be observed in Table 1. The medical records of patients who received NAC after a diagnosis of LABC in the Medical Oncology Division of Defne Hospital between 2013 and 2020 were retrospectively evaluated. The study was approved by the Mustafa Kemal University Medical School Clinical Research Committee (decision number 13, dated 29.11.2019). The participants provided written informed consent, and all patient details included in the study were de-identified.

As can be seen in Table 1, one patient had radiological T1 disease, 11 patients T2, 17 patients T3, and two patients T4. Seven patients had radiological N1, 20 patients N2, and four patients N3 lymph node metastasis. The mean SUVmax value of the primary tumors was 12.84±7.56 (min: 4.25, max: 32.24). Nine of the 31 patients (29.03%) had a complete pathological response, while the remaining 22 patients (70.97%) had residual disease in the surgical pathology specimens.

Patients were selected according to inclusion and exclusion criteria. The inclusion criteria were female gender, clinical stage II to III according to the 8th edition of the American Joint Committee on Cancer tumor-node-metastasis (TNM) classification [22], a complete blood count performed prior to NAC, and the availability of postoperative pathology reports after the surgical procedure. As part of the NAC regimen, the patients were administered taxane-based regimens (paclitaxel 80 mg/m2 for 12 weeks or 4 cycles of docetaxel 75 mg/m2 every 3 weeks) combined with anthracycline-based regimens (4 cycles of doxorubicin 60 mg/m2 and cyclophosphamide 600 mg/m2, or cyclophosphamide 600 mg/m2 and epirubicin 50 mg/m2, every 3 weeks). Human epidermal growth factor receptor 2 (HER2)-positive patients were treated with trastuzumab (at the time of the study, pertuzumab was not available for neoadjuvant therapy in our country) [23]. The exclusion criteria were the absence of 18F-FDG PET/CT images, incomplete NAC, and the absence of surgical treatment.

2.2. Subtypes of breast cancer

Tumor size and lymph node involvement level were evaluated in all patients included in the study. Needle biopsy specimens performed before NAC and tissues removed by surgical procedure were subjected to histopathological and immunohistochemical (IHC) examinations.

Estrogen receptor (ER), progesterone receptor (PR), and HER2 status were determined by the IHC method; the specimens of patients with a staining level of ≥ 1% in the tumor cells were considered as having positive ER and PR status; further, HER2 status was regarded as positive if it was 3+ and negative if it was ≤1+. Then, HER2 status was confirmed by fluorescence in situ hybridization (FISH) for patients with 2+ HER2 status on IHC testing. Breast cancer was classified into four subtypes: HR+, HER2+; HR+, HER2-; HR-, HER2+; and HR-, HER2-.
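For illustration, the subtype rules above can be expressed as a short Python function. This is only a sketch: the argument names, the handling of a missing FISH result, and the example values are ours and are not taken from the study's data dictionary; the logic simply restates the IHC/FISH criteria described in this section.

```python
def classify_subtype(er_percent, pr_percent, her2_ihc, her2_fish=None):
    """Map IHC/FISH results to one of the four subtypes used in this study.

    Thresholds follow the rules stated in Section 2.2; field names and the
    treatment of a missing FISH result are illustrative assumptions.
    """
    hr_positive = er_percent >= 1 or pr_percent >= 1       # ER or PR staining >= 1%
    if her2_ihc == 3:
        her2_positive = True
    elif her2_ihc == 2:                                     # equivocal: confirm by FISH
        her2_positive = bool(her2_fish)                     # assumed negative if no FISH result
    else:                                                   # 0 or 1+
        her2_positive = False
    return f"{'HR+' if hr_positive else 'HR-'}, {'HER2+' if her2_positive else 'HER2-'}"

# Example: ER 5%, PR 0%, HER2 2+ confirmed by FISH -> "HR+, HER2+"
print(classify_subtype(5, 0, 2, her2_fish=True))
```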

2.3. pCR

In the postoperative pathological evaluation, the absence of invasive tumors in the breast tissue or lymph node (regardless of the presence of an in-situ component) was defined as pCR (ypT0/ypN0).

2.4. 18F- FDG PET/CT imaging

All patients underwent one PET/CT scan for the staging of breast cancer before NAC. Blood glucose level was measured after fasting for at least 6 hours. If the blood glucose level was less than 180 mg/dL, 3.7 MBq/kg of 18F-FDG was injected intravenously. Images were acquired from the vertex to the mid thigh approximately 60 minutes after injection using an integrated PET/CT scanner (Siemens Biograph mCT, Siemens Healthcare, Erlangen, Germany). CT scans were acquired with a tube voltage of 120 kV, an effective tube current of 80–250 mAs, a rotation time of 0.5 s, a detector configuration of 16×1.2 mm, and a slice thickness of 5 mm, and were used for attenuation correction and fusion with the PET images. PET scans were acquired with an acquisition time of 1.5–2 minutes for each of 6–8 bed positions. PET, CT, and fused PET/CT images were evaluated on a SyngoVia workstation (Siemens AG, Muenchen, Germany). Thirty-one PET/CT scans from 31 patients were used for the deep learning process. First, the PET/CT fusion slices containing the primary tumor in the breast were identified for the deep learning process. One out of every three slices was recorded in Digital Imaging and Communications in Medicine (DICOM) format [24]. The DICOM files were converted to JPEG format with Syngo FastView software (Siemens AG, Muenchen, Germany) (Fig 1), and the deep convolutional neural network models were applied to these images. The JPEG images were between 57 and 97 KB in size, with a resolution of 96 dpi and 1605×1064 pixels. The SUVmax values of the primary tumors were calculated on the DICOM images with Syngo FastView software by drawing VOIs on the tumors and using the following formula: SUVmax = maximum tracer activity concentration in the VOI / (injected activity / patient weight).
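As a worked illustration of the body-weight SUV formula above, the following minimal Python sketch computes SUVmax from voxel activity values within a VOI. It assumes decay-corrected activities and a tissue density of 1 g/mL, and the numbers are placeholders rather than values from this study.

```python
import numpy as np

def suv_max(voi_activity_bq_per_ml, injected_dose_bq, patient_weight_g):
    """Body-weight SUVmax for a VOI: max activity / (injected activity / weight).

    Assumes activity values are decay-corrected to injection time and that
    1 g of tissue corresponds to 1 mL; inputs are illustrative only.
    """
    voi = np.asarray(voi_activity_bq_per_ml, dtype=float)
    return voi.max() / (injected_dose_bq / patient_weight_g)

# Example: peak voxel 18500 Bq/mL, 259 MBq injected, 70 kg patient -> SUVmax ~ 5.0
print(round(suv_max([12000.0, 18500.0, 9300.0], 259e6, 70e3), 2))
```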

Fig 1. Deep convolutional neural network model: diagram of image cropping for the residual deep convolutional neural network algorithm.

The cubic-shaped region of interest was selected at the largest cross-sectional area of the lesion and resized to 224 × 224 pixels. (a) pCR: 0; (b) pCR: 1.

https://doi.org/10.1371/journal.pone.0290543.g001

The study focused mainly on PET/CT images; the deep convolutional neural network algorithm was applied only to the PET/CT fusion images. SUVmax values were recorded but were not included in the proposed deep learning process.
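The cropping and resizing step shown in Fig 1 can be sketched with Pillow as below. The file names, tumour coordinates, and box size are hypothetical; in the study the regions were delineated by specialist physicians on the fused PET/CT slices rather than by fixed coordinates.

```python
from PIL import Image

def crop_and_resize(jpeg_path, center_xy, box_size, out_path, target=224):
    """Crop a square region around the primary tumour and resize it to target x target.

    Paths, coordinates and box size are placeholder assumptions for illustration.
    """
    img = Image.open(jpeg_path).convert("RGB")
    cx, cy = center_xy
    half = box_size // 2
    roi = img.crop((cx - half, cy - half, cx + half, cy + half))  # (left, upper, right, lower)
    roi.resize((target, target)).save(out_path, "JPEG")           # default resampling filter

# Example call (hypothetical file name and tumour location):
# crop_and_resize("patient01_slice05.jpg", (802, 530), 300, "patient01_roi05.jpg")
```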

2.5. Model development

Convolutional neural networks (CNNs) are the most popular type of artificial neural network in the field of image classification [25]. Numerous studies have been published on the potential of CNNs for diagnosing cancer from radiological images. Deep neural networks are difficult to train because of the vanishing gradient problem. The residual network (ResNet) architecture was proposed by He et al. [26] in the field of image recognition to solve this problem.

The ResNet architecture introduces a residual learning block to reduce the degradation of deep neural networks. As can be observed in Fig 2, the shortcut connection adds the input x to the output of the function F(x). The resulting function H(x) = F(x) + x is transmitted to the next layer, and the process is repeated for the other residual learning blocks. In this way, the number of learned parameters increases with depth, which can negatively affect training time and learning speed; the bottleneck technique is used to mitigate this. With the bottleneck technique, the 224×224 input image is reduced to 56×56. ReLU and normalization operations are applied in each layer, and each residual block contains three convolutional layers with filters. In ResNet, a batch normalization layer is added after each convolutional layer, an important feature that distinguishes it from other architectures. The normalization process also reduces the learning and error rates. The model allows inputs to propagate more quickly between layers through the residual learning blocks, and the use of the non-linear ReLU lowers neuron density and reduces the number of operations. The ResNet architecture achieves better accuracy without increasing the complexity of the model by using a feedforward network with shortcut connections that add the inputs back into the network to generate new outputs.
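A minimal PyTorch sketch of such a bottleneck residual block is given below (1×1, 3×3, 1×1 convolutions, batch normalization after each convolution, and the shortcut addition H(x) = F(x) + x). It follows the generic ResNet design of He et al. [26] and is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Bottleneck residual block: H(x) = F(x) + x, with 1x1-3x3-1x1 convolutions
    and batch normalization after each convolution, as described above."""

    def __init__(self, in_ch, mid_ch, stride=1):
        super().__init__()
        out_ch = mid_ch * 4
        self.f = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        # Projection shortcut when the shape changes, identity otherwise
        self.shortcut = (nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                                       nn.BatchNorm2d(out_ch))
                         if stride != 1 or in_ch != out_ch else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.f(x) + self.shortcut(x))   # H(x) = F(x) + x

# Example: one block applied to a 56x56 feature map
print(Bottleneck(64, 64)(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])
```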

ResNet has many variants with different numbers of layers, such as 18, 34, 50, 101, and 152. We chose ResNet-152 because it achieves the best accuracy among the ResNet variants. Fig 3 illustrates the basic architecture of ResNet-152 for ImageNet [26].

3. Results

For the convolutional neural network model, a total of 355 PET/CT images from 31 patients were used. The images consist of two classes: 107 (30.1%) belong to patients who had a complete response to neoadjuvant chemotherapy, and 248 (69.9%) belong to patients who did not. In the experiments, the data set was divided into 80% training and 20% testing, and the ResNet-152 architecture was trained and tested using five-fold cross-validation. Experiments were carried out in a cloud environment using the Python programming language on a Tesla K80 graphics card. The fast.ai library was used for hyperparameter selection and network setup.
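A condensed sketch of this setup is shown below using torchvision and scikit-learn rather than the fast.ai library that was actually used in the study. The label vector only reproduces the reported class counts, and data loading and the training loop are omitted; it is an illustrative assumption, not the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.model_selection import StratifiedKFold

# Image-level labels reproducing the class counts reported above:
# 107 slices from complete responders (1) and 248 from non-responders (0).
labels = [1] * 107 + [0] * 248

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(labels, labels), start=1):
    # ImageNet-pretrained ResNet-152 with a new two-class output layer (torchvision >= 0.13 API)
    model = models.resnet152(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    print(f"fold C{fold}: {len(train_idx)} training / {len(test_idx)} test images")
    # ... build DataLoaders for the cropped JPEG slices using train_idx / test_idx,
    #     train the model, and evaluate sensitivity, specificity, F-score and AUC ...
```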

We performed five-fold cross-validation to improve the reliability of the ResNet-152 model. The sensitivity, specificity, and F-score values obtained from the test set for each fold are presented in Table 2. C1, C2, C3, C4, and C5 represent the training-testing groups, respectively. The "# of Samples" column shows the number of images belonging to each class in the relevant group.

As can be observed in Table 2, weighted-average sensitivity, specificity, and F-score values of 87% were obtained for the experiment performed on the C1 group. In the C2 group, the weighted averages were 86% sensitivity and specificity with an 85% F-score, and in the C3 group, 81% sensitivity, 80% specificity, and an 81% F-score. The best values of the study, 92% sensitivity, specificity, and F-score, were obtained in the C4 group. In the C5 group, weighted-average values of 80% sensitivity, 79% specificity, and an 80% F-score were obtained.
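Weighted-average metrics of the kind reported in Table 2 can be computed from a fold's predictions along the following lines with scikit-learn. The toy label and prediction vectors below are synthetic and only mimic the study's class imbalance; they are not the study's test-set outputs.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, recall_score

def fold_metrics(y_true, y_pred):
    """Weighted-average sensitivity, specificity and F-score for one fold."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = recall_score(y_true, y_pred, average="weighted")
    # The specificity of one class is the recall of the other; weight by class support
    spec_class0, spec_class1 = tp / (tp + fn), tn / (tn + fp)
    support = np.bincount(y_true, minlength=2)
    specificity = (spec_class0 * support[0] + spec_class1 * support[1]) / support.sum()
    f_score = f1_score(y_true, y_pred, average="weighted")
    return sensitivity, specificity, f_score

# Toy fold mimicking the study's class imbalance (0 = residual disease, 1 = pCR)
y_true = np.array([0] * 50 + [1] * 21)
y_pred = np.array([0] * 45 + [1] * 5 + [1] * 17 + [0] * 4)
print(["%.2f" % m for m in fold_metrics(y_true, y_pred)])
```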

Examination of the results shows that the classification model performed well on each training-testing group (Fig 4).

Fig 4. The classification performance using AUC (area under the curve) and accuracy analysis for each group.

https://doi.org/10.1371/journal.pone.0290543.g004

The performance of the classification model was measured at 84% accuracy, 90% AUC score, 85% sensitivity, 84% specificity and 85% F-score when averaged using five-fold cross-validation (Table 3).

ROC curves showing the true positive rate against the false positive rate for the C1, C2, C3, C4, and C5 experimental groups are shown in Fig 5. In the classification performed on the test sets in these experiments, AUC scores of 86%, 89%, 90%, 98%, and 89% were obtained for C1, C2, C3, C4, and C5, respectively, and a 90% AUC score was obtained on average (Fig 4).
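The per-fold ROC curves and AUC scores shown in Figs 4 and 5 correspond to standard computations of the kind sketched below. The scores here are synthetic placeholders standing in for the softmax probability of the pCR class; they are not the study's outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic scores standing in for the softmax probability of the pCR class in one fold
rng = np.random.default_rng(0)
y_true = np.array([0] * 50 + [1] * 21)
y_score = np.clip(np.where(y_true == 1, 0.75, 0.25) + rng.normal(0, 0.2, y_true.size), 0, 1)

fpr, tpr, _ = roc_curve(y_true, y_score)                 # points of one ROC curve (Fig 5)
print("AUC for this fold: %.2f" % roc_auc_score(y_true, y_score))
# Averaging the five per-fold AUC values gives the overall score reported above.
```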

Fig 5. Receiver operating characteristic (ROC) curves of ResNet-152 on 18F-FDG PET/CT images for each cross-validation group.

https://doi.org/10.1371/journal.pone.0290543.g005

4. Discussion

The experimental results show that our proposed deep neural network model achieved remarkable success, with an accuracy of 85%, in predicting pathological complete response to neoadjuvant chemotherapy from fused 18F-FDG PET/CT images. A total of 355 18F-FDG PET/CT images were obtained from 31 locally advanced breast cancer patients, and the tumor regions were cropped by specialist physicians.

Artificial intelligence has been used in many studies of diagnosis, staging, and response to chemotherapy. The literature shows that deep learning methods have achieved significant improvements in medical image analysis. A three-dimensional deep convolutional neural network (3D DCNN) model was used to distinguish benign pleural disease from malignant pleural mesothelioma on FDG PET/CT images; together with clinical features, the model achieved an accuracy of 82.4% and an AUC of 0.896 [27].

Antunovic et al. [11] predicted pCR to NAC in breast cancer with AUC values from 0.70 to 0.73 using a multiple regression model. Li et al. [28] reported an accuracy of 0.857 (AUC = 0.844) on the training split and 0.767 (AUC = 0.722) on the independent validation set when assessing the impact of radiomic features on pCR to NAC using 18F-FDG PET/CT images in breast cancer. When patient age was combined with PET, the accuracy of the training split increased to 0.857 (AUC = 0.958) and the accuracy of the independent validation set increased to 0.8 (AUC = 0.73). They used unsupervised and supervised machine learning methods to select the most important features for their model. Hwang et al. [29] conducted a study to detect axillary lymph node metastasis and obtained accuracies of 77.1% with ultrasound images, 77.9% with MRI images, and 81.1% with PET/CT images; our proposed model outperforms these with an accuracy of 85% on PET/CT images. Choi et al. examined pCR to NAC using CNNs with PET/CT and PET/MR images in advanced breast cancer patients. Compared with SUVmax values, their model increased the AUC from 0.652 to 0.886 and the accuracy from 84% to 97% at baseline, and the AUC from 0.687 to 0.980 and the accuracy from 70% to 95% at the interim evaluation [8].

Despite the strengths of the current study, some limitations should be taken into account when considering the results. First, as with any retrospective study, there is an inherent risk of bias; although patients were selected from retrospective records, the PET images were reprocessed for this study, and the design is that of a cohort study. Second, no power calculation was performed to estimate the sample size; all patients who matched the inclusion criteria were included. The addition of other indices such as SUVmax, SUVmean, TLG, MTV, T and N stage, and HU would likely improve the analysis. The last, and perhaps most important, limitation is that the study was conducted with a rather small number of patients. Future prospective studies with larger databases are needed to further validate and investigate our results.

5. Conclusions

We examined the pathological complete response to neoadjuvant chemotherapy of locally advanced breast cancer patients using a deep convolutional neural network model on fused 18F-FDG PET/CT images. Our proposed ResNet-152 deep convolutional neural network architecture achieved remarkable classification performance, with an accuracy of 84.79%. In conclusion, deep convolutional neural networks can have a significant impact on the analysis of 18F-FDG PET/CT images.

References

  1. Mauri D., Pavlidis N., and Ioannidis J. P., "Neoadjuvant versus adjuvant systemic treatment in breast cancer: a meta-analysis," Journal of the National Cancer Institute, vol. 97, no. 3, pp. 188–194, 2005. pmid:15687361
  2. Cortazar P. et al., "Pathological complete response and long-term clinical benefit in breast cancer: the CTNeoBC pooled analysis," The Lancet, vol. 384, no. 9938, pp. 164–172, 2014. pmid:24529560
  3. Liedtke C. et al., "Response to neoadjuvant therapy and long-term survival in patients with triple-negative breast cancer," Journal of Clinical Oncology, vol. 26, no. 8, pp. 1275–1281, 2008. pmid:18250347
  4. Von Minckwitz G. et al., "Definition and impact of pathologic complete response on prognosis after neoadjuvant chemotherapy in various intrinsic breast cancer subtypes," Journal of Clinical Oncology, vol. 30, no. 15, pp. 1796–1804, 2012. pmid:22508812
  5. Zhang F. et al., "A nomogram to predict the pathologic complete response of neoadjuvant chemotherapy in triple-negative breast cancer based on simple laboratory indicators," Annals of Surgical Oncology, vol. 26, no. 12, pp. 3912–3919, 2019. pmid:31359285
  6. Luo G. et al., "Blood neutrophil–lymphocyte ratio predicts survival in patients with advanced pancreatic cancer treated with chemotherapy," Annals of Surgical Oncology, vol. 22, no. 2, pp. 670–676, 2015. pmid:25155401
  7. Groheux D., Mankoff D., Espié M., and Hindié E., "18F-FDG PET/CT in the early prediction of pathological response in aggressive subtypes of breast cancer: review of the literature and recommendations for use in clinical trials," European Journal of Nuclear Medicine and Molecular Imaging, vol. 43, no. 5, pp. 983–993, 2016.
  8. Choi J. H. et al., "Early prediction of neoadjuvant chemotherapy response for advanced breast cancer using PET/MRI image deep learning," Scientific Reports, vol. 10, no. 1, pp. 1–11, 2020.
  9. Kang H., Lee H. Y., Lee K. S., and Kim J.-H., "Imaging-based tumor treatment response evaluation: review of conventional, new, and emerging concepts," Korean Journal of Radiology, vol. 13, no. 4, pp. 371–390, 2012. pmid:22778559
  10. Kelloff G. J. et al., "Progress and promise of FDG-PET imaging for cancer patient management and oncologic drug development," Clinical Cancer Research, vol. 11, no. 8, pp. 2785–2808, 2005. pmid:15837727
  11. Antunovic L. et al., "PET/CT radiomics in breast cancer: promising tool for prediction of pathological response to neoadjuvant chemotherapy," European Journal of Nuclear Medicine and Molecular Imaging, vol. 46, no. 7, pp. 1468–1477, 2019. pmid:30915523
  12. Groheux D. et al., "18F-FDG PET/CT for the early evaluation of response to neoadjuvant treatment in triple-negative breast cancer: influence of the chemotherapy regimen," Journal of Nuclear Medicine, vol. 57, no. 4, pp. 536–543, 2016.
  13. Akdeniz N. et al., "The role of basal 18F-FDG PET/CT maximum standard uptake value and maximum standard uptake change in predicting pathological response in breast cancer patients receiving neoadjuvant chemotherapy," Nuclear Medicine Communications, vol. 42, no. 3, pp. 315–324, 2021. pmid:33315727
  14. Krizhevsky A., Sutskever I., and Hinton G. E., "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, pp. 1097–1105, 2012.
  15. Agnes S. A., Anitha J., Pandian S. I. A., and Peter J. D., "Classification of mammogram images using multiscale all convolutional neural network (MA-CNN)," Journal of Medical Systems, vol. 44, no. 1, pp. 1–9, 2020.
  16. Gour M., Jain S., and Sunil Kumar T., "Residual learning based CNN for breast cancer histopathological image classification," International Journal of Imaging Systems and Technology, vol. 30, no. 3, pp. 621–635, 2020.
  17. Zhang H., Han L., Chen K., Peng Y., and Lin J., "Diagnostic efficiency of the breast ultrasound computer-aided prediction model based on convolutional neural network in breast cancer," Journal of Digital Imaging, vol. 33, pp. 1218–1223, 2020. pmid:32519253
  18. Lu Z. et al., "Deep-learning–based characterization of tumor-infiltrating lymphocytes in breast cancers from histopathology images and multiomics data," JCO Clinical Cancer Informatics, vol. 4, pp. 480–490, 2020. pmid:32453636
  19. Yamashita R., Nishio M., Do R. K. G., and Togashi K., "Convolutional neural networks: an overview and application in radiology," Insights into Imaging, vol. 9, no. 4, pp. 611–629, 2018. pmid:29934920
  20. Ren T. et al., "Convolutional neural network detection of axillary lymph node metastasis using standard clinical breast MRI," Clinical Breast Cancer, vol. 20, no. 3, pp. e301–e308, 2020. pmid:32139272
  21. Kirienko M., Biroli M., Gelardi F., Seregni E., Chiti A., and Sollini M., "Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand?," Clinical and Translational Imaging, pp. 1–19, 2021.
  22. Kim E. J., Park H. S., Kim J. Y., Kim S. I., Cho Y.-U., and Park B.-W., "Assessment of the prognostic staging system of American Joint Committee on Cancer 8th Edition for breast cancer: comparisons with the conventional anatomic staging system," Journal of Breast Cancer, vol. 23, no. 1, pp. 59–68, 2020. pmid:32140270
  23. Bulut G. and Ozdemir Z. N., "Significance of neutrophil-lymphocyte ratio and thrombocyte-lymphocyte ratio in predicting complete pathological response in patients with local advanced breast cancer," EJMI, vol. 6, no. 1, pp. 78–83, 2022.
  24. Mustra M., Delac K., and Grgic M., "Overview of the DICOM standard," in 2008 50th International Symposium ELMAR, 2008, vol. 1: IEEE, pp. 39–44.
  25. Long J., Shelhamer E., and Darrell T., "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
  26. He K., Zhang X., Ren S., and Sun J., "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  27. Kitajima K. et al., "Deep learning with deep convolutional neural network using FDG-PET/CT for malignant pleural mesothelioma diagnosis," Oncotarget, vol. 12, no. 12, p. 1187, 2021. [Online]. Available: https://www.oncotarget.com/article/27979/pdf/. pmid:34136087
  28. Li P. et al., "18F-FDG PET/CT radiomic predictors of pathologic complete response (pCR) to neoadjuvant chemotherapy in breast cancer patients," European Journal of Nuclear Medicine and Molecular Imaging, vol. 47, no. 5, pp. 1116–1126, 2020. pmid:31982990
  29. Hwang S. O., Lee S.-W., Kim H. J., Kim W. W., Park H. Y., and Jung J. H., "The comparative study of ultrasonography, contrast-enhanced MRI, and 18F-FDG PET/CT for detecting axillary lymph node metastasis in T1 breast cancer," Journal of Breast Cancer, vol. 16, no. 3, pp. 315–321, 2013. pmid:24155761