Abstract
The utilization of artificial intelligence (AI) is expanding significantly within medical research and, to some extent, in clinical practice. Deep learning (DL) applications, which use large convolutional neural networks (CNN), hold considerable potential, especially in optimizing radiological evaluations. However, training DL algorithms to clinical standards requires extensive datasets, and their processing is labor-intensive. In this study, we developed an annotation tool named DLLabelsCT that utilizes CNN models to accelerate the image analysis process. To validate DLLabelsCT, we trained a CNN model with a ResNet34 encoder and a UNet decoder to segment the pancreas on an open-access dataset and used the DL model to assist in annotating a local dataset, which was further used to refine the model. DLLabelsCT was also tested on two external testing datasets. The tool accelerates annotation by 3.4 times compared to a completely manual annotation method. Out of 3,715 CT scan slices in the testing datasets, 50% did not require editing when reviewing the segmentations made by the ResNet34-UNet model, and the mean and standard deviation of the Dice similarity coefficient was 0.82±0.24. DLLabelsCT is highly accurate and significantly saves time and resources. Furthermore, it can be easily modified to support other deep learning models for other organs, making it an efficient tool for future research involving larger datasets.
Citation: Mustonen H, Isosalo A, Nortunen M, Nevalainen M, Nieminen MT, Huhta H (2024) DLLabelsCT: Annotation tool using deep transfer learning to assist in creating new datasets from abdominal computed tomography scans, case study: Pancreas. PLoS ONE 19(12): e0313126. https://doi.org/10.1371/journal.pone.0313126
Editor: Hadeel K. Aljobouri, Al-Nahrain University, IRAQ
Received: June 12, 2024; Accepted: October 19, 2024; Published: December 3, 2024
Copyright: © 2024 Mustonen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The annotation tool can be found at https://github.com/MIPT-Oulu/DLLabelsCT. Patient-related data are not available due to Finnish legislation.
Funding: HH: (grant no. 00210395) Finnish Culture Foundation https://skr.fi/en, (grant no. 202210072) Mary and George C. Ehrnroot Foundation https://marygeorg.fi/en/home/, (grant no. 5785) Finnish Medical Foundation https://laaketieteensaatio.fi/en/home/, Sigrid Jusélius Foundation https://www.sigridjuselius.fi/en/ AI: (grant no. 10221743) Finnish Culture Foundation https://skr.fi/en, Jane and Aatos Erkko Foundation https://jaes.fi/en/frontpage/, Technology Industries of Finland Centennial Foundation https://techfinland100.fi/en/, (grant no. 220106) Wihuri Foundation https://wihurinrahasto.fi/?lang=en MTN: Jane and Aatos Erkko Foundation https://jaes.fi/en/frontpage/, Technology Industries of Finland Centennial Foundation https://techfinland100.fi/en/ None of the funding sources had any involvement in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Artificial intelligence (AI) has shown promising results in the field of medical research and medical image analysis [1]. Deep learning (DL) is a subset of AI that involves the use of artificial neural networks to learn and recognize patterns in data, such as computed tomography (CT) and magnetic resonance imaging (MRI) scans, which can provide detailed information about the abdominal organs or the location and characteristics of tumors [2, 3]. To enhance the specificity and sensitivity of DL algorithms, for instance in cancer detection, it is crucial to have access to high-quality data representing different cancer types and stages [2]. This requires large datasets of medical images annotated by expert radiologists, which are vital for DL algorithms to learn accurately; producing such annotations is time-consuming and costly [2]. Annotation tools typically provide semi-automated segmentation methods, which are either based on previous annotations or require input from the user, such as selecting the region for the automatic segmentation method [4]. Fully automated segmentation methods do not necessarily require user input, which improves efficiency and is beneficial when working with large datasets.
Fully automated methods are the most effective when the segmented object differs heavily from its surroundings [5]. In CT scans, the surrounding organs have similar Hounsfield unit (HU) values, making automatic segmentation more difficult [5]. DL-based segmentation methods have proven very accurate for segmenting organs in abdominal CT scans [5]. There are only a few validated annotation tools that support DL methods, such as RIL-Contour, which supports deep learning models using Keras running on TensorFlow [6]. However, Philbrick et al. concentrated primarily on the mechanical aspects of the annotation tool rather than illustrating how a deep learning model could be trained to aid in the annotation process [6]. Training DL algorithms to clinical standards requires extensive datasets, and their processing is exceedingly time-consuming. Tools that automatically annotate organs or tumors from CT images can expedite the management of these large datasets. Yet, the availability of validated and published DL-based annotation tools remains limited.
The purpose of this study was to evaluate the efficacy of our newly developed deep learning-based annotation tool, named DLLabelsCT (Deep Learning Labels Computed Tomography), in accelerating the annotation of organs in CT scans compared with traditional manual annotation. DLLabelsCT is available from https://zenodo.org/records/10226990. The pancreas was selected as the organ for annotation because its segmentation is a difficult task due to the variation in parenchymal shape, density, contrast enhancement, and size within abdominal CT scans [7], making it a more challenging target for DL than, for instance, the liver.
Materials and methods
Datasets
Four independent CT scan datasets were used. The CT imaging in these datasets was performed in the portal venous phase, and the image size was 512-by-512 pixels with varying pixel spacings. The scans were originally in Digital Imaging and Communications in Medicine (DICOM) format. Windowing was performed on the DICOM images, with a window width $w_w$ of 400 HU and a window center $w_c$ of 50 HU, for improved contrast between the tissue types. Windowing sets the image's output values based on the following equation,
$$y = \frac{x - \left(w_c - \frac{w_w}{2}\right)}{w_w}\left(y_{max} - y_{min}\right) + y_{min} \tag{1}$$
where $x$ is the input value, $y$ is the output value, and $y_{min}$ and $y_{max}$ are the minimum and maximum possible outputs for the image format, i.e., for 16-bit images the values are 0 and 65535. Values outside the window were set to the minimum or maximum. The individual slices were then transformed from the DICOM format into 16-bit PNG images. The PNG images were then used to train the convolutional neural network (CNN) model.
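As an illustration of this preprocessing step, the following minimal Python sketch applies the windowing of Eq (1) to a single DICOM slice and saves it as a 16-bit PNG. The use of pydicom and Pillow, and the function name, are assumptions made for the example rather than the authors' implementation.

```python
import numpy as np
import pydicom
from PIL import Image

def window_slice_to_png(dicom_path, png_path, wc=50, ww=400):
    """Apply intensity windowing (Eq 1) and save the slice as a 16-bit PNG.

    wc: window center in HU, ww: window width in HU.
    Illustrative sketch, not the authors' code.
    """
    ds = pydicom.dcmread(dicom_path)
    # Convert stored pixel values to Hounsfield units.
    hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

    y_min, y_max = 0, 65535            # output range of a 16-bit image
    lower = wc - ww / 2                # lower edge of the window
    # Linear mapping of the window onto the output range (Eq 1);
    # values outside the window are clipped to y_min / y_max.
    out = (hu - lower) / ww * (y_max - y_min) + y_min
    out = np.clip(out, y_min, y_max).astype(np.uint16)

    Image.fromarray(out).save(png_path)
```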
Training dataset.
To train the DL-based segmentation method described in this study, an open-access National Institutes of Health Clinical Center dataset of contrast-enhanced CT scans from the Cancer Imaging Archive was utilized [8–10]. The dataset consisted of 80 abdominal CT scans from healthy subjects with annotations for the pancreas and is hereafter referred to as Pancreas-CT. The dataset contained a total of 18,942 axial slices, and the slice thickness of the scans was 1.5–2.5 mm. This dataset was divided into training and testing datasets, with the testing dataset containing 20% of the scans. The patients were randomly split into the datasets without any stratification. According to the Cancer Imaging Archive, the volumes were acquired using CT scanning systems by Philips (Amsterdam, Netherlands) and Siemens (Erlangen, Germany). The X-ray tube voltages of the scanners were 120 kVp. This dataset was initially downloaded in November 2022.
Validation dataset.
A dataset from Oulu University Hospital was used to further train the segmentation model and to validate our novel in-house annotation tool. We refer to this dataset later as the Oulu validation dataset. The Oulu validation dataset contained CT scans from 606 patients: 313 with a healthy pancreas, 218 with pancreatic cancer, and 75 with other pancreatic diseases, such as intraductal papillary mucinous neoplasms and chronic pancreatitis. The slice thickness in the Oulu validation dataset's scans varied between 0.625–7 mm. These scans contained a total of 96,429 axial slices. The CT scanners were manufactured by Toshiba (Minato, Tokyo, Japan), Philips (Amsterdam, Netherlands), Canon Medical Systems (Otawara, Tochigi, Japan), Siemens (Erlangen, Germany) and GE Medical Systems (Chicago, Illinois, United States). Tube voltages ranged between 80–140 kVp. This dataset was accessed and annotated between January and March 2023.
Testing datasets.
Two accessory datasets from Kuopio University Hospital (CT scans of 56 patients) and Turku University Hospital (CT scans of eight patients) were combined to form the external testing dataset for assessing the performance of the annotation tools. All CT scans were of patients with pancreatic cancer. The dataset had a total of 26,154 axial slices, of which 12,227 were in the desired portal venous phase. The slice thickness in these scans was 1–5 mm. The CT scanners were made by Toshiba (Minato, Tokyo, Japan), GE Medical Systems (Chicago, Illinois, United States) and Siemens (Erlangen, Germany). Tube voltages were 80–120 kVp. These datasets were accessed and annotated between April and May 2023.
The second testing dataset was from Oulu University Hospital, containing 70 CT scans of patients with pancreatic cancer that had not been used in the training dataset. We refer to this dataset later as the Oulu testing dataset. This dataset had a total of 17,558 axial slices with slice thicknesses of 0.5–5 mm. The CT scanners were made by Toshiba (Minato, Tokyo, Japan), GE Medical Systems (Chicago, Illinois, United States), Siemens (Erlangen, Germany), Philips (Amsterdam, Netherlands) and Canon Medical Systems (Otawara, Tochigi, Japan). Tube voltages were 80–120 kVp. The testing datasets were each divided into two parts, with each part annotated using a different tool (see sections Annotation tools and Method evaluation). This testing dataset was accessed and annotated after the Oulu validation dataset, between March and April 2023.
Deep learning model
A 2-D CNN model with an encoder-decoder architecture was trained and used to segment the pancreas from the CT scans. The architecture utilizes a downsampling encoder that provides features for the upsampling decoder, which produces a prediction for each pixel in the image (Fig 1). The model employs a ResNet34 [11] encoder that had been pretrained on the ImageNet [12] dataset, combined with a U-Net [13] decoder with randomly initialized weights. We refer to the model later as ResNet34UNet. This model was used because it provided good results on breast mass segmentation in our previous study [14].
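A minimal sketch of an equivalent encoder-decoder is shown below. The use of the segmentation_models_pytorch package is an assumption made for illustration; the paper does not state how ResNet34UNet was implemented.

```python
import torch
import segmentation_models_pytorch as smp

# Sketch of a ResNet34 encoder (pretrained on ImageNet) combined with a
# U-Net decoder, assuming the segmentation_models_pytorch package.
model = smp.Unet(
    encoder_name="resnet34",      # ResNet34 encoder
    encoder_weights="imagenet",   # ImageNet-pretrained encoder weights
    in_channels=1,                # single-channel CT slice
    classes=1,                    # binary pancreas mask
)

with torch.no_grad():
    logits = model(torch.randn(1, 1, 512, 512))   # one 512-by-512 slice
    mask = torch.sigmoid(logits) > 0.5            # boolean segmentation mask
```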
Model training.
Initial training of the ResNet34UNet model was conducted on the open Pancreas-CT dataset. The training was implemented in a fully supervised manner by using the pixel-accurate annotations provided with the dataset. The training was performed on the training part of the Pancreas-CT dataset, including slices that did not contain pancreas.
Five-fold cross-validation with non-overlapping groups was used, with the groups generated with Scikit-learn's [15] GroupKFold function. The model was trained for 50 epochs with a batch size of 16. The loss function utilized during training was the Focal Tversky loss (FT) [16, 17], defined as
$$FT = \left(1 - \frac{TP}{TP + \alpha\,FN + \beta\,FP}\right)^{\gamma} \tag{2}$$
where TP is the number of true positive predictions, FP is the number of false positive predictions and FN is the number of false negative predictions; the α parameter is 0.7, β is 0.3 and γ is 0.75. Multiple data augmentations were used during training (Table 1). The augmentations were retrieved from the Streaming Over Lightweight Transformations (SOLT) [18] library version 0.1.8, and each of the augmentations had a 50% chance of occurring. Adam [19] was used as the optimizer during the training, with a multi-step learning rate scheduler. The initial learning rate and weight decay were set to 10⁻⁴. The learning rate was reduced by a factor of 0.1 after 20, 30 and 40 epochs. The computer used had an Nvidia GeForce RTX 3090 graphics processing unit (GPU), an AMD Ryzen 9 5950 16-core central processing unit (CPU) and 64 GB of 2400 MHz random-access memory (RAM). The computer's OS was Ubuntu 20.04.1 and the Python version used was 3.10.10 with PyTorch [20] 1.13.1 and CUDA version 11.6. With this hardware, the initial training took 24 hours.
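The training configuration described above can be sketched as follows. The loss implementation operates on soft predictions and is our interpretation of Eq (2); the placeholder model, variable names, and fold-splitting call are assumptions for the example, not the authors' training code.

```python
import torch
import torch.nn as nn
from sklearn.model_selection import GroupKFold

def focal_tversky_loss(probs, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss (Eq 2) computed on soft predictions in [0, 1]."""
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    tversky = tp / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

# Five-fold cross-validation with non-overlapping, patient-wise groups.
gkf = GroupKFold(n_splits=5)
# folds = list(gkf.split(slice_ids, groups=patient_ids))  # slice_ids/patient_ids are placeholders

# Optimizer and learning-rate schedule as described in the text.
model = nn.Conv2d(1, 1, kernel_size=1)  # stand-in for the ResNet34UNet model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 30, 40], gamma=0.1)
```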
After training the ResNet34UNet model, it was used to provide initial annotations for the Oulu validation dataset. These initial annotations were then reviewed by our medical professionals (H.H, M.N and M.N). This reviewed dataset was then combined with the Pancreas-CT dataset to train a new ResNet34UNet model, using the same training arguments as for the initial model. To reduce training time, the number of images in the combined dataset was reduced by removing slices not containing the pancreas. The resulting training dataset contained 31,634 slices. The model training and application process is summarized in Fig 2.
Annotation tools
Two different annotation tools were used and compared in the study. Our novel in-house tool DLLabelsCT, which uses DL to assist in annotating (Fig 3), was developed for the current study using Python (version 3.10.10). Additionally, a previously developed MATLAB-based (2020a, Natick, MA, United States) tool named MammogramAnnotationTool [21] without any intelligent features (Fig 4) was used. This tool was modified to support CT scans and was initially used in annotating the datasets; the modified tool was renamed CTAnnotationTool. DLLabelsCT was developed to speed up the annotation process. It combines segmenting CT scans with a PyTorch-based CNN and annotating the images manually. It uses a PyQt (2022, Dorchester, United Kingdom) based interface for annotating. PyQt was selected because it uses Python, as does PyTorch, and it is simple to use and understand. DLLabelsCT saves the annotations and the individual axial slices as PNG images, which can then be used directly as data in our DL model's training pipeline. The tool supports segmentation models with ResNet encoders and UNet or Feature Pyramid Network (FPN) [22] decoders.
Fig 3. The shown masks can be modified, and new labels added from the menu on the right.
Fig 4. The study can be changed from the window on the left and labels can be added on the CT scan in the window to the right.
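To illustrate how a model's output could be exported in the PNG format that the training pipeline reads, a short sketch is given below; the function name, the 0/255 mask encoding, and the per-slice workflow are assumptions for the example, not necessarily the format DLLabelsCT uses internally.

```python
import numpy as np
import torch
from PIL import Image

def save_predicted_mask(model, slice_tensor, mask_path, threshold=0.5):
    """Run a segmentation model on one CT slice and save the boolean mask as a PNG.

    slice_tensor: tensor of shape (1, 1, H, W).
    Illustrative sketch; the 0/255 encoding is an assumption.
    """
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(slice_tensor))    # per-pixel probabilities
    mask = probs[0, 0].cpu().numpy() > threshold      # boolean pancreas mask
    Image.fromarray((mask * 255).astype(np.uint8)).save(mask_path)
```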
Method evaluation
The external testing dataset and the Oulu testing dataset were used for testing the accelerating effect of our fully automated segmentation tool. Experienced, independent, and mutually blinded pancreatic surgeons (H.H for the external and M.N. for the Oulu testing dataset) provided annotations using both the conventional CTAnnotationTool and the novel fully automated DLLabelsCT. The CTAnnotationTool was used to annotate the pancreas of 32 patients from the external testing dataset and 31 patients from the Oulu testing dataset. The rest of the patients, 32 from the external testing dataset and 39 from the Oulu testing dataset, were annotated using DLLabelsCT with the initial DL model segmentations. The initial segmentations were reviewed with a similar method by both reviewers: the segmentation had to cover the entire pancreas, and if it did not, it was corrected. The number of annotated slices and the time required for annotation were recorded for all patients. Additionally, we assessed the extent of revisions needed to the segmentations made by the ResNet34UNet model. The Dice similarity coefficient (DSC), defined as
$$DSC = \frac{2\,TP}{2\,TP + FP + FN} \tag{3}$$
where TP is the number of true positive predictions, FP is the number of false positive predictions and FN is the number of false negative predictions, was calculated between the boolean masks made by the model and the segmentations modified with the fully automated segmentation tool.
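A straightforward sketch of this computation over two boolean masks is shown below; the function name is a placeholder for illustration.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (Eq 3) between two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 1.0   # two empty masks agree perfectly
```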
Ethical statement
This study complied with the ethical standards of the Oulu University Institutional Research Committee and the Declaration of Helsinki (as revised in 2013). Data collection from included hospitals was approved by the Finnish Social and Health Data Permit Authority (FINDATA, Dnro THL/3606/14.02.00/2020). Individual patient consent for this retrospective analysis was waived. Additionally, ethical permission from the local wellbeing services county (Pohde) and ethical permission EETMK: 81 / 2008 were received.
Results
The ResNet34UNet model trained only on the training subset of Pancreas-CT had lower segmentation accuracy than the ResNet34UNet model trained on both the training subset of Pancreas-CT and the Oulu validation dataset (Table 2, Fig 5). The segmentation accuracy was higher on the external testing dataset than on the Oulu testing dataset (Table 2). The pancreas detection results (whether the model's segmentation mask and the final annotation mask overlap) for each individual slice in the CT scan were high for both models; the ResNet34UNet model trained on both the training subset of Pancreas-CT and the Oulu validation dataset had the higher accuracy on the testing subset of Pancreas-CT, and the accuracy was higher on the external testing dataset than on the Oulu testing dataset (Table 3).
Model 1 is the ResNet34UNet model trained on the training subset of the Pancreas-CT dataset and Model 2 is the ResNet34UNet model trained on both the training subset of the Pancreas-CT dataset and Oulu validation dataset.
Model 2 was used in segmenting the external testing dataset and Oulu testing dataset. Results presented as mean±standard deviation.
Model 2 was used in segmenting the external testing dataset and Oulu testing dataset. Results presented as mean±standard deviation.
Annotating the datasets was faster with DLLabelsCT than with CTAnnotationTool, even though the dataset annotated with DLLabelsCT contained more slices (Tables 4 and 5). The ResNet34UNet model's segmentations made annotating 3.4 times faster on average (Tables 4 and 5). In the external testing dataset, 62% of the model segmentations were accepted without revision, while 44% of the segmentations in the Oulu testing dataset needed no adjusting. In total, 50% of the segmentations did not need adjusting.
Discussion
The current study demonstrates that it is possible to create a highly accurate DL-based fully automated segmentation tool for annotating organs from abdominal CT scans with a relatively small amount of data. Furthermore, we show that the use of fully automated software saves a significant amount of time, making it cost-effective.
The DL segmentation method used here achieves a higher DSC than the initial method proposed with the Pancreas-CT dataset [7]: the ResNet34UNet model reached a mean DSC of 0.82, compared with a mean DSC of 0.72 for the previously proposed method. Newer DL-based segmentation methods have achieved a higher DSC than ours, with one DL segmentation method reaching a mean DSC of 0.90 on the Pancreas-CT dataset [23]. The DL method used in our study serves mainly as an example of the potential of DL, and DLLabelsCT can be easily modified to support different PyTorch-based segmentation models.
DL neural networks are under intensive research and have a wide range of possible applications in medical research. For instance, genomics, metagenomics, and histological and radiological image recognition benefit from the evolution of AI [23]. Recently, various studies have shown promising results from DL networks for pancreas segmentation. However, prior DL-based pancreatic segmentation studies utilized the Pancreas-CT dataset (n = 82). While the previously suggested DL networks attained commendable performance for pancreas segmentation (mean DSC of 0.866 and 0.854), the available data remain insufficient to establish the reliability of these networks, since DL-based medical image segmentation is highly dependent on the number of data points [24, 25].
To the best of our knowledge, the only study conducted with a relatively large amount of data (1,006 patients) for segmentation of the normal healthy pancreas is a Korean study, whose results (DSC 0.84) closely aligned with our own (0.82) [26]. It is important to note, however, that 43% of the patients in our dataset had pathological pancreases. Additionally, our segmentation method achieved a DSC of 0.80 within the Oulu testing dataset, which exclusively comprises patients diagnosed with pancreatic ductal adenocarcinoma, a disease that significantly alters the normal shape, size, and volume of the pancreatic parenchyma [27]. Previous studies have shown more modest performance in similar conditions [28]. Promising results have also been achieved using neural networks for cancer detection and disease prognosis classification [29, 30]. Using a CNN, small pancreatic cancers with a diameter of less than 2 cm were identified from CT images with the same or even better accuracy than radiology specialists [31].
DL methods can give more reliable outputs than other segmentation methods, such as active contours, region growing, and histogram-based methods, since the DL models learn the varying shapes and contrast of the intended target [32]. DL methods do not require any additional input from the user, and the initial annotation masks can be created without the user being present. The disadvantages of DL are the long training time and the large computational requirements.
Developing DL applications for pre-operative assessment, for instance of pancreatic tumors, requires large patient cohorts to capture disease variations [33]. Challenges in developing algorithms from large nationwide datasets include varying imaging equipment, contrast agent concentrations, and slice thicknesses [34]. The algorithm must also learn patient-specific factors such as comorbidities, age, body composition, and circulatory issues to function effectively [34]. Given resource limitations, manual annotation of massive datasets is unfeasible. Automated DL annotation tools like DLLabelsCT are necessary for processing these datasets. Validation, reporting, and publication of these tools are crucial for assessing the quality and reliability of the data processed. In our study, the in-house tool DLLabelsCT was 3.4 times faster than manual annotation, even with pancreatic cancer cases.
There are some limitations to this study. First, the small number of CT scans used in evaluating differences between the annotation tools causes bias and could hamper the comparison. This was compensated for by having data from multiple sources with different types of data, which contributes to a better generalizability of the developed model. Similarly, data from various CT scanners and sources were used in training the DL model. A technical limitation is that running DL at a reasonable speed requires a compatible GPU. Nonetheless, DLLabelsCT can be used without a GPU, and the model's segmentation masks can be provided from a separate computer with a GPU.
Conclusion
The results demonstrate that our DL-based fully automated segmentation and annotation tool DLLabelsCT for the pancreas is highly accurate and significantly saves time and resources. Moreover, it could easily be modified to detect other organs with a small amount of data, making it an efficient tool for future research with larger datasets. The annotation tool is publicly available at https://zenodo.org/doi/10.5281/zenodo.10226989.
Acknowledgments
Special thanks to Esa Liukkonen and Anne Kukkonen for their excellent technical assistance with data facilitation.
References
- 1. Montagnon E, Cerny M, Cadrin-Chênevert A, Hamilton V, Derennes T, Ilinca A, et al. Deep learning workflow in radiology: A primer. Insights Imaging. 2020 Feb; 11(22). pmid:32040647
- 2. Mazurowski MA, Buda M, Saha A, Bashir MR. Deep learning in radiology: An overview of the concepts and a survey of the state of the art with focus on MRI. J. Magn. Reason. Imaging. 2019 Apr; 49(4):939–954. pmid:30575178
- 3. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436–444. pmid:26017442
- 4. Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: A survey. Multimed. Tools Appl. 2022 Jul;81(18):25877–25911. pmid:35350630
- 5. Gibson E, Giganti F, Hu Y, Bonmati E, Bandula S, Gurusamy K, et al. Automatic multi-organ segmentation on abdominal CT with dense V-networks. IEEE Trans. Med. Imaging. 2018 Aug;37(8):1822–1834. pmid:29994628
- 6. Philbrick KA, Weston AD, Akkus Z, Kline TL, Korfiatis P, Sakinis T, et al. RILContour: A medical imaging dataset annotation tool for and with deep learning. J. Digit. Imaging. 2019 Aug;32(4):571–581. pmid:31089974
- 7. Roth H, Farag A, Lu L, Turkbey EB, Summers RM. Deep convolutional networks for pancreas segmentation in CT imaging. Proc. SPIE 9413, Medical Imaging 2015: Image Processing, 94131G.
- 8. Roth H, Farag A, Turkbey EB, Lu L, Liu J, Summers RM. 2016. Data from Pancreas-CT (Version 2) [Data set]. The Cancer Imaging Archive.
- 9. Roth H, Lu L, Farag A, Shin H, Liu J, Turkbey EB, et al. DeepOrgan: Multi-level deep convolutional networks for automated pancreas segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Lecture Notes in Computer Science 2015;9349, 556–664.
- 10. Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging. 2013 Dec;26(6):1045–1057. pmid:23884657
- 11. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778. https://doi.org/10.1109/CVPR.2016.90
- 12. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015 Dec;115(3):211–52.
- 13. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Lecture Notes in Computer Science 2015;9351, 234–241.
- 14. Isosalo A, Mustonen H, Turunen T, Ipatti PS, Reponen J, Nieminen MT, et al. Evaluation of different convolutional neural network encoder-decoder architectures for breast mass segmentation. Proc. SPIE 12037, Medical Imaging 2022: Imaging Informatics for Healthcare, Research, and Applications, 120370W. https://doi.org/10.1117/12.2628190
- 15. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research. 2011;12:2825–2830. URL: https://dl.acm.org/doi/ pmid:34820480
- 16. Abraham N, Khan NM. A novel focal Tversky loss function with improved attention U-net for lesion segmentation. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI), 683–687. https://doi.org/10.1109/ISBI.2019.8759329
- 17. Tversky A. Features of similarity. Psychological review. 1977;84(4):327–352.
- 18. Tiulpin A. Solt: Streaming over lightweight transformations [Computer software]. (2019).
- 19. Kingma DP, Ba J. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2014. URL: https://arxiv.org/abs/1412.6980v5
- 20. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 2019;32:8024–8035. URL: https://dl.acm.org/doi/
- 21. Isosalo A, Inkinen S, Heino H, Turunen T, Nieminen M. MammogramAnnotationTool: Markup tool for breast tissue abnormality annotation. Softw. Impacts, 2024;19, 100599.
- 22. Lin TY, Dollar P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017, 2117–2125. https://doi.org/10.1109/CVPR.2017.106
- 23. Dai S, Zhu Y, Jiang X, Yu F, Lin J, Yang D. TD-Net: Trans-deformer network for automatic pancreas segmentation. Neurocomputing. 2023 Jan;517:279–293.
- 24. Yan Y, Zhang D. Multi-scale U-like network with attention mechanism for automatic pancreas segmentation. PLOS ONE. 2021 May 27;16(5):e0252287. pmid:34043732
- 25. Li J, Lin X, Che H, Li H, Qian X. Pancreas segmentation with probabilistic map guided bi-directional recurrent UNet. Phys Med Biol. 2021 Jun 7;66(11):115010. pmid:33915526
- 26. Lim S, Kim YJ, Park Y, Kim D, Kim KG, Lee D. Automated pancreas segmentation and volumetry using deep neural network on computed tomography. Sci. Rep. 2022 Mar 8;12(4075). pmid:35260710
- 27. Lee ES. Imaging diagnosis of pancreatic cancer: A state-of-the-art review. WJG. 2014;20(24):7864–7877. pmid:24976723
- 28. Shen C, Roth HR, Hayashi Y, Oda M, Miyamoto T, Sato G, et al. A cascaded fully convolutional network framework for dilated pancreatic duct segmentation. Int. J. CARS. 2022 Feb;17(2):343–354. pmid:34951681
- 29. Bhinder B, Gilvary C, Madhukar NS, Elemento O. Artificial intelligence in cancer research and precision medicine. Cancer Discov. 2021 Apr 1;11(4):900–915. pmid:33811123
- 30. Janssen BV, Verhoef S, Wesdorp NJ, Huiskens J, de Boer OJ, Marquering H, et al. Imaging-based machine-learning models to predict clinical outcomes and identify biomarkers in pancreatic cancer. Ann. Surg. 2022 Mar;275(3):560–567. pmid:34954758
- 31. Chen P, Wu T, Wang P, Chang D, Liu K, Wu M, et al. Pancreatic cancer detection on CT scans with deep learning: A nationwide population-based study. Radiology. 2023 Jan;306(1):172–182. pmid:36098642
- 32. Kavur AE, Gezer NS, Baris M, Sahin Y, Ozkan S, Baydar B, et al. Comparison of semi-automatic and deep learning-based automatic methods for liver segmentation in living liver transplant donors. Diagn. Interv. Radiol. 2020 Jan 2;26(1):11–21. pmid:31904568
- 33. Hameed BS, Krishnan UM. Artificial intelligence-driven diagnosis of pancreatic cancer. Cancers. 2022 Oct 31;14(21):5382. pmid:36358800
- 34. Kumar Y, Koul A, Singla R, Ijaz MF. Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. J. Ambient Intell. Human Comput. 2023 Jul;14(7):8459–8486. pmid:35039756