
CT scan pancreatic cancer segmentation and classification using deep learning and the tunicate swarm algorithm

  • Hari Prasad Gandikota ,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    csedept.au@gmail.com

    Affiliation Department of Computer Science & Engineering, Annamalai University, Chidambaram, Tamilnadu, India

  • Abirami S.,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Computer Science & Engineering, Annamalai University, Chidambaram, Tamilnadu, India

  • Sunil Kumar M.

    Roles Data curation, Project administration, Software, Supervision, Validation, Visualization, Writing – review & editing

    Affiliation School of Computing, Mohan Babu University, Tirupati, Andhra Pradesh, India

Abstract

Pancreatic cancer (PC) is a very lethal disease with a low survival rate, making timely and accurate diagnoses critical for successful treatment. PC classification in computed tomography (CT) scans is a vital task that aims to accurately discriminate between tumorous and non-tumorous pancreatic tissues. CT images provide detailed cross-sectional images of the pancreas, which allows oncologists and radiologists to analyse the characteristics and morphology of the tissue. Machine learning (ML) approaches, together with deep learning (DL) algorithms, are commonly explored to improve and automate the performance of PC classification in CT scans. DL algorithms, particularly convolutional neural networks (CNNs), are broadly utilized for medical image analysis tasks, involving segmentation and classification. This study explores the design of a tunicate swarm algorithm with deep learning-based pancreatic cancer segmentation and classification (TSADL-PCSC) technique on CT scans. The purpose of the TSADL-PCSC technique is to design an effectual and accurate model to improve the diagnostic performance of PC. To accomplish this, the TSADL-PCSC technique employs a W-Net segmentation approach to define the affected region on the CT scans. In addition, the TSADL-PCSC technique utilizes the GhostNet feature extractor to create a group of feature vectors. For PC classification, the deep echo state network (DESN) model is applied in this study. Finally, the hyperparameter tuning of the DESN approach occurs utilizing the TSA which assists in attaining improved classification performance. The experimental outcome of the TSADL-PCSC method was tested on a benchmark CT scan database. The obtained outcomes highlighted the significance of the TSADL-PCSC technique over other approaches to PC classification.

1. Introduction

Currently, pancreatic cancer (PC) is among the most incurable and lethal diseases, and its survival rate has not yet improved significantly [1]. Magnetic resonance imaging (MRI)-guided radiotherapy is now used to treat the cancer, but anatomical changes, such as those caused by breathing, together with interpatient variability, complicate its delivery. Early and precise detection of PC therefore remains a challenge [2]. Improving early detection, early diagnosis, and early treatment is of utmost importance. Computer-aided diagnosis (CAD) systems have been devised for disease diagnosis alongside advancements in computer science and image processing technologies [3]. Radiologists commonly use CAD systems to enhance diagnostic accuracy, help detect and interpret disease, and reduce the burden on physicians. CAD methods based on deep neural networks (DNNs) have recently been developed to meet the growing demand for healthcare services [4]. The high mortality of PC has generated significant interest in developing effective treatments and CAD systems, for which accurate pancreatic segmentation is required. Hence, there is a need to develop new approaches for pancreatic segmentation, as segmentation of the pancreas in computed tomography (CT) remains challenging. The most significant element of a CAD system is image recognition [5]. The process of detecting adenocarcinomas has two stages: feature extraction and feature selection.

Recent advancements in deep learning (DL) have shown great potential in medical image analysis [6]. Earlier studies proved that a convolutional neural network (CNN) can precisely differentiate between PC and a noncancerous pancreas, but radiologists must still manually identify the pancreas before the CNN is applied [7]. Segmentation of the pancreas is challenging because it borders many structures and organs and varies in size and shape, particularly in patients with PC. Still, a medically applicable CAD tool must enable classification and segmentation (i.e., forecasting the absence or presence of PC) with minimal labour or human annotation [8]. DL approaches utilizing CNNs have proved highly capable of examining clinical images. The neural network (NN), built from neurons with activation functions and parameters, extracts and merges features in the images and establishes a method that captures intricate relationships between images and diagnoses [9]. In the imaging identification of conditions such as skin tumors, diabetic retinopathy (DR), and liver masses, CNNs achieve high performance; still, the potential advantages of CNNs for diagnosing PC have not been studied widely [10]. Typically, PC is indistinct at an early stage, which poses problems even for trained radiologists, as it presents with ill-defined margins and irregular contours on CT.

This study develops the tunicate swarm algorithm with deep learning-based pancreatic cancer segmentation and classification (TSADL-PCSC) technique on CT scans. The TSADL-PCSC technique aims to accomplish enhanced PC classification results using a hyperparameter-tuned DL model. Primarily, the TSADL-PCSC technique employs a W-Net segmentation approach to define the affected regions on the CT scans. Besides, the TSADL-PCSC technique utilizes the GhostNet feature extractor to generate a group of feature vectors. For PC classification, the deep echo state network (DESN) model is applied. Finally, the hyperparameters of the DESN approach are tuned using the TSA, which assists in attaining improved classification performance. The simulation results of the TSADL-PCSC algorithm are tested on a benchmark CT scan dataset.

2. Related works

Vaiyapuri et al. [11] present an intelligent DL-assisted decision-making medical system for PC classification (IDLDMS-PTC) on CT scans. The proposed algorithm develops an emperor penguin optimizer with multi-level thresholding (EPO-MLT) method for segmenting PC. Moreover, the MobileNet architecture is employed for feature extraction, with an optimal autoencoder (AE) for PC classification. The authors in [12] present and validate a DL architecture that integrates level-set and multi-atlas registration for the segmentation of PC from CT scans. The presented algorithm comprises three phases: coarse, fine, and refine. Initially, a coarse segmentation is attained using multi-atlas-based 3D diffeomorphic registration and fusion. Next, three 2D slice-based CNNs and a 3D patch-based CNN are utilized to predict a fine segmentation. Zhang et al. [13] introduced a DL algorithm for fully automated prediction of the preoperative pathological grading of PC. A DL approach for PC segmentation was first applied to obtain the lesion region. Next, the patients were divided into training, validation, and test sets. The features calculated from the lesion region fed a prediction method for PC pathological grade. Lastly, the model stability was confirmed by seven-fold cross-validation.

The authors in [14] designed an optimal DL-based PC and non-tumour classification (ODL-PTNTC) algorithm using CT images. The presented method exploits the adaptive window filtering (AWF) method for noise removal. Furthermore, the sailfish optimizer-based Kapur's thresholding (SFO-KT) method is used for segmentation. Besides, the Political Optimizer (PO) with a Cascade Forward NN (CFNN) is used for classification. Bagheri et al. [15] utilized a deep CNN (DCNN) for pancreas segmentation on an openly accessible dataset, and the accuracy of the segmentations was evaluated using the Dice similarity coefficient (DSC). Khdhir et al. [16] developed an ALO-CNN-GRU mechanism for the segmentation and classification of PC based on DL and CT images. The images undergo pre-processing for noise reduction, segmentation is performed with the Antlion Optimization (ALO) technique, and classification is carried out using CNN and Gated Recurrent Unit (GRU) models.

Nishio et al. [17] introduced and evaluated combinations of DL architectures and data augmentation methods for automated pancreas segmentation on CT scans. Deep U-Net and a baseline U-Net were selected as the DL algorithms for pancreas segmentation, and the data augmentation techniques involved random image cropping and patching (RICAP), mixup, and conventional methods. Yang et al. [18] introduced AX-Unet, a DL architecture integrating an improved atrous spatial pyramid pooling module to learn location information and extract multilevel contextual information, reducing information loss during downsampling. Also, a group convolution module was introduced on the feature map at each level to achieve information decoupling between channels, and an explicit boundary-aware loss function was proposed to tackle blurry boundary problems. The authors in [19] investigated whether a CNN can discriminate between individuals with and without PC on CT, compared with radiologist interpretation. Images were pre-processed into patches, and a CNN was trained to classify patches as tumorous or non-tumorous.

3. The proposed model

In this manuscript, we develop the TSADL-PCSC method for PC segmentation and classification on CT scans. The purpose of the TSADL-PCSC technique is to design an effectual and accurate model to improve the diagnostic performance of PC. To accomplish this, the TSADL-PCSC technique comprises four processes, namely W-Net segmentation, GhostNet feature extraction, DESN classification, and TSA-based hyperparameter tuning. Fig 1 describes the working flow of the TSADL-PCSC system.

3.1. Image segmentation: W-Net model

At the initial stage, the input CT scans are passed into the W-Net model for segmentation. The W-Net-based segmentation network is used to attain the segmentation map of the CT scans [20]. Through its encoding and decoding paths, this model preserves both localization and content information. Furthermore, edge information is preserved to maintain consistency and sharpen the image during segmentation. The network is designed as an evolution of U-Net: two U-Net topologies are connected to implement a single AE. In U-Net, an architecture based on an encoder (contracting path) and a decoder (expansive path) is applied.

The first module of the W-Net is the encoder, which encompasses a set of blocks. The essential component of a block is a stack of convolution and BN layers interspersed with ReLU; this basic module is repeated twice to create a single convolutional block. The blocks are joined by 2×2 max-pooling layers, which reduce the number of parameters while preserving the critical target information. The kernel count of the convolutional layers increases from 8 to 128 along the encoder and returns to 8 in the decoder.
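The encoder block described above can be illustrated with a minimal numpy sketch: two 3×3 convolutions with ReLU followed by a 2×2 max pool. This is an assumption-laden toy (BN omitted, naive loop-based convolution, random weights, a hypothetical single-channel CT patch) meant only to show the shape flow of one block.

```python
import numpy as np

def conv3x3(x, kernels):
    # x: (H, W, C_in); kernels: (3, 3, C_in, C_out); 'same' padding
    h, w, _ = x.shape
    c_out = kernels.shape[-1]
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, w, c_out))
    for i in range(h):
        for j in range(w):
            patch = xp[i:i + 3, j:j + 3, :]          # 3x3 receptive field
            out[i, j] = np.tensordot(patch, kernels, axes=3)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    # 2x2 max pooling halves the spatial resolution
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2, :].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def encoder_block(x, k1, k2):
    """One W-Net-style encoder block: (conv3x3 -> ReLU) twice, then 2x2 max pool."""
    x = relu(conv3x3(x, k1))
    x = relu(conv3x3(x, k2))
    return maxpool2x2(x)

rng = np.random.default_rng(0)
ct_slice = rng.random((32, 32, 1))                   # toy single-channel CT patch
k1 = rng.standard_normal((3, 3, 1, 8)) * 0.1         # 8 kernels, as in the first block
k2 = rng.standard_normal((3, 3, 8, 8)) * 0.1
feat = encoder_block(ct_slice, k1, k2)
print(feat.shape)                                    # halved spatially, 8 channels
```

Stacking such blocks while doubling the kernel count reproduces the 8-to-128 progression described for the encoder.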

The second path is the expansive path, whose structure is made up of upsampling and convolution layers. The input is downscaled in the contracting path and upscaled four times in the expansive path. The feature maps from the contracting path are concatenated with the corresponding feature maps in the expansive path to recover information lost in the max-pooling and convolution procedures. The second U-Net mirrors the first, except that the output of the top pooling layer is integrated with the output of the unit placed at the same level in the first U-Net.

As with the other blocks, an additional block follows the last upsampling and the final concatenation of the expansive path. Finally, a 1×1 convolutional layer with a softmax activation function maps the features to the desired number of classes. The method combines the cross-entropy loss (CEL) and the total-variation loss into a combined CT loss:

L_CE = −(1 / (W·H)) Σ_{i=1..W} Σ_{j=1..H} PC_n(i, j) · log S_n(i, j) (1)

L_TV = Σ_{i=1..W} Σ_{j=1..H} ( |S_n(i+1, j) − S_n(i, j)| + |S_n(i, j+1) − S_n(i, j)| ) (2)

L_CT = L_CE + L_TV (3)

In the formulas, W and H characterize the width and height of the input images, correspondingly; S_n refers to sample n's normalized segmentation map; and PC_n represents the pseudo segmentation mask made from the index that maximizes the segmentation map values. The CT loss assists in reducing the time and memory used. Also, the segmentation mask is significantly compressed, which negates the need for post-processing due to the features of the CT loss.
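The combined loss can be sketched in a few lines of numpy. This is a hedged toy under the assumption of the standard per-pixel cross-entropy and total-variation forms; the helper `ct_loss` and the toy 8×8 two-class map are illustrative inventions, not the authors' code.

```python
import numpy as np

def ct_loss(seg, mask, eps=1e-8):
    """Combined loss: pixel-wise cross-entropy plus total variation.

    seg : (H, W, K) normalized segmentation map (softmax output)
    mask: (H, W) pseudo segmentation mask (per-pixel argmax class indices)
    """
    h, w, k = seg.shape
    onehot = np.eye(k)[mask]                                           # (H, W, K)
    ce = -np.mean(np.sum(onehot * np.log(seg + eps), axis=-1))         # cross-entropy
    tv = np.abs(np.diff(seg, axis=0)).sum() + np.abs(np.diff(seg, axis=1)).sum()
    return ce + tv / (h * w)                                           # averaged TV term

rng = np.random.default_rng(1)
logits = rng.standard_normal((8, 8, 2))
seg = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)      # softmax map
mask = seg.argmax(axis=-1)                                             # pseudo mask from the map
loss = ct_loss(seg, mask)
print(float(loss))
```

The TV term penalizes neighbouring-pixel differences, which is what compresses the segmentation mask and removes the need for post-processing.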

3.2. Feature extraction: GhostNet model

To derive a set of feature vectors, the GhostNet model is used. The GhostNet model extracts features with few parameters by efficiently exploiting the redundant feature maps in the network [21]. The Ghost convolution (GhostConv) element turns the typical convolutional function into a two-step operation. The primary stage is a typical convolutional function, but with a reduced number of convolutional kernels; the secondary stage is a lightweight linear function that generates the redundant feature maps. When the dimension of the input feature map is D_F × D_F × M and the kernel of the standard convolution is D_k × D_k × N, the computation amount of standard convolution is D_k × D_k × M × D_F × D_F × N. The primary stage of the GhostConv element produces m feature maps, with a computation amount of D_k × D_k × M × D_F × D_F × m. To ensure the same output size as the typical convolution, the secondary stage of the GhostConv element applies a lightweight linear function to the feature maps output by the primary stage, as depicted in Eq (4):

y_ij = Φ_ij(y′_i) (4)

where Φ_ij represents a linear operation, y′_i implies the ith feature map, and y_ij stands for the jth feature map attained by the linear operation on the ith feature map. The GhostConv model yields N output feature maps, with N = m × s. It has been demonstrated that the s − 1 linear conversions are computationally cheap, so the computation count of the GhostConv model is D_k × D_k × M × D_F × D_F × m + (s − 1) × D_k × D_k × D_F × D_F. The computation relationship between the GhostConv and typical convolutional modules is then expressed as:

r = (D_k × D_k × M × D_F × D_F × N) / (D_k × D_k × M × D_F × D_F × m + (s − 1) × D_k × D_k × D_F × D_F) ≈ s (5)

According to Eq (5), the typical convolution costs roughly s times as much computation as the GhostConv element. Thus, the GhostNet model, built on Ghost blocks, can considerably reduce both the computation count and the number of network parameters.
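The two-stage GhostConv idea can be sketched as follows. This is a simplified assumption-based toy: the primary stage is a 1×1 convolution, and the cheap linear operation Φ is stood in by a per-channel scaling (a real Ghost module typically uses small depthwise convolutions); `ghost_module` and its parameters are illustrative.

```python
import numpy as np

def ghost_module(x, primary_k, cheap_k, s):
    """GhostConv sketch: a thin primary conv makes m intrinsic maps,
    then s-1 cheap per-channel linear ops generate the 'ghost' maps."""
    m = primary_k.shape[-1]
    intrinsic = np.tensordot(x, primary_k, axes=([2], [0]))   # 1x1 conv: (H, W, m)
    ghosts = [intrinsic]
    for i in range(s - 1):
        # cheap linear op Phi_ij: per-channel scaling stands in for a depthwise conv
        ghosts.append(intrinsic * cheap_k[i])                 # (H, W, m)
    return np.concatenate(ghosts, axis=-1)                    # N = m * s channels

rng = np.random.default_rng(2)
s, m, c_in = 4, 8, 16
x = rng.random((16, 16, c_in))
primary_k = rng.standard_normal((c_in, m)) * 0.1              # only m (not N) kernels
cheap_k = rng.standard_normal((s - 1, m)) * 0.1
y = ghost_module(x, primary_k, cheap_k, s)
print(y.shape)                                                # N = m*s output maps
```

Only m of the N output maps incur the cost of a full convolution, which is exactly the roughly s-fold saving stated in Eq (5).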

3.3. Image classification: DESN model

In this work, the DESN approach is employed for the detection and classification of PC. The important feature of the ESN is that it adopts a random reservoir as its fundamental processing unit [22]. The reservoir is stimulated into a complex internal state, which describes the features of an input signal through linear combinations. Furthermore, the input weights and the reservoir are fixed, and only the output weight is adjusted by linear regression during training of the ESN, which avoids local minima and exploding and vanishing gradients and improves efficiency. Fig 2 displays the infrastructure of the ESN.

Consider x = {x_1, x_2, ⋯, x_{N−1}, x_N} as the internal states of the reservoir, u = {u_1, u_2, ⋯, u_{n−1}, u_n} as the input signals, and y = {y_1, y_2, ⋯, y_{m−1}, y_m} as the output signals:

x(t + 1) = f( W_in · u(t + 1) + W · x(t) + W_back · y(t) ) (6)

In Eq (6), f(∙) denotes an activation function and W_in, W, and W_back are the random input, internal, and feedback weights, correspondingly. Leaky-integrator neurons are considered once the ESN is utilized for pattern detection, hence Eq (6) is changed to:

x(t + 1) = (1 − α·γ) · x(t) + γ · f( W_in · u(t + 1) + W · x(t) + W_back · y(t) ) (7)

In Eq (7), α denotes the leaky rate and γ refers to the gain: (8)

The output of the ESN is:

y(t) = g( W_out · x(t) ) (9)

where g(∙) refers to the activation function and W_out denotes the output weight.

W_out is updated during the training of the ESN. The objective function L can be represented as follows:

L = ‖ x(t) · W_out − g^{−1}( y(t) ) ‖_2 (10)

In Eq (10), ‖∙‖2 refers to L2 norm and g−1 (∙) shows the inverse function of g(∙).

The resulting output weight is then:

W_out = ( X^† · g^{−1}(Y) )^T (11)

In Eq (11), the pseudo‐inverse and the transpose of the matrix can be represented as superscripts † and T, correspondingly.
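The ESN training recipe above (fixed random reservoir, leaky state update, pseudo-inverse readout) can be sketched on a toy one-step-ahead sine prediction task. This is a minimal assumption-based sketch: the feedback term W_back is omitted, a bias input is added, and all sizes and scalings (100 units, spectral radius 0.9, leaky rate 0.3) are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_in, alpha = 100, 1, 0.3            # reservoir size, input dim, leaky rate

# Fixed random weights: only W_out is learned.
w_in = rng.uniform(-0.5, 0.5, (n_res, n_in + 1))
w = rng.uniform(-0.5, 0.5, (n_res, n_res))
w *= 0.9 / max(abs(np.linalg.eigvals(w)))   # scale spectral radius below 1

# Toy task: predict sin at the next step from the current value.
t = np.arange(400)
u = np.sin(0.1 * t)[:, None]
target = np.sin(0.1 * (t + 1))

x = np.zeros(n_res)
states = []
for n in range(len(u)):
    pre = w_in @ np.concatenate(([1.0], u[n])) + w @ x   # drive term of Eq (6)
    x = (1 - alpha) * x + alpha * np.tanh(pre)           # leaky update as in Eq (7)
    states.append(x.copy())
X = np.array(states)[50:]                                # drop warm-up transient
Y = target[50:]

w_out = np.linalg.pinv(X) @ Y                            # pseudo-inverse readout, Eq (11)
pred = X @ w_out
print(float(np.mean((pred - Y) ** 2)))                   # small training MSE
```

Because only `w_out` is fitted, training reduces to one linear least-squares solve, which is the efficiency argument made above.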

3.4. Hyperparameter tuning: TSA model

For optimal hyperparameter tuning of the DESN model, the TSA is used. The TSA is inspired by the social behaviour of tunicates searching for prey [23]. While hunting, this marine invertebrate exploits swarm intelligence and jet propulsion to find prey. Each tunicate can quickly expel inhaled seawater through its atrial siphon, creating a type of jet propulsion that drives it quickly forward. Besides, tunicates display swarm intelligence (SI) when they share search details about the food location. The mathematical model of the jet-propulsion behaviour must meet the following three constraints:

  • Avoid collisions among the search agents.
  • Ensure all agents move towards the fittest individual.
  • Keep each search agent close to the region adjacent to the fittest individual.

To prevent collisions between search agents, the following formulas are used to calculate the novel location of an agent:

A = G / M (12)

G = c_2 + c_3 − F (13)

F = 2 · c_1 (14)

where A denotes the vector used to find the newest location of each agent; G refers to gravity; F indicates the water-flow advection in the deep sea; and c_1, c_2, and c_3 represent three random numbers within [0, 1]. M denotes the vector of social forces between the search agents, given as follows:

M = ⌊ P_min + c_1 · (P_max − P_min) ⌋ (15)

In Eq (15), P_min and P_max signify the initial and subordinate speeds that govern social interaction, and P_min and P_max are fixed to 1 and 4, respectively.

After resolving collisions between neighbouring search agents, each agent moves towards the neighbouring individual with the best fitness value (FV), as follows:

PD = | X_best − r_rand · X(t) | (16)

In Eq (16), X_best denotes the food source at the position of the present optimal individual; PD denotes the vector giving the spatial distance between the food source and the tunicate; r_rand denotes a random number between zero and one; and X(t) shows the location of the present search agent at the tth iteration.

To let the search agent converge towards the fittest individual and perform sufficient local exploration around it in the present iteration, the location is evaluated by:

X(t) = X_best + A · PD, if r_rand ≥ 0.5
X(t) = X_best − A · PD, if r_rand < 0.5 (17)

At iteration t, each search agent explores the region adjacent to the fittest individual X_best, and the outcome is assigned to X(t) to update its position.

The swarming behaviour of the tunicates transfers location information between the search agents. This process is driven by the locations of the present and prior search agents: the fittest individual and the place updated by the prior individual are combined through the swarm behaviour as:

X(t + 1) = ( X(t) + X′(t + 1) ) / ( 2 + c_1 ) (18)

Here i = 1, …, N, where N denotes the population size, X(t) shows the location of the existing search agent, and X′(t + 1) represents the place of the prior search agent at the following iteration.

To demonstrate the procedure of TSA, the steps to upgrade the location of the search agent are given below:

  1. Step 1: Initialize the population of search agents X.
  2. Step 2: Assign values to the initial parameters and the maximum number of iterations.
  3. Step 3: Evaluate the FV of each tunicate and choose the individual with the best FV as the best search agent.
  4. Step 4: Update the position of each search agent based on Eq (18).
  5. Step 5: Keep every search agent within the bounds of the search space.
  6. Step 6: Measure the FV of each updated search agent; if any individual is fitter than the prior best search agent in the population, update X_best.
  7. Step 7: If the maximum iteration count is reached, stop; otherwise, return to Step 4.
  8. Step 8: Output the optimal individual (X_best).

The TSA derives a fitness function (FF) to obtain greater classification efficacy. It defines a positive value to signify the quality of a candidate solution, and the minimization of the classifier error rate is regarded as the FF:

fitness(x_i) = ClassifierErrorRate(x_i) = (number of misclassified samples / total number of samples) × 100 (19)
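The TSA steps above can be sketched as a compact numpy loop. This is a hedged toy implementation following Eqs (12)-(18): the objective here is a sphere function standing in for the classifier error rate of Eq (19), and the agent count, iteration budget, and bounds are all illustrative assumptions.

```python
import numpy as np

def tsa_minimize(f, dim, n_agents=30, iters=200, lb=-5.0, ub=5.0, seed=4):
    """Minimal Tunicate Swarm Algorithm sketch following Eqs (12)-(18)."""
    rng = np.random.default_rng(seed)
    p_min, p_max = 1, 4                                 # social-interaction speeds
    X = rng.uniform(lb, ub, (n_agents, dim))
    best = min(X, key=f).copy()
    for _ in range(iters):
        for i in range(n_agents):
            c1, c2, c3 = rng.random(3)
            F = 2 * c1                                   # water-flow advection, Eq (14)
            G = c2 + c3 - F                              # gravity force, Eq (13)
            M = np.floor(p_min + c1 * (p_max - p_min))   # social forces, Eq (15)
            A = G / M                                    # new-position vector, Eq (12)
            r = rng.random()
            PD = np.abs(best - r * X[i])                 # distance to food source, Eq (16)
            new = best + A * PD if r >= 0.5 else best - A * PD     # Eq (17)
            X[i] = np.clip((X[i] + new) / (2 + c1), lb, ub)        # swarm move, Eq (18)
            if f(X[i]) < f(best):                        # Step 6: update the best agent
                best = X[i].copy()
    return best

sphere = lambda x: float(np.sum(x ** 2))                 # stand-in fitness function
best = tsa_minimize(sphere, dim=5)
print(sphere(best))                                      # close to the minimum at the origin
```

In the proposed model, `f` would instead evaluate the DESN's classification error rate for a given hyperparameter vector, per Eq (19).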

4. Performance validation

The pancreatic cancer classification results of the TSADL-PCSC method are tested on the benchmark BioGPS dataset [3]. The dataset consists of 500 samples with two classes [24], as represented in Table 1.

In Fig 3, the confusion matrix of the TSADL-PCSC technique is analysed on pancreatic cancer classification. The results indicate that the TSADL-PCSC technique recognized pancreatic cancer and non-pancreatic cancer proficiently.

Fig 3. Confusion matrices of TSADL-PCSC method (a-b) 80:20 of TRP/TSP and (c-d) 60:40 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.g003

Table 2 and Fig 4 report the PC classification results of the TSADL-PCSC method under an 80:20 split of TRP/TSP. The experimental values show that pancreatic cancer is detected proficiently. For example, on 80% of TRP, the TSADL-PCSC method attains an average accuracy of 96.98%, precision of 97.18%, sensitivity of 96.98%, specificity of 96.98%, and F-score of 97%. At the same time, on 20% of TSP, the TSADL-PCSC technique attains an average accuracy of 99.02%, precision of 99%, sensitivity of 99.02%, specificity of 99.02%, and F-score of 99%.

Fig 4. Average outcome of TSADL-PCSC approach on 80:20 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.g004

Table 2. PC classifier outcome of TSADL-PCSC technique on 80:20 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.t002

Table 3 and Fig 5 report the PC classification results of the TSADL-PCSC system under a 60:40 split of TRP/TSP. The experimental values show that pancreatic cancer is detected proficiently. For instance, on 60% of TRP, the TSADL-PCSC technique attains an average accuracy of 99.64%, precision of 99.69%, sensitivity of 99.64%, specificity of 99.64%, and F-score of 99.66%. Simultaneously, on 40% of TSP, the TSADL-PCSC method attains an average accuracy of 99.55%, precision of 99.44%, sensitivity of 99.55%, specificity of 99.55%, and F-score of 99.49%.

Fig 5. Average outcome of TSADL-PCSC technique on 60:40 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.g005

Table 3. PC classifier outcome of TSADL-PCSC technique on 60:40 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.t003

Fig 6 examines the accuracy of the TSADL-PCSC method during the training and validation processes on 60:40 of TRP/TSP. The figure shows that the TSADL-PCSC technique attains the highest accuracy values over increasing epochs. Furthermore, the validation accuracy tracking above the training accuracy exhibits that the TSADL-PCSC methodology generalizes effectively at 60:40 of TRP/TSP.

Fig 6. Accuracy curve of TSADL-PCSC approach on 60:40 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.g006

The loss analysis of the TSADL-PCSC method during training and validation on 60:40 of TRP/TSP is illustrated in Fig 7. The outcome indicates that the training and validation losses of the TSADL-PCSC method remain close to each other, showing that the method learns efficiently on 60:40 of TRP/TSP.

Fig 7. Loss curve of TSADL-PCSC approach on 60:40 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.g007

A brief precision-recall (PR) analysis of the TSADL-PCSC technique on 60:40 of TRP/TSP is shown in Fig 8. The results state that the TSADL-PCSC approach yields the highest PR values on both classes.

Fig 8. PR curve of TSADL-PCSC approach on 60:40 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.g008

In Fig 9, a ROC analysis of the TSADL-PCSC technique on 60:40 of TRP/TSP is shown. The figure indicates that the TSADL-PCSC method results in maximal ROC values on all classes.

Fig 9. ROC curve of TSADL-PCSC approach on 60:40 of TRP/TSP.

https://doi.org/10.1371/journal.pone.0292785.g009

A brief comparison study is made in Table 4 and Fig 10 to highlight the outperforming results of the TSADL-PCSC method [11]. The outcome indicates that the CNN-50x50 model obtained the poorest performance. In addition, the ODL-PTNTC, WELM, KELM, and ELM algorithms attained slightly improved performance. Although the IDLDMS-PTC technique reaches near-optimal performance, the TSADL-PCSC technique gains outperforming results with a maximum sensitivity of 99.55%, specificity of 99.55%, and accuracy of 99.55%. These results indicate the promising performance of the TSADL-PCSC technique in terms of different measures.

Fig 10. Comparative outcome of TSADL-PCSC approach with other systems.

https://doi.org/10.1371/journal.pone.0292785.g010

Table 4. Comparative outcome of TSADL-PCSC method with other techniques.

https://doi.org/10.1371/journal.pone.0292785.t004

5. Conclusion

In this study, we have developed the TSADL-PCSC method for PC segmentation and classification on CT scans. The TSADL-PCSC technique aims to accomplish enhanced PC classification results using a hyperparameter-tuned DL model. To accomplish this, the TSADL-PCSC technique comprises four processes, namely W-Net segmentation, GhostNet feature extraction, DESN classification, and TSA-based hyperparameter tuning. The TSA helps to avoid the manual trial-and-error hyperparameter selection process, which in turn increases the overall classification performance. The experimental results of the TSADL-PCSC method were obtained on a benchmark CT scan database, and they highlighted the advantage of the TSADL-PCSC technique over other approaches. In future work, the performance of the TSADL-PCSC system can be boosted by deep ensemble classifier algorithms.

References

  1. Li X, Guo R, Lu J, Chen T, Qian X. Causality-Driven Graph Neural Network for Early Diagnosis of Pancreatic Cancer in Non-Contrast Computerized Tomography. IEEE Transactions on Medical Imaging. 2023 Jan 11. pmid:37018703
  2. Na R, Bb MK, Bb VB, Mb KD, Rc D, Scholar UG. Detection and Identification of Pancreatic Cancer Using Probabilistic Neural Network. Smart Intelligent Computing and Communication Technology. 2021;38:273.
  3. Xuan W, You G. Detection and diagnosis of pancreatic tumor using deep learning-based hierarchical convolutional neural network on the internet of medical things platform. Future Generation Computer Systems. 2020 Oct 1;111:132–42.
  4. Abbas SK, Obied RS. Novel Computer Aided Diagnostic System Using Synergic Deep Learning Technique for Early Detection of Pancreatic Cancer. Webology 18. Special Issue on Information Retrieval and Web Search. 2021 Sep:367–79.
  5. Iwasa Y, Iwashita T, Takeuchi Y, Ichikawa H, Mita N, Uemura S, et al. Automatic segmentation of pancreatic tumors using deep learning on a video image of contrast-enhanced endoscopic ultrasound. Journal of Clinical Medicine. 2021 Aug 15;10(16):3589. pmid:34441883
  6. Vardhani N, Gayathri G, Leela K, Bhavya T, Sravani YD. Pancreatic Cancer Classification using Deep Learning. In 2023 7th International Conference on Computing Methodologies and Communication (ICCMC) 2023 Feb (pp. 106–113). IEEE.
  7. Fu H, Mi W, Pan B, Guo Y, Li J, Xu R, et al. Automatic pancreatic ductal adenocarcinoma detection in whole slide images using deep convolutional neural networks. Frontiers in Oncology. 2021 Jun 25;11:665929. pmid:34249702
  8. Park HJ, Shin K, You MW, Kyung SG, Kim SY, Park SH, et al. Deep learning–based detection of solid and cystic pancreatic neoplasms at contrast-enhanced CT. Radiology. 2023 Jan;306(1):140–9. pmid:35997607
  9. Liang Y, Schott D, Zhang Y, Wang Z, Nasief H, Paulson E, et al. Auto-segmentation of pancreatic tumor in multi-parametric MRI using deep convolutional neural networks. Radiotherapy and Oncology. 2020 Apr 1;145:193–200. pmid:32045787
  10. Chaithanyadas KV, Gnana King GR. Detection of Pancreatic Tumor from Computer Tomography Images Using 3D Convolutional Neural Network. In Computational Vision and Bio-Inspired Computing: Proceedings of ICCVBIC 2022. 2023 Apr 8 (pp. 289–303). Singapore: Springer Nature Singapore.
  11. Vaiyapuri T, Dutta AK, Punithavathi IH, Duraipandy P, Alotaibi SS, Alsolai H, et al. Intelligent deep-learning-enabled decision-making medical system for pancreatic tumor classification on CT images. Healthcare. 2022 Apr 3;10(4):677. MDPI. pmid:35455854
  12. Zhang Y, Wu J, Liu Y, Chen Y, Chen W, Wu EX, et al. A deep learning framework for pancreas segmentation with multi-atlas registration and 3D level-set. Medical Image Analysis. 2021 Feb 1;68:101884. pmid:33246228
  13. Zhang G, Bao C, Liu Y, Wang Z, Du L, Zhang Y, et al. 18F-FDG-PET/CT-based deep learning model for fully automated prediction of pathological grading for pancreatic ductal adenocarcinoma before surgery. EJNMMI Research. 2023 May 25;13(1):49. pmid:37231321
  14. Althobaiti MM, Almulihi A, Ashour AA, Mansour RF, Gupta D. Design of Optimal Deep Learning-Based Pancreatic Tumor and Nontumor Classification Model Using Computed Tomography Scans. Journal of Healthcare Engineering. 2022. pmid:35070232
  15. Bagheri MH, Roth H, Kovacs W, Yao J, Farhadi F, Li X, et al. Technical and clinical factors affecting success rate of a deep learning method for pancreas segmentation on CT. Academic Radiology. 2020 May 1;27(5):689–95. pmid:31537506
  16. Khdhir R, Belghith A, Othmen S. Pancreatic Cancer Segmentation and Classification in CT Imaging using Antlion Optimization and Deep Learning Mechanism. International Journal of Advanced Computer Science and Applications. 2023;14(3).
  17. Nishio M, Noguchi S, Fujimoto K. Automatic pancreas segmentation using coarse-scaled 2d model of deep learning: usefulness of data augmentation and deep U-net. Applied Sciences. 2020 May 12;10(10):3360.
  18. Yang M, Zhang Y, Chen H, Wang W, Ni H, Chen X, et al. AX-Unet: A deep learning framework for image segmentation to assist pancreatic tumor diagnosis. Frontiers in Oncology. 2022 Jun 2;12:894970. pmid:35719964
  19. Liu KL, Wu T, Chen PT, Tsai YM, Roth H, Wu MS, et al. Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation. The Lancet Digital Health. 2020 Jun 1;2(6):e303–13. pmid:33328124
  20. Bairaboina SS, Battula SR. Ghost-ResNeXt: An Effective Deep Learning Based on Mature and Immature WBC Classification. Applied Sciences. 2023 Mar 22;13(6):4054.
  21. Lei Y, Pan D, Feng Z, Qian J. Lightweight YOLOv5s Human Ear Recognition Based on MobileNetV3 and Ghostnet. Applied Sciences. 2023 May 30;13(11):6667.
  22. Li X, Bi F, Zhang L, Yang X, Zhang G. An Engine Fault Detection Method Based on the Deep Echo State Network and Improved Multi-Verse Optimizer. Energies. 2022;15(3):1205.
  23. Cui Y, Shi R, Dong J. CLTSA: A Novel Tunicate Swarm Algorithm Based on Chaotic-Lévy Flight Strategy for Solving Optimization Problems. Mathematics. 2022;10(18):3405.
  24. Pancreas-CT dataset. https://www.kaggle.com/datasets/salihayesilyurt/pancreas-ct