
Fig 1.

Workflow for the multiclass cervical cell classification framework.

The pipeline comprises image acquisition, DPAGCHE-based preprocessing for contrast enhancement and denoising, classification by transfer learning with five CNN models, and a concluding comparative performance analysis. It aims to automate and improve cervical cancer screening by raising feature quality and model accuracy.


Fig 2.

Original classification of the Herlev dataset.

Original classification of the Herlev dataset, consisting of 7 classes: 3 healthy (superficial, intermediate, columnar) and 4 abnormal (mild, moderate, severe dysplasia, carcinoma in situ), used for cervical cell image analysis.


Fig 3.

Herlev database distribution according to SIL classification.

Redistribution of the original 7 Herlev classes into 3 categories—Normal, LSIL, and HSIL—based on the Bethesda System (SIL classification).


Fig 4.

Flowchart of image contrast enhancement.


Table 1.

Architectural comparison of CNN models for cervical cell classification. The five CNN models used in this study are compared by parameter count, key design features, strengths, and limitations. The table outlines the trade-offs between computational complexity and classification capability that informed the selection of models suited to Pap smear image classification.


Fig 5.

Random selection of original image dataset.

Four randomly selected images from the original Herlev dataset, spanning the three classes: Normal-015, LSIL-006, HSIL-024, and HSIL-085. These samples exhibit low contrast, noise, and poor nucleus-cytoplasm differentiation, which may hinder accurate feature extraction by deep learning models. This visual limitation highlights the need for image enhancement prior to classification.


Fig 6.

Result of DPAGCHE-enhanced image dataset.

DPAGCHE-enhanced versions of the same images shown in Fig 5. The enhanced images demonstrate notable improvements in contrast, noise reduction, and clearer nucleus-cytoplasm boundaries, supporting better visual quality for subsequent classification.


Table 2.

Hyperparameter settings used for the pre-trained models. All CNN models in this study were trained with the same image input size, batch size, optimizer, and learning-rate schedule. These settings were selected to promote convergence and reduce overfitting on the Herlev dataset for three-class cervical cell classification.
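The table's actual values are not reproduced here; as a hedged sketch only, a transfer-learning configuration of this kind is often expressed as a single settings object. Every value below is an illustrative assumption, not a setting taken from Table 2:

```python
# Illustrative hyperparameter set for fine-tuning a pre-trained CNN.
# All values are assumptions for the sketch, NOT the study's settings.
hyperparams = {
    "input_size": (224, 224, 3),        # common input size for ImageNet-pretrained models
    "batch_size": 32,
    "optimizer": "SGD (momentum 0.9)",
    "initial_learning_rate": 1e-3,
    "lr_schedule": "step decay, x0.1 every 10 epochs",
    "max_epochs": 30,
}
```

Keeping every model on one shared configuration, as the caption describes, isolates the effect of the architecture (and of DPAGCHE preprocessing) from the effect of tuning.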


Table 3.

Performance metrics for the respective models. Classification performance of the five CNN models on both the original and DPAGCHE-enhanced Herlev datasets, reported as accuracy, specificity, recall, precision, and F1-score. ResNet50 achieved the highest overall performance on the enhanced dataset, with significant improvements in all metrics over its performance on the original dataset.
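As background for these metrics, all five quantities follow from the confusion matrix, with precision, recall, specificity, and F1-score computed per class in one-vs-rest fashion. A minimal sketch; the 3×3 counts below are illustrative, not results from the study:

```python
# Metrics from a multiclass confusion matrix (rows = true class,
# columns = predicted class), one-vs-rest per class.
def accuracy(cm):
    return sum(cm[i][i] for i in range(len(cm))) / sum(sum(r) for r in cm)

def per_class_metrics(cm):
    n = sum(sum(row) for row in cm)
    out = []
    for i in range(len(cm)):
        tp = cm[i][i]
        fn = sum(cm[i]) - tp                 # rest of the true-class row
        fp = sum(row[i] for row in cm) - tp  # rest of the predicted column
        tn = n - tp - fn - fp
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        out.append({
            "precision": precision,
            "recall": recall,
            "specificity": tn / (tn + fp),
            "f1": 2 * precision * recall / (precision + recall),
        })
    return out

# Hypothetical counts for Normal / LSIL / HSIL:
cm = [[50, 3, 2],
      [4, 40, 6],
      [1, 5, 44]]
acc = accuracy(cm)          # 134/155, roughly 0.8645
normal = per_class_metrics(cm)[0]
```

For the first class, tp = 50, fp = 5, fn = 5, tn = 95, giving precision and recall of 50/55 and specificity of 0.95.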


Fig 7.

Bar graph comparison of CNN model metrics for the DPAGCHE-enhanced vs. original dataset.

Bar graph comparison of classification metrics across five CNN models using the DPAGCHE-enhanced dataset (blue bars) versus the original dataset (orange bars). The enhanced images show significant improvements in recall, precision, and F1-score across all models, indicating better overall classification performance with DPAGCHE preprocessing.


Table 4.

Paired t-test comparison between ResNet50 performance on original images and DPAGCHE-enhanced images (n = 10 repetitions). The results show that DPAGCHE-enhanced images significantly improved model performance across all metrics, with all p-values < 0.001.
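For readers reproducing this comparison, the paired t statistic over matched repetitions is the mean of the per-repetition score differences divided by its standard error. A minimal sketch; the two score lists are made-up placeholders, not the study's measurements (`scipy.stats.ttest_rel` returns the same statistic together with a p-value when SciPy is available):

```python
import math

def paired_t(a, b):
    """Paired t statistic for matched samples a and b (df = n - 1)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Placeholder per-repetition accuracies (enhanced vs. original):
enhanced = [0.90, 0.92, 0.91, 0.93, 0.92]
original = [0.85, 0.86, 0.84, 0.87, 0.86]
t = paired_t(enhanced, original)
```

Pairing the repetitions, rather than comparing two independent groups, removes run-to-run variation from the comparison, which is why this test suits the study's n = 10 repeated-training design.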


Table 5.

Improvement of model performance with DPAGCHE enhancement. Percentage improvement of each classification metric when using DPAGCHE-enhanced images compared to the original dataset. The most substantial gains were in F1-score (+53.65%) and precision (+44.29%), indicating better model balance and fewer false positives. Gains in accuracy (+7.86%) and recall (+28.17%) further support the effectiveness of the enhancement, although specificity declined slightly (−1.94%), suggesting a trade-off between sensitivity and true-negative detection.
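Assuming these percentages are the usual relative change, 100 × (enhanced − original) / original, they can be recomputed from any pair of metric values. A minimal sketch with placeholder numbers, not the study's underlying values:

```python
def pct_change(original, enhanced):
    """Relative percentage change of a metric after enhancement."""
    return 100.0 * (enhanced - original) / original

# Placeholder example: a metric rising from 0.80 to 0.90 is a +12.5% gain;
# a drop from 1.00 to 0.98 is a -2% change (cf. the specificity decline).
gain = pct_change(0.80, 0.90)
loss = pct_change(1.00, 0.98)
```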


Fig 8.

Visual representation of comparative improvements in model performance with DPAGCHE enhancement.

Relative percentage changes in each metric; green bars indicate performance gains, while red bars represent performance declines.


Fig 9.

Metrics values for each model across HSIL, LSIL, and normal classes.
