Peer Review History

Original Submission: August 7, 2024
Decision Letter - Xiaohui Zhang, Editor

PONE-D-24-32864
XLLC-Net: A Lightweight and Explainable CNN for Accurate Lung Cancer Classification Using Histopathological Images
PLOS ONE

Dear Dr. Mridha,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 29 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Xiaohui Zhang

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a convolutional neural network for classifying lung cancer from histopathological images. Using the LC25000 dataset, XLLC-Net achieves high classification accuracy with a compact architecture of 3 million parameters, allowing for efficient training in 60 seconds per epoch. Incorporating Explainable AI techniques like Saliency Map and GRAD-CAM enhances interpretability. Overall, XLLC-Net showcases the potential of lightweight deep learning models in medical imaging, balancing performance and resource efficiency for real-world healthcare applications.

I find the study interesting and suitable for submission to the PLOS ONE journal; however, I have some concerns about its overall quality and significance. I have not seen any link with the code, so my comments are based solely on the written manuscript and the figures.

Firstly, the paper is written inconsistently, with numerous repetitions and phrases that seem to originate from automated text generation tools, such as chatbots. While this isn't inherently negative, thorough editing is necessary after the initial draft to enhance clarity and cohesion.

The authors present a conventional CNN architecture that consists of convolutional layers, batch normalization, max-pooling, and dropout layers, repeated four times. This design lacks novelty, as it adheres to a typical structure found in many existing models. Additionally, while the authors compare their model with some state-of-the-art architectures, these comparisons involve models developed for different tasks (see Table 5), which may not provide a valid benchmark.

On page 9 (Table 6), they mention more advanced models trained specifically for medical image analysis, yet they do not directly compare their model against these in terms of the number of parameters. Furthermore, they fail to present standard deviations or disclose the number of trials (i.e., initialization) conducted, which are important for assessing the reliability and generalization of their results. As the differences in the metrics are very small, it is important to re-run the experiments at least 5 times per model.

The authors do not present and mention anything about how they have chosen the hyperparameters. For example, dropout rates, number of epochs, architecture (number of layers, number of nodes on the fully connected layers, etc).

The explainability is something I found really important given the context and the application. Well done.

Minor comments:

The figures are difficult to understand in their current format (given at the end of the manuscript), but maybe this was requested by the journal.

Use the abbreviation defined for later references to the same phrases (for example, deep learning (DL) is defined in line 8, and then the full phrase is used again in line 83).

Line 101, 110: Convolutional neural networks → CNNs

Line 139: “Cibi et al. presented a customized deep CNN model using CapsNet [...]”: I do not understand what the “CapsNet” is, it is not defined in the paper.

Methods (equations): In Eq. 12 the authors give the summation from j=1 to 3, however in Eq. 14 they use a more generic form using C to denote the number of classes. I suggest changing the Eq. 12 to denote the summation from j=1 to C and give below the explanation of C.

Figures 5-8: The y-axis should be the same across all subplots to ensure fair comparison with visual inspection.

Reviewer #2: The paper presents the Explainable and Lightweight Lung Cancer Net (XLLC-Net) for lung cancer image classification. The mathematical derivation is thorough, and the experimental results demonstrate that the proposed network achieves strong performance. This lightweight network has the potential to be integrated into medical applications with limited computational resources.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Journal: PLOS ONE

Manuscript ID: PONE-D-24-32864

Title: XLLC-Net: A Lightweight and Explainable CNN for Accurate Lung Cancer Classification Using Histopathological Images

Authors: Jamin Rahman Jim, Md. Eshmam Rayed, M. F. Mridha, Kamruddin Nur.

Dear Editor-in-Chief and Reviewers,

We would like to thank the anonymous reviewers for their valuable and specific comments, and the editor for the opportunity to revise our paper. We have revised the paper and restructured several sections. All changes are presented in the updated version.

Response to Reviewer 1

Reviewer Concern-1: Firstly, the paper is written inconsistently, with numerous repetitions and phrases that seem to originate from automated text generation tools, such as chatbots. While this isn't inherently negative, thorough editing is necessary after the initial draft to enhance clarity and cohesion.

Author's Response: Thank you for your valuable feedback. We appreciate your suggestions.

Author's Action: We have edited our manuscript to enhance clarity and cohesion.

Reviewer Concern-2: The authors present a conventional CNN architecture that consists of convolutional layers, batch normalization, max-pooling, and dropout layers, repeated four times. This design lacks novelty, as it adheres to a typical structure found in many existing models. Additionally, while the authors compare their model with some state-of-the-art architectures, these comparisons involve models developed for different tasks (see Table 5), which may not provide a valid benchmark.

Author's Response: We appreciate the reviewer's feedback and understand the concern regarding the novelty of our CNN architecture. Our primary goal was to explore whether a lightweight network could effectively perform the given task, and our results demonstrate that it performs exceptionally well. While the structure of our model follows standard CNN principles, its effectiveness in achieving superior results with reduced complexity is a key contribution, particularly for resource-constrained environments.

Regarding the benchmark comparisons in Table 5, we would like to clarify that all the state-of-the-art models (AlexNet, VGG-17, VGG-19, and ResNet-50) were trained on the same LC25000 dataset for a fair and valid evaluation. Comparing the performance of a newly proposed model against widely used deep learning architectures trained on the same task is a standard practice in the field. This allows for a meaningful assessment of our model's efficiency and accuracy.

Author's Action: Since these clarifications were already included in the manuscript, no modifications were made. However, we remain open to further suggestions if additional elaboration is required.


Reviewer Concern-3: On page 9 (Table 6), they mention more advanced models trained specifically for medical image analysis, yet they do not directly compare their model against these in terms of the number of parameters.

Author's Response: Thank you for pointing out the need for a direct comparison of model parameters. We appreciate this suggestion and have incorporated it into our revised manuscript.

Author's Action: We have added a "Total Params (M)" column in Table 6, including parameter counts for models where available. For models that did not report parameters in their original papers, we have marked them as "N/A" to maintain accuracy.

Reviewer Concern-4: Furthermore, they fail to present standard deviations or disclose the number of trials (i.e., initialization) conducted, which are important for assessing the reliability and generalization of their results. As the differences in the metrics are very small, it is important to re-run the experiments at least 5 times per model.

Author's Response: We appreciate the reviewer's suggestion regarding the importance of multiple trials to assess the reliability and generalization of our results. To address this, we have conducted five independent training trials of the XLLC-Net model and reported the corresponding accuracy, precision, recall, and F1-score for each trial. Additionally, we have now included the mean and standard deviation of these performance metrics to provide a comprehensive understanding of the model's stability and robustness.

Author's Action: We have updated the manuscript to explicitly mention that the model was trained five times, and we now present the results for each trial in Table 4. The table includes accuracy, precision, recall, and F1-score values across all trials, along with the mean ± standard deviation to quantify variability. Additionally, we have revised the Results Analysis section to emphasize the consistency of our model across multiple runs, ensuring its reliability and generalization capabilities.
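The mean ± standard deviation reported across the five trials can be computed in a few lines. The sketch below uses NumPy with hypothetical per-trial accuracies (illustrative values only, not the numbers from the paper's Table 4); the sample standard deviation (ddof=1) is the usual choice when the five runs are treated as samples of the model's behavior.

```python
import numpy as np

# Hypothetical accuracies from five independent training trials
# (illustrative values, not the paper's actual results)
accuracies = np.array([0.9952, 0.9948, 0.9956, 0.9950, 0.9954])

mean = accuracies.mean()
std = accuracies.std(ddof=1)  # sample standard deviation across trials

print(f"Accuracy: {mean:.4f} \u00b1 {std:.4f}")
```

The same computation would be repeated per metric (precision, recall, F1-score) to fill the mean ± std column of such a table.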

Reviewer Concern-5: The authors do not present and mention anything about how they have chosen the hyperparameters. For example, dropout rates, number of epochs, architecture (number of layers, number of nodes on the fully connected layers, etc).

Author's Response: We appreciate the reviewer's insightful comment regarding hyperparameter selection. To address this, we have added a dedicated explanation in the Methodology section, explicitly detailing the rationale behind key hyperparameters, including dropout rates, number of epochs, network architecture, batch size, optimizer choice, and loss function. These decisions were made using standard deep learning techniques for medical imaging tasks.

Author's Action: We have included a "Hyperparameter Selection" subsection in the Methodology section, where we provide a clear justification for each parameter choice.

Reviewer Concern-6: Use the abbreviation defined for later references to the same phrases (for example, deep learning (DL) is defined in line 8, and then the full phrase is used again in line 83). Line 101, 110: Convolutional neural networks → CNNs

Author's Response: Thank you for highlighting the inconsistency in abbreviation usage. We appreciate this suggestion and have carefully revised the manuscript to ensure consistency.

Author's Action: We have standardized all abbreviations, replacing Machine Learning with ML, Deep Learning with DL, Convolutional Neural Networks with CNNs, and Explainable AI with XAI throughout the manuscript. The corrected terms are marked in blue in the "Revised Manuscript with Track Changes" file.

Reviewer Concern-7: Line 139: "Cibi et al. presented a customized deep CNN model using CapsNet [...]": I do not understand what the "CapsNet" is, it is not defined in the paper.

Author's Response: Thank you for highlighting the missing definition of CapsNet. We appreciate this feedback and have now incorporated a brief explanation of Capsule Networks (CapsNet) in the manuscript to ensure clarity.

Author's Action: We have revised the sentence to include a short explanation of CapsNet before its first mention in the manuscript.

Reviewer Concern-8: Methods (equations): In Eq. 12 the authors give the summation from j=1 to 3, however in Eq. 14 they use a more generic form using C to denote the number of classes. I suggest changing the Eq. 12 to denote the summation from j=1 to C and give below the explanation of C.

Author's Response: Thank you for your suggestion regarding consistency in notation. We appreciate this feedback and have modified Equation 12 to use the generic notation C instead of the fixed number 3, ensuring uniformity with Equation 14.

Author's Action: We have updated Equation 12 to denote the summation from j=1 to C and added an explanation stating that C represents the total number of classes in the classification task.
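For reference, the generic categorical cross-entropy form the reviewer requests would read as follows (a reconstruction consistent with the comment; the manuscript's exact symbols may differ):

```latex
\mathcal{L} = -\sum_{j=1}^{C} y_j \log \hat{y}_j
```

where $C$ denotes the total number of classes and $y_j$, $\hat{y}_j$ are the true and predicted probabilities for class $j$.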

Reviewer Concern-9: Figures 5-8: The y-axis should be the same across all subplots to ensure fair comparison with visual inspection.

Author's Response: Thank you for your observation. We appreciate your suggestion regarding the y-axis consistency across subplots.

Author's Action: The figures are automatically generated from the code, and while there are slight differences in the y-axis scale, they do not impact the accuracy of the presented results. Each subplot correctly reflects the training and validation trends for the respective models, ensuring a fair comparison. Given the computational complexity and resource constraints, re-running all models to adjust the y-axis is not feasible at this stage. However, the trends remain clearly interpretable despite this minor variation.
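As the reviewer later notes, equalizing the y-axis requires only replotting, not retraining. Assuming the figures are drawn with matplotlib (the manuscript does not state the plotting library), `sharey=True` in `plt.subplots` forces an identical y-axis across subplots; the curve data below are hypothetical placeholders.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for scripted figure generation
import matplotlib.pyplot as plt

# Hypothetical validation-accuracy histories for two models (placeholder data)
histories = {
    "XLLC-Net": [0.80, 0.92, 0.97, 0.99],
    "ResNet-50": [0.70, 0.85, 0.93, 0.96],
}

# sharey=True gives every subplot the same y-axis limits,
# so the curves can be compared visually without re-running any model.
fig, axes = plt.subplots(1, len(histories), sharey=True, figsize=(8, 3))
for ax, (name, acc) in zip(axes, histories.items()):
    ax.plot(range(1, len(acc) + 1), acc)
    ax.set_title(name)
    ax.set_xlabel("Epoch")
axes[0].set_ylabel("Accuracy")
fig.savefig("comparison.png")
```

The saved accuracy or loss values from the original runs can be replotted this way in seconds, which is what the reviewer's follow-up comment asks for.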


Attachments
Attachment
Submitted filename: Response to Reviewers PLOS ONE.pdf
Decision Letter - Xiaohui Zhang, Editor

XLLC-Net: A Lightweight and Explainable CNN for Accurate Lung Cancer Classification Using Histopathological Images

PONE-D-24-32864R1

Dear Dr. Mridha,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging in to Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Xiaohui Zhang

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: My last comment about the y-axis range being the same across plots does not require re-running the models, just plotting them again.

Also, do the authors intend to make the code available?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

Formally Accepted
Acceptance Letter - Xiaohui Zhang, Editor

PONE-D-24-32864R1

PLOS ONE

Dear Dr. Mridha,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Xiaohui Zhang

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.