Peer Review History

Original Submission - June 4, 2025
Decision Letter - Xiaohui Zhang, Editor

PONE-D-25-30235

Enhanced Local Feature Extraction with Lite Network for Precise Segmentation of Small Brain Tumors in MRI

PLOS ONE

Dear Dr. Yuan,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 12 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Xiaohui Zhang

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that your Data Availability Statement is currently as follows: All relevant data are within the manuscript and in Supporting Information files.

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

3. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

4. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise. 


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This manuscript presents a lightweight network to perform a segmentation task on MRI images using shared CNN layers. The following questions and comments should be addressed during revision:

1. Figure 3, since the authors proposed using shared CNN layers during encoding, why not leverage the parameter saving to explore networks with larger depths? Does increasing the network depth increase the performance of the model?

2. Table 1, the overall accuracy metrics are really close for all models reported. The authors should consider repeating the model training process a few times to obtain the mean and standard deviation of the performance metrics to determine if the improvements are statistically significant.

3. The training time for each model should also be reported.

4. What is the training/testing ratio when the models were trained?

5. Figure 4, the meaning of the red bounding box should be explained in the figure caption.

6. A core assumption of this manuscript is the scale invariance of the CNN layers. Have the authors tested this using the available datasets? For example, for the tumor in Figure 4, what happens to all trained models if we crop the image down to a sub-region containing the tumor (resampling back to the same resolution, of course)? This effectively changes the scale of the image/tumor. Is the proposed model able to correctly segment the tumor in this case?

7. Table 3, for the ablation study, another case with just shared CNN layers (without transformers) should be added to see the impact of just the shared CNN layers.

Reviewer #2: This paper presents LiteMRINet, a lightweight network for segmenting small brain tumors in MRI images. The method introduces a shared 10-layer CNN applied across multiple input scales to enhance local feature extraction while minimizing parameter growth. Additionally, Transformer modules are used on low-resolution feature maps to capture global context. The architecture adopts a U-Net-style decoder and is evaluated on the LGG Segmentation and BraTS21 datasets, demonstrating competitive segmentation performance with significantly fewer parameters compared to existing models. However, before publication, the following concerns should be addressed.

1. Did the study include validation sets? These are typically essential for tuning and early stopping.

2. Did the authors conduct experiments to test the generalization ability of the proposed method?

3. The usage of “U-Net” and “UNet” should be standardized; “U-Net” is the more widely accepted form.

4. The notation “128×128@3” used to describe data shape is uncommon and should be revised to a more standard format like 128×128×3.

5. The first paragraph of the introduction discusses tumor characteristics and clinical context with minimal referencing. I would suggest adding more citations to support those statements.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Reviewer #1: This manuscript presents a lightweight network to perform a segmentation task on MRI images using shared CNN layers. The following questions and comments should be addressed during revision:

1. Figure 3, since the authors proposed using shared CNN layers during encoding, why not leverage the parameter saving to explore networks with larger depths? Does increasing the network depth increase the performance of the model?

Response: Our research objective is to enhance the extraction of large-scale detailed features without increasing the network parameters, and ideally while reducing them. Although increasing the number of CNN layers might yield better performance, it would lead to more model parameters and higher GPU requirements.

2. Table 1, the overall accuracy metrics are really close for all models reported. The authors should consider repeating the model training process a few times to obtain the mean and standard deviation of the performance metrics to determine if the improvements are statistically significant.

Response: Thank you for your suggestion. We have replaced the GPU with a 4060 Ti (16 GB) and conducted multiple rounds of model training, continuing until the differences between the three metrics of the two best training runs were all within 0.5. We then took the average of these two sets of metrics as the final result. The experimental results indeed differed from the previous ones, so we have rewritten the results section.

3. The training time for each model should also be reported.

Response: Thank you for your suggestion. Our model’s advantage lies in its lightweight nature; however, due to the internal implementation of convolutional operations, a lightweight model does not necessarily ensure faster computation.

4. What is the training/testing ratio when the models were trained?

Response: 3,000 images from the LGG Segmentation Dataset were allocated to the training set, while the remaining 929 images were designated for the test set. We have revised the materials subsection and added the following information: For the BraTS21 dataset, the training set contains 580 images and the test set includes 85 images.

5. Figure 4, the meaning of the red bounding box should be explained in the figure caption.

Response: Rectangular bounding boxes have been removed from the new results.

6. A core assumption of this manuscript is the scale invariance of the CNN layers. Have the authors tested this using the available datasets? For example, for the tumor in Figure 4, what happens to all trained models if we crop the image down to a sub-region containing the tumor (resampling back to the same resolution, of course)? This effectively changes the scale of the image/tumor. Is the proposed model able to correctly segment the tumor in this case?

Response: We enable input images of different scales to share the same convolutional neural network (CNN), thereby training the network to recognize features across multiple scales simultaneously. Our assumption is that tumors of varying sizes should all be identifiable by the same network model—analogous to how the human brain can accurately recognize a car both from a distance and up close.

The dataset itself already contains tumors of different sizes, and all models are capable of recognizing them; the only difference lies in the recognition performance among the models. Due to the enhanced depth of large-scale feature extraction in our network, it achieves better performance while having the fewest parameters among all models.

7. Table 3, for the ablation study, another case with just shared CNN layers (without transformers) should be added to see the impact of just the shared CNN layers.

Response: In fact, the change in metric values from the second row ("Transformer + CNN") to the third row ("Transformer + ShareCNN") directly represents the impact solely brought by the shared CNN layers.

Moreover, since convolution primarily captures local features, we specifically apply ShareCNN to large-scale inputs. If we remove the Transformer and use only ShareCNN (which would mean applying ShareCNN to small-scale inputs as well), this would be of little significance.

Reviewer #2: This paper presents LiteMRINet, a lightweight network for segmenting small brain tumors in MRI images. The method introduces a shared 10-layer CNN applied across multiple input scales to enhance local feature extraction while minimizing parameter growth. Additionally, Transformer modules are used on low-resolution feature maps to capture global context. The architecture adopts a U-Net-style decoder and is evaluated on the LGG Segmentation and BraTS21 datasets, demonstrating competitive segmentation performance with significantly fewer parameters compared to existing models. However, before publication, the following concerns should be addressed.

1. Did the study include validation sets? These are typically essential for tuning and early stopping.

Response: In this experiment, we used a training set and a test set. For early stopping, we adopted the criterion that training is halted if the loss function value does not decrease for 20 consecutive epochs.
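The patience-based criterion described in the response can be sketched as follows. This is a minimal illustration, not the authors' actual training code; the function name and the loss sequence are hypothetical placeholders, and the rule assumed is exactly the one stated above: stop once the loss has not decreased for 20 consecutive epochs.

```python
def train_with_early_stopping(loss_per_epoch, patience=20):
    """Return the epoch index at which training would stop under a
    patience-based rule: halt when the loss has not decreased for
    `patience` consecutive epochs."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(loss_per_epoch):
        if loss < best_loss:
            # Loss decreased: record the new best and reset the counter.
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch  # no decrease for `patience` epochs: stop here
    return len(loss_per_epoch) - 1  # trained to completion
```

Note that this rule uses the training loss rather than a held-out validation loss, which is consistent with the authors' statement that only a training set and a test set were used.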

2. Did the authors conduct experiments to test the generalization ability of the proposed method?

Response: In our experimental results, the metric values of all models on the test set represent a generalization outcome of the trained models.

3. The usage of “U-Net” and “UNet” should be standardized; “U-Net” is the more widely accepted form.

Response: Thank you for your suggestion. To avoid having two hyphens ("-") in terms like "LeViT-UNet" and "MA-UNet", we have revised all instances of "U-Net" to "UNet".

4. The notation “128×128@3” used to describe data shape is uncommon and should be revised to a more standard format like 128×128×3.

Response: Thank you for your suggestion; we have revised the notation accordingly.

5. The first paragraph of the introduction discusses tumor characteristics and clinical context with minimal referencing. I would suggest adding more citations to support those statements.

Response: Thank you for your suggestion; we have revised the first two paragraphs of the introduction.

Attachments

Submitted filename: response.docx
Decision Letter - Xiaohui Zhang, Editor

Enhanced Local Feature Extraction of Lite Network with Scale-Invariant CNN for Precise Segmentation of Small Brain Tumors in MRI

PONE-D-25-30235R1

Dear Dr. Kang,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up to date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Xiaohui Zhang

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have addressed all my concerns. The paper is technically sound and well-written. Thus, I recommend the manuscript for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

**********

Formally Accepted
Acceptance Letter - Xiaohui Zhang, Editor

PONE-D-25-30235R1

PLOS ONE

Dear Dr. Kang,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Xiaohui Zhang

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.