Peer Review History

Original Submission
July 25, 2025
Decision Letter - Jinhao Liang, Editor

PONE-D-25-40072
Adaptive Output Steps: FlexiSteps Network for Dynamic Trajectory Prediction
PLOS ONE

Dear Dr. Niu,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 19 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jinhao Liang

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

4. Please include a separate caption for each figure in your manuscript.

5. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise. 

Additional Editor Comments:

Please revise the manuscript in a point-by-point manner, with particular emphasis on addressing the reviewers’ concerns regarding novelty from prior work.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: 1. What is the research subject of this article - vehicle trajectory prediction or pedestrian trajectory prediction? The author needs to be clear.

2. The innovation of this article is not clear. In the contribution section, the author fails to distinguish the differences between the methods presented in this article and those of previous studies, nor does he highlight the problems that this method can solve.

3. The font in Fig.2 is not uniform. The picture needs to be adjusted. The color of the arrows needs to be darkened. The meanings of some variables in the picture are unclear. Besides, what is the input of the scoring mechanism? It cannot be seen from the picture. Overall, Fig.2 is confusing.

4. The full names of APM and DD should be provided for the first time. Additionally, I haven't found the structural details of APM and DD, which prevents me from evaluating the innovativeness of these two modules.

5. The experimental part is insufficient and the comparison methods used are limited. The author claims that the method in this paper improves efficiency, but no analysis of computational cost has been presented.

6. Many typos exist in this work. Please double-check it.

Reviewer #2: The authors propose FlexiSteps Network (FSN), a novel framework for trajectory prediction that dynamically adjusts the number of future time steps predicted based on contextual conditions. This addresses a critical limitation of fixed-horizon models and has strong potential for real-world deployment in autonomous systems.

The motivation is excellent, the use of Fréchet distance for geometric-temporal evaluation is well-justified, and experiments on Argoverse and INTERACTION add credibility. However, several revisions are needed to strengthen the technical contribution and ensure reproducibility:

Clarify the Adaptive Prediction Module (APM):

What is its architecture (e.g., MLP, Transformer)?

How is it trained? Are labels for optimal step length generated from ground truth trajectories or heuristics?

Is it pre-trained independently or jointly with the decoder?

Define the scoring mechanism explicitly:

Provide the mathematical formula for the score (e.g., Score = Fréchet Distance / Step Length or a weighted combination). Explain how the trade-off is calibrated and whether it was tuned on a validation set.

Demonstrate adaptivity empirically:

Include statistics (e.g., histogram or boxplot) showing the distribution of predicted step lengths across different scene types. Do complex scenes trigger longer horizons? This is essential to validate the core claim.

Include ablation studies:

Compare FSN with:

A version using fixed output steps (FSN-Fixed).

A variant without knowledge distillation.

This will isolate the contribution of each component.

Improve baseline comparisons:

Ensure that baseline models (e.g., HiVT, HPNet) are re-implemented fairly and report performance against published leaderboard results.

Report computational efficiency:

Include inference time (ms/sample) for FSN vs. fixed-step models to support claims of efficiency gains.

Release code:

Commit to making the source code and trained models publicly available to enhance reproducibility and impact.

With these revisions, this paper will be a strong contribution to the field of dynamic trajectory prediction.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Reviewer #1: 1. What is the research subject of this article - vehicle trajectory prediction or pedestrian trajectory prediction? The author needs to be clear.

Response: We appreciate the reviewer for raising this important point. To clarify, the primary research subject of this work is vehicle trajectory prediction in autonomous driving scenarios. This is reflected in the datasets used (Argoverse and INTERACTION), which mainly consist of vehicle trajectories in urban traffic environments. That said, our proposed FlexiSteps Network (FSN) is designed in a general, agent-agnostic manner. The Adaptive Prediction Module (APM) and Dynamic Decoder (DD) operate on latent representations of dynamic agents, and therefore the framework can naturally be extended to other agents such as pedestrians or cyclists.

2. The innovation of this article is not clear. In the contribution section, the author fails to distinguish the differences between the methods presented in this article and those of previous studies, nor does he highlight the problems that this method can solve.

Response: We thank the reviewer for pointing this out. In plain terms, our work aims to achieve truly dynamic trajectory prediction: previous models output a fixed number of prediction steps regardless of the input. We have rewritten this explanation in paragraph 2 of the Introduction. Additionally, for the same input sequence, longer prediction horizons lead to greater instability and lower confidence, while shorter horizons leave less reaction time for the downstream decision system. This raises the question of a metric for judging the optimal prediction length, which we now state explicitly in paragraph 5 of the Introduction. In short, our work gives the model the ability to adaptively choose its output horizon, rather than producing the same prediction length for all inputs according to a human-set value. We have also reorganized the language of the contribution section accordingly.

3. The font in Fig.2 is not uniform. The picture needs to be adjusted. The color of the arrows needs to be darkened. The meanings of some variables in the picture are unclear. Besides, what is the input of the scoring mechanism? It cannot be seen from the picture. Overall, Fig.2 is confusing.

Response: We apologize for the confusion from this figure. We have modified Fig. 2 by unifying the fonts and changing all arrows to black for better clarity and visual consistency. We have enhanced the figure caption and manuscript text to provide clearer explanations of the variables, ensuring that their meanings are now explicitly described. As shown in the upper-right part of Fig. 2, the scoring mechanism takes two inputs: the prediction results from the backbone and the corresponding ground-truth trajectories. To avoid confusion, we have further clarified this in the revised figure and provided a more detailed explanation in Fig. 3 as well as in the Method section (Scoring Mechanism subsection).

4. The full names of APM and DD should be provided for the first time. Additionally, I haven't found the structural details of APM and DD, which prevents me from evaluating the innovativeness of these two modules.

Response: We thank the reviewer for pointing this out. In the revised manuscript, we have made the following changes:

(1) Full names provided: We now explicitly spell out the full names Adaptive Prediction Module (APM) and Dynamic Decoder (DD) when they first appear in the text (see Section Method).

(2) Adaptive Prediction Module (APM): We now detail both its training and inference stages. The APM is a pre-trained two-layer MLP classifier/regressor, which takes the encoded latent features of agents and their contexts as input, and outputs the optimal prediction step. The training procedure is illustrated in Fig.4, and we describe the joint classification and regression losses used to supervise the module.

(3) Dynamic Decoder (DD): We clarify that the DD consists of multiple specialized sub-decoders $\{\phi_{D^f}\}_{f=5}^{F}$, each responsible for a specific output step length. During training, the appropriate sub-decoder is selected according to the APM's predicted step, and during inference, only the corresponding sub-decoder is activated. We also emphasize that each decoder's parameters are updated independently, and that we use KL-divergence regularization to transfer knowledge between different prediction horizons.

These additions ensure that the structural and functional designs of both modules are transparent and evaluable. We believe this will help highlight their novelty and contribution to improving the flexibility of trajectory prediction.
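The select-then-activate behavior of the Dynamic Decoder can be sketched as follows. This is a minimal illustration only: the sub-decoders here are toy stand-ins (a constant-velocity rollout), whereas in the paper each $\phi_{D^f}$ is a learned network; the horizon range 5..F follows the notation above.

```python
def make_sub_decoder(f):
    """Toy stand-in for a learned sub-decoder phi_{D^f}: emits an
    f-step trajectory from a latent/state tuple z = (x, y)."""
    def decode(z):
        # Constant-velocity rollout along x, purely for illustration.
        return [(z[0] + t, z[1]) for t in range(f)]
    return decode

# One specialized sub-decoder per supported horizon f = 5, ..., F.
F = 30
sub_decoders = {f: make_sub_decoder(f) for f in range(5, F + 1)}

def dynamic_decode(z, f_pred):
    """Only the sub-decoder matching the APM's predicted step is run;
    the others stay inactive at inference time."""
    return sub_decoders[f_pred](z)
```

For example, `dynamic_decode((0.0, 1.0), 12)` activates only the 12-step sub-decoder and returns a 12-point trajectory.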

5. The experimental part is insufficient and the comparison methods used are limited. The author claims that the method in this paper improves efficiency, but no analysis of computational cost has been presented.

Response: We sincerely appreciate the reviewer’s insightful comments. In response:

(1) On the sufficiency of experiments:

Following your suggestion, we have conducted additional experiments to strengthen the empirical validation. Specifically, we included results from the FSN-Fixed baseline and further reported computational efficiency metrics to provide a more comprehensive comparison.

(2) On the limited comparison methods:

We acknowledge the concern. However, we would like to note that trajectory prediction for autonomous driving has only recently emerged as an active research area. Compared with fields such as computer vision or natural language processing, the number of closely related baseline models is still relatively limited. The two models we selected for comparison are both recent state-of-the-art works published at CVPR 2024 and NeurIPS 2024, which we believe represent strong and relevant baselines for fair evaluation.

(3) On computational efficiency analysis:

After carefully reviewing the initial submission, we noticed that parts of the manuscript were inadvertently mixed with earlier draft content, which may have caused confusion. We sincerely apologize for this oversight. Initially, we expected that our method might improve computational efficiency; however, as our study progressed, we found that the additional modules inevitably increased computational cost. The final experiments indicate that while the method enhances prediction accuracy, it does not yield significant efficiency gains over conventional frameworks, as shown in Fig. 3. Accordingly, we have revised the manuscript to state this point accurately, specifically in the Abstract, the Introduction, and the Computational Efficiency Analysis subsection of the Main Results.

We hope these revisions address the reviewer’s concerns and improve the clarity and completeness of the experimental section.

6. Many typos exist in this work. Please double-check it.

Response: Thank you for your careful reading of our manuscript. We have performed a thorough, line-by-line proofreading of the entire manuscript (main text, equations, figure/table captions, and headings), followed by an automated spell/grammar pass.

Reviewer #2: The authors propose FlexiSteps Network (FSN), a novel framework for trajectory prediction that dynamically adjusts the number of future time steps predicted based on contextual conditions. This addresses a critical limitation of fixed-horizon models and has strong potential for real-world deployment in autonomous systems.

The motivation is excellent, the use of Fréchet distance for geometric-temporal evaluation is well-justified, and experiments on Argoverse and INTERACTION add credibility. However, several revisions are needed to strengthen the technical contribution and ensure reproducibility:

Clarify the Adaptive Prediction Module (APM):

What is its architecture (e.g., MLP, Transformer)?

Response: Thank you for your careful check. The APM is designed as a two-layer Multilayer Perceptron (MLP), which decodes the latent features produced by the baseline encoder. We intentionally avoided using a more complex Transformer structure to minimize computational overhead during inference. This lightweight design allows APM to be integrated as a plug-and-play module without significantly increasing runtime costs. We also acknowledge that deeper architectures (e.g., Transformer-based) might capture richer associations between contextual features and optimal prediction lengths. This potential extension is explicitly mentioned in the Conclusion as a future research direction.
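To illustrate how lightweight such a head is, here is a minimal numpy sketch of a two-layer MLP that maps a latent feature to a distribution over candidate step lengths. The sizes (128-d input, 64-d hidden, horizons 5..30) are illustrative assumptions, not the paper's actual dimensions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: 128-d latent features from the
# backbone encoder, and horizons f in {5, ..., 30} -> 26 classes.
D_IN, D_HID, N_HORIZONS, MIN_F = 128, 64, 26, 5

W1 = rng.normal(0.0, 0.02, (D_IN, D_HID))
b1 = np.zeros(D_HID)
W2 = rng.normal(0.0, 0.02, (D_HID, N_HORIZONS))
b2 = np.zeros(N_HORIZONS)

def apm_forward(z):
    """Two-layer MLP head: latent feature z -> distribution over horizons."""
    h = np.maximum(z @ W1 + b1, 0.0)           # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(-1, keepdims=True))
    return p / p.sum(-1, keepdims=True)        # softmax over step lengths

def predicted_step(z):
    """Map the argmax class index back to a step length in {5, ..., 30}."""
    return MIN_F + int(np.argmax(apm_forward(z)))
```

A head of this size adds only a few thousand parameters, which is consistent with the plug-and-play, low-overhead design choice described above.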

How is it trained? Are labels for optimal step length generated from ground truth trajectories or heuristics?

Response: As described in the Method section, the training of the APM proceeds in three stages. First, we collect multi-modal prediction results from a baseline model trained with different fixed output lengths. Second, these predictions are compared with the corresponding ground-truth trajectories, and the scoring mechanism (based on Fréchet distance combined with prediction length) is applied to identify the optimal prediction step length for each agent. Third, these optimal step lengths serve as supervision labels for training the APM. Thus, the labels are derived from ground-truth trajectories rather than heuristics.

Is it pre-trained independently or jointly with the decoder?

Response: The APM is pre-trained independently using the procedure described above. It learns to map encoded latent features to optimal prediction steps. The Dynamic Decoder (DD) is then trained separately, but its training relies on the APM’s predicted step lengths during inference. In this way, APM and DD are trained independently, but are integrated sequentially in the overall FSN framework.

Define the scoring mechanism explicitly:

Provide the mathematical formula for the score (e.g., Score = Fréchet Distance / Step Length or a weighted combination). Explain how the trade-off is calibrated and whether it was tuned on a validation set.

Response: We thank the reviewer for the helpful suggestion. In the revised manuscript, we explicitly define the scoring function in the Method section (Scoring Mechanism subsection). The score is formulated as

q_i^f = \frac{d_i^f}{f}, \quad d_i^f = \mathrm{FDK}(\mu_i^f, gt_i^f),

where d_i^f denotes the Fréchet Distance Kernel (FDK) between the predicted trajectory \mu_i^f and the ground-truth trajectory gt_i^f at prediction horizon f. The optimal prediction length is then obtained as f_{gt,i} = \operatorname{argmin}_{f \in \{5,\dots,F\}} q_i^f.

To address the reviewer’s concern about the trade-off: rather than introducing additional hyperparameters, the division by the step length f naturally calibrates the balance between trajectory similarity and prediction horizon. In this design, longer horizons are only favored when their Fréchet distance is sufficiently low, ensuring principled trade-offs. We also verified the stability of this formulation on the validation set, and it consistently produced reasonable horizon selections without the need for additional tuning.
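A minimal sketch of this scoring rule, using the standard discrete Fréchet distance as a stand-in for the paper's FDK (the trajectories and horizon set below are illustrative, not from the paper's data):

```python
import math

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polylines p and q
    (lists of (x, y) points), via the standard dynamic program."""
    n, m = len(p), len(q)
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    ca = [[-1.0] * m for _ in range(n)]

    def c(i, j):
        if ca[i][j] >= 0:
            return ca[i][j]
        dij = d(p[i], q[j])
        if i == 0 and j == 0:
            ca[i][j] = dij
        elif i == 0:
            ca[i][j] = max(c(0, j - 1), dij)
        elif j == 0:
            ca[i][j] = max(c(i - 1, 0), dij)
        else:
            ca[i][j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), dij)
        return ca[i][j]

    return c(n - 1, m - 1)

def optimal_horizon(preds_by_f, gt):
    """Pick f minimising q^f = d^f / f, comparing the f-step
    prediction against the first f steps of the ground truth."""
    scores = {f: discrete_frechet(traj, gt[:f]) / f
              for f, traj in preds_by_f.items()}
    return min(scores, key=scores.get)
```

The division by f has the behavior described above: if every horizon attains the same Fréchet distance, the longest horizon wins, whereas a horizon whose distance grows with length is penalized and a shorter horizon is selected.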

Demonstrate adaptivity empirically:

Include statistics (e.g., histogram or boxplot) showing the distribution of predicted step lengths across different scene types. Do complex scenes trigger longer horizons? This is essential to validate the core claim.

Response: Thank you for the constructive suggestion. We have added empirical analyses to explicitly demonstrate FSN’s adaptivity:

1. New distribution plots:

We include a new figure titled “Prediction Steps Distribution on Argoverse validation set” (now Fig. 7 in the revised manuscript). It provides (i) a histogram of predicted step lengths aggregated over the validation set, and (ii) a 2D heatmap of predicted step lengths across agent-count bins, where the x-axis indicates the number of agents in a scene, the y-axis the predicted horizon, and color encodes frequency.

2. Scene-complexity proxy and binning:

We treat the number of agents in a scene as a complexity proxy and report results in bins (e.g., <10, 11–30, 31–50, 51+). This choice is consistent with prior practice and aligns with our data annotations.

3. Findings:

The results show a clear adaptive trend: simple scenes (few agents) preferentially yield longer horizons (25–30 steps); moderately complex scenes (11–30 agents) concentrate on medium horizons (15–20 steps) while retaining diversity; and highly complex or congested scenes (≥31 agents, especially 51+) predominantly select shorter horizons (5–15 steps).

Overall, 10–20 steps account for the majority of cases, reflecting a practical balance between stability and foresight.

In our data, more complex scenes do not trigger longer horizons. Instead, they lead to shorter horizons, which is consistent with intuition: higher interaction density and uncertainty warrant nearer-term forecasts with tighter error control, while simpler scenes allow the model to extend its predictive horizon safely.

Include ablation studies:

Compare FSN with:

Response: Thank you for your rigorous consideration. We respond to each point below.

A version using fixed output steps (FSN-Fixed).

Response: Thank you for this suggestion. In the revised manuscript, we have added FSN-Fixed as an additional baseline in all experiments (see Table 1). This variant disables adaptive step selection by fixing the output length during training and inference, thereby isolating the effect of FSN’s adaptive mechanism. The results show that while FSN-Fixed achieves some improvements over traditional baselines, the full adaptive FSN consistently delivers superior performance, highlighting the importance of dynamic step prediction.

A variant without knowledge distillation.

This will isolate the contribution of each component.

Response: As requested, our ablation study (Table 3 in the revised manuscript) already includes results for both settings: w/ KL denotes training with KL-based knowledge distillation, while w/o KL refers to the variant without knowledge distillation. This provides a direct comparison of the effect of distillation on model performance.

Improve baseline comparisons:

Ensure that baseline models (e.g., HiVT, HPNet) are re-implemented fairly and report performance against published leaderboard results.

Response: Thank you for your rigorous advice. All baseline models (HiVT and HPNet) were re-implemented and evaluated under the same experimental platform, datasets, and hyperparameter settings, as detailed in the Implementation Details section. This ensures fairness and reproducibility in the reported comparisons.

Regarding HPNet, we acknowledge that the reported results differ from the leaderboard numbers. This is because the original HPNet employs a two-stage prediction framework (Stage 1: trajectory coordinate prediction; Stage 2: refinement of predicted coordinates). Due to the large model size and our limited computational resources, we adopted only the first prediction stage, which shares the identical input-processing pipeline with FSN. This design choice was made to (i) ensure fair compatibility with FSN as a plug-and-play module, and (ii) keep training feasible within our resources.

Importantly, when we restored the full HPNet model in a separate experiment, the results matched the published leaderboard, confirming the correctness of our re-implementation. Therefore, the simplified variant used in our main experiments does not compromise the validity of our conclusions, as the goal was to assess FSN’s adaptability rather than re-benchmark HPNet in its entirety.

Moreover, we note that for plug-and-play methods, the key criterion is the relative improvement under a consistent setup, rather than matching absolute leaderboard numbers for every backbone. In fact, for two closely related adaptive-length baselines—LaKD and FlexiLength—their reported results in the original papers also show minor discrepancies from public leaderboard entries, which can arise from differences in preprocessing, map releases, hardware, random seeds, and evaluation tooling. To control for these factors, we re-implement and evaluate all methods within the same codebase, datasets, and hyperparameters, so that the comparative deltas attributable to FSN remain valid and reproducible.

Report computational efficiency:

Attachments
Submitted filename: Response to reviewers.docx
Decision Letter - Jinhao Liang, Editor

Adaptive Output Steps: FlexiSteps Network for Dynamic Trajectory Prediction

PONE-D-25-40072R1

Dear Dr. Niu,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Jinhao Liang

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewer #2:

Reviewer #3:

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: N/A

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: (No Response)

Reviewer #3: The author has addressed reviewers’ concerns well. I think this paper could be published on this journal.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

**********

Formally Accepted
Acceptance Letter - Jinhao Liang, Editor

PONE-D-25-40072R1

PLOS ONE

Dear Dr. Niu,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Jinhao Liang

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.