Peer Review History

Original Submission: January 23, 2025
Decision Letter - Shuai Ren, Editor

PONE-D-25-03633
The Role of Novel Biomarkers in the Early Diagnosis of Pancreatic Cancer: A Systematic Review and Meta-Analysis
PLOS ONE

Dear Dr. Song,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by April 25, 2025, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Shuai Ren

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that your Data Availability Statement is currently as follows: [All relevant data are within the manuscript and its Supporting Information files.]

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

3. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information .

5. As required by our policy on Data Availability, please ensure your manuscript or supplementary information includes the following:

A numbered table of all studies identified in the literature search, including those that were excluded from the analyses. 

For every excluded study, the table should list the reason(s) for exclusion. 

If any of the included studies are unpublished, include a link (URL) to the primary source or detailed information about how the content can be accessed.

A table of all data extracted from the primary research sources for the systematic review and/or meta-analysis. The table must include the following information for each study:

Name of data extractors and date of data extraction

Confirmation that the study was eligible to be included in the review. 

All data extracted from each study for the reported systematic review and/or meta-analysis that would be needed to replicate your analyses.

If data or supporting information were obtained from another source (e.g. correspondence with the author of the original research article), please provide the source of data and dates on which the data/information were obtained by your research group.

If applicable for your analysis, a table showing the completed risk of bias and quality/certainty assessments for each study or outcome. Please ensure this is provided for each domain or parameter assessed. For example, if you used the Cochrane risk-of-bias tool for randomized trials, provide answers to each of the signalling questions for each study. If you used GRADE to assess certainty of evidence, provide judgements about each quality-of-evidence factor. This should be provided for each outcome.

An explanation of how missing data were handled.

This information can be included in the main text, supplementary information, or relevant data repository. Please note that providing these underlying data is a requirement for publication in this journal, and if these data are not provided your manuscript might be rejected.

Additional Editor Comments:

In the Introduction, the authors should emphasize the dire need for diagnosing pancreatic cancer at an early stage; the following references could be added: doi: 10.4251/wjgo.v16.i4.1256; doi: 10.1177/20552076231179007.

Many other biomarkers are useful for the diagnosis of early-stage pancreatic cancer; the following references could be added: doi:10.1002/cam4.5296; doi:10.1177/10732748251316602.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: There are several inconsistencies in the font size and type in the manuscript. Check and fix them.

When you reference a figure or table in-text, do not include the description of that figure or table.

Lines 96-113: I think you could make a table for this or list the points in bullets to make it easier to read through.

Line 104: State the full form of QUADAS-2

Study selection: How did you assess interrater reliability?

Lines 147-149: What guided setting a benchmark score of 3 for inclusion?

Line 176: “Insufficient sample size” could be regarded as vague. What sample size was considered insufficient and why?

Line 181: “Tables 1 and Table2” fix the typo.

Lines 183-186: These sentences are confusing.

Line 211: Kindly list the major proteins identified in these studies.

Line 307: You can include the funnel plot for this finding.

Line 296: Kindly list the major metabolites identified in these studies.

Reviewer #2: Manuscript ID: PONE-D-25-03633

Thank you for the opportunity to review this manuscript. The manuscript provides a comprehensive overview of biomarkers for the early diagnosis of pancreatic cancer. However, several aspects of the methodology require further elaboration.

1. The authors pooled data by biomarker type. However, even within each type, there is variability in the specific biomarkers and their cut-off values. I am not a subject expert, but I wonder how meaningful it is to combine such different biomarkers.

2. The authors state that studies with a QUADAS-2 score of 3 or higher were included. QUADAS-2 has four domains for bias and three for applicability. Could you clarify how the scoring was performed? Does a total score of 7 indicate inclusion, or was the focus only on bias? Since bias and applicability have different implications, combining them does not seem appropriate.

3. Please provide the QUADAS-2 assessment results for each study.

4. Please specify the search dates.

5. The authors report the combination of CA-19 and other biomarkers. Could you elaborate on how these were combined and whether this varied across studies? Was a positive result defined as either marker being positive or both being positive? For example, Yamada (2019) reported the sensitivity and specificity of bombinin anti-3’-sialyllactose IgG and CA-19 using predictive equations rather than a simple combination of the two tests. How was this handled in the analysis? The text provides data for calculating sensitivity based on a simple definition of positivity (resulting in a sensitivity of 93%, as in this review, with a positive result defined as either being positive), but there is no information to calculate the specificity of the combined screening using the same definition. Kashiro et al., on the other hand, appear to have combined two binary results (presumably defining positivity as either test being positive).

6. Please report the full results of the sensitivity analysis in the supplementary material. The manuscript states that sensitivity analyses did not show significant changes, yet the discussion mentions that "sensitivity analyses indicated heterogeneity might be associated with very large or small sample sizes." This seems inconsistent.

7. Exclusion criteria: Please clarify what is meant by "studies with excessive heterogeneity in design, patient population, biomarker detection method." How were these defined? Additionally, what threshold was used for defining "small-scale studies"?

8. Could you report how pancreatic cancer was excluded in each study?

9. The manuscript states as a strength that "advanced statistical models accommodated variability and calculated pooled estimates." This is too general. The analysis appears to use a standard random-effects meta-analysis. Please clarify.

10. The manuscript states that "most studies (n = 27) evaluated more than one biomarker," yet Table 2 reports only a single set of numbers per study. Please clarify this discrepancy.

11. The manuscript states that "all relevant data are within the manuscript and its Supporting Information files," but the numbers of TP, FP, FN, and TN needed to reproduce Figure 7 are missing. Please provide these.

12. In Table 2, what does "NC" stand for in the control group?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.

Revision 1

Response to Editor

Dear Editor,

Thank you for the opportunity to revise our manuscript. We have carefully revised the manuscript in accordance with the reviewers' suggestions and adjusted the manuscript formatting to match the PLOS ONE template, ensuring it fully complies with the journal's style requirements.

All raw data necessary for reproducing our findings have been provided within the main manuscript and supplementary materials. Additionally, we have included the names of the data extractors (Line 153) and specified the date of data extraction (Line 140) in the main text. A comprehensive table showing the complete risk of bias and quality assessment for each included study is provided in the supplementary material (Supplementary Table 3).

In response to the editorial requirements, we have also made the following additions to the supplementary files:

A numbered table of all studies identified through the literature search, including those excluded from the analyses along with the reasons for exclusion;

A complete table of all data extracted from the primary research sources for the systematic review and/or meta-analysis, including the names of data extractors, dates of extraction, and confirmation of study eligibility;

Moreover, an explanation of how missing data were handled has been incorporated into the manuscript (Lines 170–174).

Furthermore, we have emphasized the urgent clinical need for early diagnosis of pancreatic cancer in the Introduction section (Lines 69-72) and highlighted the potential utility of numerous novel biomarkers for diagnosing pancreatic cancer at an early stage (Lines 86-94).

We hope these revisions have sufficiently addressed the reviewers' concerns and improved the quality of our manuscript.

Thank you again for considering our manuscript for publication in PLOS ONE. We look forward to your favorable response.

Yours sincerely,

Yani Song

On behalf of all co-authors 

Response to Reviewer #1

Dear Reviewer #1,

Thank you for your thorough review and valuable comments. We have carefully addressed each point you raised and implemented the suggested revisions to enhance the clarity, consistency, and presentation of our manuscript. Below, please find our point-by-point responses:

1. There are several inconsistencies in the font size and type in the manuscript. Check and fix them.

Response: We appreciate your observation. We have thoroughly reviewed the manuscript and corrected any inconsistencies in font size and type. These adjustments ensure a more uniform and professional appearance throughout the text.

2. When you reference a figure or table in the text, do not include the description of that figure or table.

Response: Thank you for the clarification. We have removed any descriptive text when referencing figures and tables in the main body of the manuscript. Only the figure or table number is now included.

3. Line 96-113: I think you could make a table for this or list the points in bullets to make it easier to read through.

Response: Thank you for this helpful suggestion. In response, we revised the content around lines 96–113 by listing the key points in bullet format (lines 106–124). We believe this revision improves readability and clarity for the reader.

4. Line 104: State the full form of QUADAS-2

Study selection: How did you assess interrater reliability?

Response: We have now stated the full form of QUADAS-2 (Lines 114-115) and explained each of its domains in greater detail (Supplementary Tables 2 and 3). Additionally, we clarified how we applied this tool to assess risk of bias and applicability concerns in the included studies: two independent reviewers assessed each study using the QUADAS-2 tool, and disagreements were resolved through discussion or consultation with a third reviewer (lines 177–186).

5. Line 147-149: What guided setting a benchmark score of 3 for inclusion?

Response: We have clarified the rationale for using a QUADAS-2 threshold. In our revised text (lines 177–186), we explain that studies with high risk of bias in at most one domain (out of four bias domains) were included. The original reference to a “3-point” standard has been removed to avoid confusion. Instead, we specify that at least three domains of bias must not be rated as high risk for a study to be included.

6. Line 176: “Insufficient sample size” could be regarded as vague. What sample size was considered insufficient and why?

Response: We have replaced the phrase “insufficient sample size” with a clear cutoff of fewer than 30 participants (lines 212-213). Studies with fewer than 30 participants were excluded due to concerns about statistical power and reliable estimation of diagnostic performance.

7. Line 181: “Tables 1 and Table2” fix the typo.

Response: We have corrected this typographical error in line 219.

8. Line 183-186: These sentences are confusing.

Response: We have revised the text in lines 221–227. We now clearly distinguish between the healthy control (HC) group and the non-cancer control (NC) group, which includes pancreatitis and other benign pancreatic diseases, in the included studies.

9. Line 211: Kindly list the major proteins identified in these studies.

Response: We have expanded Table 3 to include the specific protein biomarkers evaluated in each study (line 245). Each biomarker is now clearly listed, providing detailed insight into the diagnostic markers investigated.

10. Line 307: You can include the funnel plot for this finding.

Response: We have included a funnel plot (Fig 6.E) for the results related to potential publication bias (lines 370–371). The funnel plot is accompanied by Deeks’ asymmetry test (p = 0.04), and we note in the Discussion that the relevant findings should be interpreted with caution due to potential bias.

11. Line 296: Kindly list the major metabolites identified in these studies.

Response: As with the proteins, we have updated Table 3 to include all major metabolites reported in the relevant studies (line 245).

We sincerely appreciate your detailed review and constructive feedback. Your suggestions have significantly enhanced the clarity, coherence, and scientific rigor of our manuscript. We hope our revisions meet your expectations. Please let us know if there are any additional concerns or suggestions.

Kind regards,

Yani Song

On behalf of all co-authors

Response to Reviewer #2

Dear Reviewer #2,

Thank you for your thoughtful review and for recognizing the comprehensive scope of our work on early diagnostic biomarkers for pancreatic cancer. We value your insightful comments on the methodology and have addressed each point in detail below:

1. The authors pooled data by biomarker type. However, even within each type, there is variability in the specific biomarkers and their cut-off values. I am not a subject expert, but I wonder how meaningful it is to combine such different biomarkers.

Response: We sincerely appreciate the reviewer’s insightful observation regarding the heterogeneity within biomarker categories. This is a critical consideration, and we acknowledge that pooling studies with distinct biomarkers and varying thresholds introduces potential limitations. Below, we clarify our rationale and address the validity of this approach:

(1). Rationale for Pooling by Biomarker Type

Our primary goal was to evaluate the overall diagnostic potential of broad biomarker categories (e.g., miRNAs, proteins) for early pancreatic cancer detection. While individual biomarkers within a category may differ mechanistically, grouping them by type allows us to:

(1.1). Assess the overall performance of biomarker categories (e.g., miRNAs vs. proteins), guiding future research priorities.

(1.2). Compare liquid biopsy-based biomarker classes (e.g., ctDNA vs. metabolites) to determine if they offer distinct advantages in sensitivity or specificity.

(1.3). Provide clinicians with actionable insights on which biomarker categories might complement existing diagnostic tools like CA19-9.

(2). Addressing Heterogeneity Within Categories

We agree that variability in biomarkers and their thresholds can affect pooled estimates. To mitigate this, we used a random-effects model, which accounts for heterogeneity by assuming true effect sizes vary across studies. Subgroup analyses were performed, such as separating studies using ELISA for protein biomarkers from those using mass spectrometry (Fig. 9). These analyses revealed no significant differences in performance, suggesting that methodological variability had limited impact. We explicitly reported high I² values (e.g., 83-98% for proteins and metabolites), emphasizing the need for cautious interpretation.
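Since the response refers to a standard random-effects model, a minimal sketch of the DerSimonian-Laird estimator (the usual method-of-moments random-effects approach) may help illustrate what "accounting for heterogeneity" means computationally. This is an illustration only, with hypothetical study values, not the authors' actual analysis:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effects (e.g., logit sensitivities) with the
    DerSimonian-Laird random-effects estimator."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I^2 in percent
    return pooled, tau2, i2

# Hypothetical logit-sensitivities and within-study variances (four studies)
logits = [1.8, 1.2, 2.1, 0.9]
variances = [0.05, 0.10, 0.08, 0.12]
pooled_logit, tau2, i2 = dersimonian_laird(logits, variances)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to a proportion
```

A nonzero tau² widens the pooled estimate's uncertainty rather than assuming one common true effect, which is the sense in which the model "accommodates variability" across studies.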

(3). Clinical and Biological Justification

Despite variability, biomarkers within a class often share biological relevance:

miRNAs: Many regulate overlapping oncogenic pathways and are consistently dysregulated in pancreatic cancer.

Proteins: While individual markers vary (e.g., S100P, MIC-1), they often reflect tumor-associated inflammation or immune evasion.

Metabolites: Altered lipid or amino acid profiles broadly indicate metabolic reprogramming, a hallmark of cancer.

Pooling these biomarkers provides a holistic view of their collective diagnostic utility, even if individual candidates require further validation.

(4). Limitations and Future Directions

We recognize the limitations of this approach and have revised the Discussion to emphasize:

The need for standardized biomarker panels: Future studies should prioritize validating specific combinations rather than isolated markers.

The importance of harmonizing thresholds: Cut-off values should be optimized in large cohorts to reduce variability. We acknowledge that individual biomarkers may exhibit different cut-off values, sensitivities, and specificities, which could limit the generalizability of pooled estimates.

While pooling biomarkers by category introduces heterogeneity, this approach aligns with the study’s exploratory aim to compare broad diagnostic strategies. We agree that future research should focus on standardizing specific biomarker panels, and we thank the reviewer for raising this important point.

We have further emphasized this limitation in our Discussion (Lines 527-538).

2. The authors state that studies with a QUADAS-2 score of 3 or higher were included. QUADAS-2 has four domains for bias and three for applicability. Could you clarify how the scoring was performed? Does a total score of 7 indicate inclusion, or was the focus only on bias? Since bias and applicability have different implications, combining them does not seem appropriate.

Response: We agree with your concern that combining bias and applicability domains is inappropriate. In our revised manuscript (lines 177–186), we clarify that we did not merge these domains into a single score. Instead, we focused primarily on the four bias domains. Studies with at most one domain rated as high risk of bias in these four areas were included. We have removed references to a “3-point” system and now explain precisely how we assessed and reported each domain’s outcome. The results are provided in Supplementary Table 3.
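The inclusion rule described in this response (at most one of the four QUADAS-2 bias domains rated high risk, with applicability reported separately) can be sketched as follows; the dictionary encoding and domain strings are illustrative, not the authors' actual data format:

```python
# The four QUADAS-2 risk-of-bias domains (applicability is assessed separately)
BIAS_DOMAINS = ("patient selection", "index test", "reference standard", "flow and timing")

def eligible(ratings):
    """Include a study only if at most one bias domain is rated 'high'."""
    high = sum(1 for d in BIAS_DOMAINS if ratings[d] == "high")
    return high <= 1

# One high-risk domain: included
included = eligible({"patient selection": "low", "index test": "high",
                     "reference standard": "low", "flow and timing": "unclear"})
# Two high-risk domains: excluded
excluded = eligible({"patient selection": "high", "index test": "high",
                     "reference standard": "low", "flow and timing": "low"})
```

Keeping the rule to the bias domains alone, as here, avoids conflating bias with applicability concerns.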

3. Please provide the QUADAS-2 assessment results for each study.

Response: We have now included the full QUADAS-2 evaluation for each study in Supplementary Table 3.

4. Please specify the search dates.

Response: We have added the exact date of our literature search (June 1, 2024) to the Methods section (line 140), ensuring greater transparency regarding our search strategy.

5. The authors report the combination of CA-19 and other biomarkers. Could you elaborate on how these were combined and whether this varied across studies? Was a positive result defined as either marker being positive or both being positive? For example, Yamada (2019) reported the sensitivity and specificity of bombinin anti-3’-sialyllactose IgG and CA-19 using predictive equations rather than a simple combination of the two tests. How was this handled in the analysis? The text provides data for calculating sensitivity based on a simple definition of positivity (resulting in a sensitivity of 93%, as in this review, with a positive result defined as either being positive), but there is no information to calculate the specificity of the combined screening using the same definition. Kashiro et al., on the other hand, appear to have combined two binary results (presumably defining positivity as either test being positive).

Response: Thank you for raising this important issue. To address your concern, we have carefully re-reviewed and re-classified the included studies concerning the combined analysis of CA19-9 with other biomarkers. Based on the methods reported in these studies, we categorized the combined analyses into two approaches:

Either-positive principle:

In these studies, sensitivity was calculated such that if either CA19-9 or the novel biomarker was positive, the overall test was considered positive. Conversely, specificity was calculated by requiring both CA19-9 and the novel biomarker to be negative to classify a case as negative. In other words, this approach applied an “OR” principle for positivity and an “AND” principle for negativity.

Predictive modeling or machine-learning methods:

Another group of studies utilized regression equations, scoring systems, or machine-learning algorithms (such as logistic regression, multi-marker panels, or other algorithms) to estimate the sensitivity and specificity of combined testing. Typically, an integrated score or predicted probability was used as a threshold: results exceeding this threshold were classified as positive, while those below were classified as negative. Hence, sensitivity and specificity in these studies were not calculated by simple binary combinations but rather determined directly through predictive modeling or receiver operating characteristic (ROC) analysis.

For these studies, we first conducted an overall meta-analysis to estimate the diagnostic performance of biomarkers combined with CA19-9. Subsequently, subgroup analyses were conducted according to the different combination methods, and we assessed whether significant differences existed at the subgroup level.
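To make the either-positive rule concrete, here is a minimal sketch with hypothetical per-patient results (not data from any included study). It shows how the OR rule for positivity raises sensitivity relative to either test alone while lowering specificity:

```python
def combine_either_positive(test_a, test_b):
    """Either-positive rule: the combined test is positive if A OR B is
    positive, and negative only if both A AND B are negative."""
    return [a or b for a, b in zip(test_a, test_b)]

def sens_spec(predicted, truth):
    """Sensitivity and specificity of binary predictions against truth."""
    tp = sum(p and t for p, t in zip(predicted, truth))
    fn = sum((not p) and t for p, t in zip(predicted, truth))
    tn = sum((not p) and (not t) for p, t in zip(predicted, truth))
    fp = sum(p and (not t) for p, t in zip(predicted, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-patient results (True = positive); first four are cases
truth  = [True, True, True, True, False, False, False, False]
ca199  = [True, False, True, False, False, True, False, False]
marker = [False, True, True, False, False, False, True, False]

combined = combine_either_positive(ca199, marker)
sens, spec = sens_spec(combined, truth)
# Each test alone: sensitivity 0.50, specificity 0.75
# Combined:        sensitivity 0.75, specificity 0.50
```

The predictive-modeling studies cannot be reduced to this rule, which is why the subgroup analysis by combination method described above is needed.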

Relevant methodological clarifications, along with the results obtained and their implications in the discussion, have been added to the manuscript.

We sincerely appreciate your highlighting this matter, which has greatly contributed to enhancing the clarity and rigor of our manuscript. Thank you again for your constructive feedback (Lines 396-431; Lines 490-504; Supplementary Figs 1 and 2).

6. Please report the full results of the sensitivity analysis in the supplementary material. The manuscript states that sensitivity analyses did not show significant changes, yet the discussion mentions that "sensitivity analyses indicated heterogeneity might be associated with very large or small sample sizes." This seems inconsistent.

Response: We have reviewed and consolidated our sensitivity analysis findings. All results, including leave-one-out analysis, have been added to the Supplementary Material (Supplementary Fig 7). We clarified that while small or large sample sizes could theoretically introduce heterogeneity, the actual sensitivity analyses did not reveal significant changes (lines 523–524). We thank you for pointing out this discrepancy, which we have now resolved.

7. Exclusion criteria: Please clarify what is meant by "studies with excessive heterogeneity in design, patient population, biomarker detection method." How were these defined? Additionally, what threshold was used for defining "small-scale studies"?

Response: We have clarified that “excessive heterogeneity” refers to high risk of bias or inapplicability in multiple QUADAS-2 domains, leading to exclusion. Furthermore, we now explicitly define “small-scale studies” as those enrolling fewer than 30 participants, as they lack sufficient statistical power for reliable conclusions. (lines 121–124)

8. Could you report how pancreatic cancer was excluded in each study?

Response: We have added a detailed explanation in the Methods section, clarifying how each included study ensured that enrolled participants in the control arm did not have pancreatic cancer. (lines 125–137)

Attachments

Submitted filename: Response to Reviewers.pdf
Decision Letter - Shuai Ren, Editor

The Role of Novel Biomarkers in the Early Diagnosis of Pancreatic Cancer: A Systematic Review and Meta-Analysis

PONE-D-25-03633R1

Dear Dr. Song,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up to date by logging in to Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Shuai Ren

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Congratulations to all the authors and thank you for addressing all comments and suggestions during the review process.

Formally Accepted
Acceptance Letter - Shuai Ren, Editor

PONE-D-25-03633R1

PLOS ONE

Dear Dr. Song,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so it may take a few days for us to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Shuai Ren

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.