Peer Review History

Original Submission: September 1, 2025
Decision Letter - Haofeng Zhang, Editor

PONE-D-25-47197

Contrastive learning enhanced retrieval-augmented few-shot framework for multi-label patent classification

PLOS ONE

Dear Dr. Chen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Dec 22 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Haofeng Zhang

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please amend the manuscript submission data (via Edit Submission) to include author “Wenlong Zheng”.

3. Please amend your authorship list in your manuscript file to include author “Shikun Chen”.

4. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.


Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Summary:

This paper proposes a retrieval-augmented few-shot framework for multi-label patent classification that combines domain-specific contrastive pre-training with retrieval-enhanced few-shot learning for label decisions. Concretely, the authors build multi-label–aware contrastive objectives, rank demonstrations with a composite similarity, and then perform decomposed, category-wise predictions with adaptive thresholds. Evaluated on a curated dataset, the method reports improvements of Macro-F1/Micro-F1 over few-shot and transformer baselines.

Weakness:

- The evaluation is confined to a single domain with a custom 10-category schema. Results may not transfer to other technical domains. Evaluating on other domains would strengthen claims of generality.

- Efficiency is under-reported. The paper should provide efficiency analysis like a parameter/GFLOPs comparison to baselines.

- The figure is very vague. The authors need to refine the resolution of the figures to make them readable.

- Missing related work. Several recent works on few-shot learning are relevant and should be cited:

a. Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation (ICLR 2025)

b. Generalized Few-shot 3D Point Cloud Segmentation with Vision-Language Model (CVPR 2025)

c. In-Context Learning for Text Classification with Many Labels (ACL 2023)

Reviewer #2: 1. The "Background and related work" section covers too many research directions (six in total) and employs an overly complex classification scheme. It is recommended to streamline and consolidate this part for better clarity and focus.

2. The "Dataset" subsection should clarify whether the dataset is a novel contribution of this work or was obtained from an existing source, providing appropriate citations in either case.

3. Figures should be embedded close to their corresponding references in the main text rather than being placed later in the document. Additionally, the current image resolution is insufficient and should be improved.

4. For Table 3, please include citations for the comparative algorithms listed. Furthermore, the note stating "All improvements by our framework are statistically significant (p < 0.001)" should be supported with relevant details in the text explaining where and how statistical significance was evaluated, as well as what this implies for the results.

5. It is recommended to present the ablation study results in a tabular format, which would more clearly demonstrate the contribution of each individual component.

6. Among the six algorithms currently included in the "Overall performance comparison" section, it is necessary to supplement with 2–3 recently proposed algorithms specifically designed for patent classification.

7. The references contain formatting inconsistencies. Journal and conference names should follow standard capitalization rules (e.g., each major word capitalized). Additionally, page numbers are missing in several entries and must be provided.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures

You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation.

NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.

Revision 1

We would like to sincerely thank the reviewers for their helpful remarks and corrections. Our response to the raised points can be found below. We hope that the revised manuscript addresses all issues in a satisfactory manner; all changes made to the article are marked with blue highlighting.

Reviewer #1 comments

1. “The evaluation is confined to a single domain with a custom 10-category schema. Results may not transfer to other technical domains. Evaluating on other domains would strengthen claims of generality.”

We sincerely thank the reviewer for this important observation. We acknowledge that our evaluation is confined to UAV patent classification, and we agree that broader evaluation would strengthen generalizability claims. We selected the UAV domain deliberately as it exemplifies multi-label classification challenges through its inherent multidisciplinary nature, encompassing mechanical, electronic, software, and communication technologies within individual patents. This characteristic makes it an ideal testbed for validating our approach under realistic multi-label conditions.

In response to this valuable feedback, we have added a comprehensive limitations paragraph in the Discussion section that explicitly acknowledges the domain-specific evaluation and discusses the need for additional validation across other technological domains. The limitations discussion addresses (1) the need for cross-domain validation beyond UAV patents, (2) computational costs and API dependencies from GPT-4o integration, (3) domain-specific data requirements for contrastive pre-training, and (4) challenges with non-English patents and genuinely novel technologies. We also outline future research directions including evaluating across multiple patent categories, exploring lightweight LLM alternatives, and investigating transfer learning strategies to reduce domain-specific data requirements.

While our current focus remains on demonstrating the framework’s effectiveness within the patent domain, we believe the modular architecture and methodological contributions provide a foundation for future cross-domain evaluation.

2. “Efficiency is under-reported. The paper should provide efficiency analysis like a parameter/GFLOPs comparison to baselines.”

We thank the reviewer for highlighting this important aspect. We have added a dedicated “Computational efficiency” subsection in the Results section with comprehensive efficiency analysis. Table 4 presents detailed comparisons including parameters, GFLOPs, inference time, GPU memory, and training time across all eight baseline methods, organized into patent-specific methods (LLM-AL, PatentSBERTa) and general baselines (RoBERTa-Large, XLNet-Large, RePrompt, RAG+BERT, Prototypical, META-LSTM).

3. “The figure is very vague. The authors need to refine the resolution of the figures to make them readable.”

We appreciate this feedback. Following PLOS ONE submission guidelines, figures in the LaTeX manuscript are placeholders, as the template requires figures to be uploaded separately. We have provided all figures as high-resolution (600 DPI) files in the revised submission package to ensure optimal readability.

4. “Missing related work. Several recent works on few-shot learning are relevant and should be cited:

(a) Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation (ICLR 2025)

(b) Generalized Few-shot 3D Point Cloud Segmentation with Vision-Language Model (CVPR 2025)

(c) In-Context Learning for Text Classification with Many Labels (ACL 2023)”

We thank the reviewer for these valuable suggestions. We have incorporated all three recommended citations into the “Retrieval-augmented few-shot learning” subsection. Specifically, we cite Milios et al. (2023) on in-context learning for multi-label text classification, which directly relates to our demonstration selection approach. We also cite An et al. (2024, 2025) on few-shot 3D point cloud segmentation to acknowledge recent advances in few-shot learning across computer vision domains, noting that cross-domain insights may inform future methodological developments.

Reviewer #2 comments

1. “The "Background and related work" section covers too many research directions (six in total) and employs an overly complex classification scheme. It is recommended to streamline and consolidate this part for better clarity and focus.”

We appreciate this constructive feedback. We have streamlined the Background and related work section from six subsections to four by consolidating related content. Specifically, we merged "Multi-label patent classification," "Domain-specific pre-training and technical language models," and "UAV patent classification" into a single unified subsection titled "Patent classification with domain-specific language models." This consolidation integrates patent-specific challenges, domain adaptation strategies, and UAV testbed justification into a coherent narrative. The revised structure maintains focus on the four core methodological pillars: patent-specific models, contrastive learning, retrieval-augmented few-shot learning, and chain-of-thought reasoning.

2. “The "Dataset" subsection should clarify whether the dataset is a novel contribution of this work or was obtained from an existing source, providing appropriate citations in either case.”

We thank the reviewer for this important clarification request. We have explicitly stated in the Dataset subsection that the curated UAV patent dataset with 15,000 expert-annotated patents across ten technological categories represents a novel contribution of this work. We have also added a footnote with the GitHub repository URL for dataset access and citation.

3. “Figures should be embedded close to their corresponding references in the main text rather than being placed later in the document. Additionally, the current image resolution is insufficient and should be improved.”

We appreciate this suggestion. Following PLOS ONE submission guidelines, we have repositioned all figures to appear immediately after their first citation in the text. All figures have been provided as high-resolution (600 DPI) files in the revised submission package.

4. “For Table 3, please include citations for the comparative algorithms listed. Furthermore, the note stating "All improvements by our framework are statistically significant (p < 0.001)" should be supported with relevant details in the text explaining where and how statistical significance was evaluated, as well as what this implies for the results.”

We thank the reviewer for this important observation. We have added citations for all baseline algorithms in Table 3 and substantially expanded the statistical significance explanation in both the revised manuscript and table footnote to include detailed methodology, multiple comparison correction, and effect sizes.

5. “It is recommended to present the ablation study results in a tabular format, which would more clearly demonstrate the contribution of each individual component.”

We appreciate this suggestion. We have presented the ablation study results in a tabular format (now Table 5) in the “Ablation study” subsection.

6. “Among the six algorithms currently included in the "Overall performance comparison" section, it is necessary to supplement with 2–3 recently proposed algorithms specifically designed for patent classification.”

We appreciate this valuable suggestion. We have supplemented the comparison with two recently proposed patent-specific methods: (1) LLM-AL (Xiong et al., 2025), an iterative large language model with active learning, and (2) PatentSBERTa (Bekamiri et al., 2024), a hybrid SBERT-KNN model with patent-specific domain adaptation. We reorganized Table 3 to distinguish recent patent-specific methods from general baselines, updated statistical tests for eight comparisons, and added analysis demonstrating our framework’s improvements over these state-of-the-art patent classification approaches.

7. “The references contain formatting inconsistencies. Journal and conference names should follow standard capitalization rules (e.g., each major word capitalized). Additionally, page numbers are missing in several entries and must be provided.”

We thank the reviewer for this careful observation. We have corrected all formatting inconsistencies in the bibliography: (1) capitalized journal names following standard rules ("arXiv Preprint," "IEEE Access," "Advances in Neural Information Processing Systems"); (2) for entries lacking page numbers, we have either added them where available or appropriately classified entries as working papers or theses where page numbers do not apply. All journal articles now include complete bibliographic information.

Again, we are grateful to the reviewers for their time and effort, and we sincerely thank them for their valuable help in improving our manuscript.

Best regards, on behalf of the authors,

Shikun Chen

Attachments
Attachment
Submitted filename: review_response.pdf
Decision Letter - Haofeng Zhang, Editor

Contrastive learning enhanced retrieval-augmented few-shot framework for multi-label patent classification

PONE-D-25-47197R1

Dear Dr. Chen,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Haofeng Zhang

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: This revised work is significantly improved and demonstrates substantial refinement. I recommend acceptance of the work in the current form.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Formally Accepted
Acceptance Letter - Haofeng Zhang, Editor

PONE-D-25-47197R1

PLOS ONE

Dear Dr. Chen,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Haofeng Zhang

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.