Peer Review History
| Original Submission: September 16, 2025 |
|---|
PONE-D-25-50479

Beyond Templates and BERT: Headword-Centric Parsing for Semantic Question Answering in Non-English Financial Domains

PLOS ONE

Dear Dr. Alshargabi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewers agree that the research described in this manuscript contributes to the state of the art; nonetheless, they raise many issues that need to be addressed before it can be accepted for publication. I strongly suggest addressing all the comments in the revised manuscript, particularly the one that needs special attention regarding the research's replicability. The manuscript must describe the research in a way that allows it to be replicated; this includes the algorithm, the datasets, and the comparisons made. There are occasions when one uses a dataset that, by its nature, cannot be publicly available; if this is the case, it must be clearly stated.

Please submit your revised manuscript by Feb 06 2026 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Mario Graff-Guerrero, Ph.D.
Academic Editor
PLOS ONE

Journal requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. In your Methods section, please include additional information about your dataset and ensure that you have included a statement specifying whether the collection and analysis method complied with the terms and conditions for the source of the data.

3. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

4. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at http://journals.plos.org/plosone/s/latex.

5. We note that you have indicated that there are restrictions to data sharing for this study. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

Before we proceed with your manuscript, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., a Research Ethics Committee or Institutional Review Board, etc.). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.
You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible. We will update your Data Availability statement on your behalf to reflect the information you provide.

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Reviewers' comments:

Reviewer's Responses to Questions

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes
Reviewer #3: Partly
Reviewer #4: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No
Reviewer #4: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No
Reviewer #4: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No
Reviewer #4: Yes

**********

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The logic, structure, and analysis presented in the paper are clear and well organized. However, I believe that this article requires major revisions to address the significant methodological clarifications and improvements outlined in the attached comments before it can make a strong contribution to the field.

Reviewer #2: The paper presents INAGQA, a German-language question-answering system for financial domains using headword-centric parsing.
The work addresses an important gap in non-English QA systems and demonstrates promising results. The headword-centric shallow parsing method shows strong potential compared to deep learning models. However, the paper is lengthy and could be shortened into a more concise version. For example, the workflow section is excessively detailed, including explanations of well-established concepts such as POS-tagging that are common knowledge in the NLP community. The paper would be more accessible if streamlined to focus on the novel contributions rather than providing textbook-level explanations of standard techniques.

While the authors present a carefully designed workflow, the rationale behind key methodological choices is not well explained. In the first step, they use syntactic parsing to perform pattern matching: is this scalable, and what is the coverage of the pattern detection? Why not use any DL-model-based solutions? In addition, to generate SPARQL queries they used Spacy's standard NER; why not choose any other deep learning methods? Can you add a dedicated subsection justifying each major architectural choice with empirical evidence or theoretical reasoning?

The scalability and generalizability of the approach remain uncertain. The system relies on handwritten grammar rules for pattern matching, but the paper does not clarify: (1) how many rules were created for the 2,100 test questions, (2) what coverage can be expected for unseen question variations, and (3) how much expert effort is required to extend the system to new financial concepts or question types. This raises concerns about the practical scalability of the rule-based approach.

With recent advances in LLMs, many sub-tasks can now be automated through in-context learning. However, the paper lacks comparison with current SOTA approaches such as GPT-4, Claude, or fine-tuned multilingual LLMs. It is more of a comparison of rule-based methods vs. automated solutions.
Reviewer #3:

(I) Summary: This manuscript presents INAGQA, a semantic question-answering system for German financial domains that employs headword-centric parsing through shallow syntactic chunking combined with knowledge graph embeddings. The system addresses linguistic variability in German compound nouns and question variants, reporting F1 scores of 0.91 on 2,100 queries, outperforming baseline systems including Falcon 2.0 (0.79) and BERT-KGQA (0.83). The system integrates multiple knowledge bases (local Virtuoso, DBpedia, Wikidata) and includes a user curation mechanism allowing financial analysts to edit and extend knowledge entries. Experimental results demonstrate the practical value of the proposed system, which shows that shallow parsing with similarity ranking effectively handles German compound nouns and question disambiguation.

(II) Strengths of this submission: This work is well-motivated, with clear applications to real-world German financial tasks, and the authors correctly identify that existing QA systems struggle with these linguistic complexities. The primary strength of the manuscript lies in its presentation of a complete, production-ready system with sound engineering and thoughtful design choices. The hybrid approach successfully combines the efficiency of shallow parsing with the effectiveness of embedding-based ranking, while the intelligent multi-source knowledge integration appropriately prioritizes internal knowledge in enterprise contexts. The system demonstrates comprehensive technical integration across multiple technologies (Spacy, NLTK, ElasticSearch, BERT, SPARQL, MQTT) and includes valuable practical innovations: a user curation interface empowering domain experts, an agent-based Wikidata suggestion system addressing knowledge gaps, and suitable response times for interactive deployment. These features, combined with the Corporate Smart Insights framework, deliver substantial practical value beyond conventional QA systems.
The system demonstrates substantial performance improvements, with an F1 score of 0.91 and significant error reduction (35%) in relation linking for compound nouns, directly addressing a key challenge in German NLP. User validation through a case study with 20 financial analysts shows strong practical acceptance (18/20 rated outputs as easy to interpret), while the claimed 98% accuracy on temporal/quantitative disambiguation is particularly noteworthy. The work presents technically sound, domain-specific research that addresses the practical needs of German-speaking financial QA applications.

(III) Weaknesses of this submission: The manuscript raises several significant concerns that must be addressed before publication. The first issue is data availability, which is a fundamental requirement for scientific reproducibility. The manuscript claims "all data available within the paper," yet the 2,100-question Financial-QA evaluation dataset is not provided. Without this dataset, the reported results cannot be validated. Furthermore, the details of the Financial-QA dataset are not provided, including annotation guidelines, inter-annotator agreement scores, construction methodology, and basic statistics.

The second issue concerns the experimental results. A detailed specification of baselines is not provided. The manuscript does not state whether the baselines, Falcon 2.0, EARL, and BERT-KGQA, were retrained or fine-tuned specifically for the German language. If the authors compare their German-optimized system against baselines trained only on English data, the results will be misinterpreted. Additionally, the experimental setting is unclear for reproduction: the embedding model (multilingual BERT, German-specific BERT, or other fine-tuned models) is not specified, and the complete grammar rules are not provided. Furthermore, the system combines multiple components, but no ablation study isolates the contribution of each component. This makes it difficult to determine which innovations contribute most significantly to the performance gains.

Beyond the selected baselines, LLMs have recently become the dominant approach for QA tasks; the complete absence of LLM comparisons represents a significant gap. The authors could either conduct a proper comparison with current LLMs using appropriate prompting strategies or acknowledge this as a significant limitation and discuss scenarios where the proposed approaches might be preferred (e.g., interpretability, latency, cost, and data privacy).

Additionally, there is a discrepancy between the claims and evidence regarding cross-domain applicability. The abstract states that the work demonstrates "language-sensitive design principles applicable to healthcare/legal domains," yet Table 6 shows performance drops of 15-18% when tested in these domains. While the authors suggest ontology mismatch as a contributing factor, it would be helpful to clarify whether domain-specific grammar rules are necessary for adaptation, as this has important implications for the system's generalizability.

(IV) Questions to authors:

Q1. Dataset specification and experimental setup:
- Which specific datasets were used for the results reported in Tables 4, 5, and 6?
- What is the train/test split for Financial-QA (number of questions in training vs. test)?
- Were all systems (including baselines and INAGQA) evaluated on test sets?

Q2. Baseline configuration:
- Were Falcon 2.0, EARL, and BERT-KGQA retrained or fine-tuned for German, or were they used as originally trained?
- Which specific embedding models were used (for both baselines and INAGQA)?

Q3. Cross-domain evaluation:
- For Table 6, what is the source of the healthcare and legal domain test questions?
- Were domain-specific grammar rules developed for these domains, or were Financial-QA rules applied without modification?
(V) Writing style and organizational suggestions:

(1) Significant redundant content exists in this manuscript, for example:
- Entity recognition is mentioned multiple times:
  - Section 3.1 (page 7): "The system performs entity recognition using Spacy's standard NER function..."
  - Section 4.3 (page 9): "The system performs entity recognition using Spacy's standard NER function..."
- The SPARQL template explanation is repeated:
  - Section 4.3.1 (pages 11-12): detailed code example
  - Section 4.4 (page 12): repeated explanation

It is suggested that the authors consolidate and reorganize this content to enhance readability.

(2) Undefined terms:
- Corporate Smart Insights
- OMG API

It is suggested that the authors provide a brief explanation when these terms first appear in the manuscript.

(VI) Typographical errors:
- Table 7 uses symbols (✓ and o) without providing a legend. While the meaning is intuitive (success vs. failure), adding a caption such as "✓ indicates correct answer; o indicates failure" would improve clarity.
- Section 6 (page 18) incorrectly references "Table 2" when discussing INAGQA experimental results. The text states "Table 2 shows the results of the INAGQA system experiment comparing..." but should reference "Table 7," which actually contains the comparison of INAGQA's parsing approaches with Falcon.

Reviewer #4: This manuscript presents INAGQA, a German-language semantic question-answering (QA) system using headword-centric parsing. The system combines shallow syntactic chunking with knowledge graph embeddings to disambiguate questions and map them to SPARQL queries. The authors evaluate their approach and report an F1-score of 0.91, outperforming BERT-KGQA and Falcon 2.0. However, the paper needs to address the following issues:

1. Data

1.1 Availability: The "Financial-QA" dataset of 2,100 questions is described as "our corpus" but is not made available.
Generalizability cannot be verified, and the Virtuoso triple store (14,000 German financial announcements) is proprietary with no reproducibility path. These need to be published, submitted, or declared in order to satisfy the PLOS ONE data availability policy.

1.2 Annotation: No inter-annotator agreement (IAA) is reported for the 2,100 expert-annotated questions. Were they annotated by one person or by multiple annotators, and with what agreement threshold? This needs to be clarified to ensure reproducibility.

2. Evaluation

2.1 Baselines: In Section 5.1.2 (Performance Comparison) it is stated that BERT-KGQA was "fine-tuned on German," but no details are provided: training data, hyperparameters, training time, convergence criteria. Were Falcon 2.0 and EARL re-implemented, or were their results taken from prior papers? If taken from prior work, were they tuned for German financial data or only English/cross-lingual benchmarks? All of this needs further clarification; in particular, if the baselines were not optimized equally, the comparison is unfair.

2.2 Cross-validation: No cross-validation or hold-out test set methodology is described. It is not clear whether the F1 score comes from a single test set or is averaged over folds. Typically, multiple folds are needed to robustly benchmark the method.

Nevertheless, the manuscript is of research merit and suitable for publication in PLOS ONE, provided the above-mentioned issues are fully addressed. (These give rise to my "No" and "Partly" answers to reviewer questions 1-3.)

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review?
For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
Reviewer #4: Yes: Weihang Huang

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures

You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.
| Revision 1 |
PONE-D-25-50479R1

Beyond Templates and BERT: Headword-Centric Parsing for Semantic Question Answering in Non-English Financial Domains

PLOS ONE

Dear Dr. Alshargabi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 16 2026 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Mario Graff-Guerrero, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

1. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments: The manuscript is at the latest step before publication recommendation; as can be seen, two of the reviewers recommend publication, and one of them provided feedback that needs to be included in the final manuscript. I reviewed the manuscript, and below are some comments worth including to improve the presentation. I also include a PDF; the highlighted text indicates that it might be a grammatical mistake, that more information is needed (e.g., an undefined acronym), or that it was hard for me to understand.
Reviewers' comments:

Reviewer's Responses to Questions

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #4: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly
Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #4: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No
Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #4: Yes

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Overall, the article is well-written and addresses an important topic. The methodology is appropriate, and the results are clearly presented.

Reviewer #2: Thank you for taking the time to resolve the comments. I appreciate that the authors took the additional effort to defend their method by providing the rationale for the adopted methods and conducting more evaluation against SoTA approaches, including LLMs. However, the paper still needs to be more concise and well-structured to be accepted.
For example, move "Rationale for Architectural Choices" earlier: instead of presenting it as a response to a criticism, integrate it into the start of the Methodology. It acts as a strong "hook" that explains why you aren't simply reusing existing approaches, setting the stage for the technical details that follow. In the Related Work section, instead of long paragraphs describing every previous system, use a comparison table to show how INAGQA differs from systems like AskNow, EARL, and Falcon. You can also significantly clean up the paper by "offloading" technical data: for example, rather than detailing all 46 handcrafted grammar rules in the text, provide a high-level summary of their 92% coverage and move the full list to an Appendix. This keeps the "messy" technical specs out of the way of your main research story.

Reviewer #4: The author has successfully addressed all comments; therefore, the paper is now ready for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #4: Yes: Weihang Huang

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures

You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.
|
| Revision 2 |
|
Beyond Templates and BERT: Headword-Centric Parsing for Semantic Question Answering in Non-English Financial Domains PONE-D-25-50479R2 Dear Dr. Alshargabi, We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up to date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Mario Graff-Guerrero, Ph.D. Academic Editor PLOS One Additional Editor Comments (optional): Reviewers' comments: |
| Formally Accepted |
|
PONE-D-25-50479R2 PLOS One Dear Dr. Alshargabi, I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS One. Congratulations! Your manuscript is now being handed over to our production team. At this stage, our production department will prepare your paper for publication. This includes ensuring the following: * All references, tables, and figures are properly cited * All relevant supporting information is included in the manuscript submission, * There are no issues that prevent the paper from being properly typeset You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps. Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Mario Graff-Guerrero Academic Editor PLOS One |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.