Peer Review History
Original Submission: November 6, 2023
PONE-D-23-34484
Workflow for detecting biomedical articles with underlying open and restricted-access datasets
PLOS ONE

Dear Dr. Bobrov,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 09 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
- If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
- Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
- If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Sergio Consoli
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service. Whilst you may use any professional scientific editing service of your choice, PLOS has partnered with both American Journal Experts (AJE) and Editage to provide discounted services to PLOS authors. Both organizations have experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines.

To take advantage of our partnership with AJE, visit the AJE website (http://learn.aje.com/plos/) for a 15% discount off AJE services. To take advantage of our partnership with Editage, visit the Editage website (www.editage.com) and enter referral code PLOSEDIT for a 15% discount off Editage services. If the PLOS editorial team finds any language issues in text that either AJE or Editage has edited, the service provider will re-edit the text for free.

Upon resubmission, please provide the following:
- The name of the colleague or the details of the professional service that edited your manuscript
- A copy of your manuscript showing your changes by either highlighting them or using track changes (uploaded as a *supporting information* file)
- A clean copy of the edited manuscript (uploaded as the new *manuscript* file)

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Additional Editor Comments:

Please pay particular attention to the comments raised by all the reviewers (especially R3). The technical contribution does not appear thorough enough. The justification and motivation, along with the objectives of the work, should be better elaborated and more clearly articulated. The methodology requires further, deeper investigation. Please explain and motivate the employed dataset and features in more detail. The reported computational evaluation and discussion are not robust enough and should be expanded. Unfortunately, the manuscript fails to meet the PLOS ONE publication criteria in its current form.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors do not mention the journals from which they extract the DAS; I would recommend that they do so, to show which journals use this statement, as well as the repositories they have used to verify the presence or absence of the data. On page 18 it says Haven et al., but the year is missing. In the reference list the year is 2022, but throughout the text it says 2023; is there a missing reference or is it a mistake?

Reviewer #2: The topic of the article is very interesting, important, and applicable in the current academic system. The methodology workflow and the results are described clearly, systematically, and in detail, and the research seems reproducible. However, since everything else is described in such detail, I suggest you add a few sentences about the protocol for the manual confirmation step. For example, you say “we checked, amongst other criteria, whether the dataset could be found” - how was this checked, using what software and which search terms? Are the raters physicians or information specialists? This could affect both the results and the time needed for rating. I tried to access the data using the DOI link provided in the Data Availability Statement, but did not gain access. Will this be open once the article is published?

I have two additional questions that don’t need to be answered in this article, but can serve as “food for thought”:
- Have you considered using the other tools that you mention in the discussion on the same set of publications? From the introductory part, it is clear that the results would not be the same, because the methodology and definitions (hence the goals) are different, but it would still be interesting to compare the results and see if your tools and protocols missed anything.
- It would be interesting to compare the presence of dataset statements, the actual availability of the datasets, and their accessibility between OA articles and paywalled articles.
Reviewer #3: The article presents a workflow to identify dataset mentions in scholarly articles that are publicly accessible. To this end, the authors employ existing (or elsewhere introduced) software packages and review the output manually. In particular, ODDPub (developed in the direct environment of the authors) is employed to identify dataset mentions in full-text PDFs. Numbat (a tool to support systematic reviews) is then used to review the positive decisions of ODDPub. The authors estimate the IRR between 2 and 3 raters on sets of 100 and 20 publications, respectively, and estimate a high reliability. The authors conclude that this manual curation process can be used to estimate the number of mentions of publicly accessible datasets, but also that the set of publications should not be larger than the sample size used in the paper (~6000).

I appreciate the idea of identifying dataset mentions automatically to provide extra funding for "Open Scientists". However, I see some problems with the work at hand that should be handled before acceptance.

- From the original work on ODDPub the authors know that the identification performance is very high. However, the authors note that there are some differences between the evaluation set and the set of articles at hand. I would like to see a more detailed evaluation including articles (at least a sub-sample) that have been sorted out by ODDPub.
- For an exact definition of what the authors consider as open data, they refer to another publication. As the article should be self-contained, I would like to see the definition in the paper (at least the most interesting parts).
- The article states "... a rater could miss a specific dataset, in which case IRR was calculated only on those datasets extracted by all raters". If I understand this correctly, this introduces a bias in the evaluation, as datasets that were not identified by a rater are skipped during evaluation. When applying the final workflow with only one rater, those datasets would be missing entirely.
- When it comes to the interpretation of the IRR, the authors argue that the focus of the study was not maximizing the IRR but generating reliable data. This should be the default case for estimating quality. Later, the authors further argue that using the category "unsure" less liberally would further increase the IRR. I don't understand this argumentation, as the final goal is to get all important information from the articles, not to optimize the IRR. The analysis of the IRR reads a bit like "we could have cheated, but we didn't".
- In the interpretation of the results, it is argued that the high reliability is supported not only by the high IRR, but also by an increased correlation with another study that employed the same protocol. In general, correlation could also be increased by additional errors, so the authors should elaborate on this and state to what extent this supports their results.
- The result of Krippendorff's alpha is interpreted using the approach of Landis & Koch (1977), who base their work on Cohen's kappa. Is it reliable to make this generalization?
- When evaluating the duration of the manual screening, the authors estimate 4.5 minutes per article. This shows that the process does not scale well. Further, the authors state that the actual time spent is much higher due to unsure cases. I am puzzled by this statement: what is the purpose of the estimation, when the authors do not believe the result? Also, while the authors state that 4 minutes seems to be low, I have the feeling that it is rather high just to check whether an identified dataset is publicly available or not. Please elaborate!
- The novel parts of the process are mainly based on manual work. It would be nice if the authors outlined to what extent the entire process could be automated to eventually apply the incentivization on a regular basis.
- Figure 1 is missing the articles that could not actually be retrieved due to closed or restricted access. Further, in order to make a statement about potential application in other settings, it might be interesting to get an idea of the proportion of open access articles in comparison with others.

Besides the issues listed above, I see a couple of technical issues that could easily be fixed (the order does not reflect importance):

- In the abstract, the authors state that an IRR of >0.8 was estimated, but there is no information about what kind of measure was used for estimation.
- The KrippAlpha function from the DescTools package was used. The function was cited, but not the package. I suggest citing the package directly.
- The authors cite all software that was used in the study and use links to archived versions. However, neither a software version nor the date of access was provided. While it would of course be possible to identify the software via Software Heritage, software citations should be accompanied by versions (or dates).
- While it might be good citation style to cite websites and software without a date, I have the feeling that adding a date would help the reader.
- The link to the dataset published with the study is wrong, as it points to some SharePoint folder.
- When estimating the amount of open data, the article states "... that 7.9% of articles had underlying openly available data. In addition, 1.05% of articles had shared restricted-access datasets, resulting in an overall 8.4% of articles for which at least one dataset was available". I would guess that the final value should be 8.95%. Please correct!
- There are some issues with the bibliography:
  - Bobrov et al. (2023) is regularly cited but not provided in the list of references.
  - Haven et al., 2023 is cited, but in the list of references it is provided as "Haven, T. L., Abunijela, S., & Hildebrand, N. (2022, September 15). Biomedical supervisors’ role modeling of responsible research practices: a cross-sectional study."
  - Serghiou et al. (2021) is missing in the references.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Remedios Melero
Reviewer #2: No
Reviewer #3: Yes: Frank Krüger

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Workflow for detecting biomedical articles with underlying open and restricted-access datasets
PONE-D-23-34484R1

Dear Dr. Bobrov,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Sergio Consoli
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: (No Response)
Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: (No Response)
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: (No Response)
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: (No Response)
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I do not have any further questions; I think the authors have responded to all of the reviewers' queries and the manuscript deserves to be published.
Reviewer #2: (No Response)
Reviewer #3: I appreciate the detailed answers and the explanations provided by the authors. I don't have any additional comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Remedios Melero
Reviewer #2: No
Reviewer #3: Yes: Frank Krüger

**********
Formally Accepted
PONE-D-23-34484R1
PLOS ONE

Dear Dr. Bobrov,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Sergio Consoli
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.