Peer Review History
Original Submission: May 19, 2020
Transfer Alert
This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.
PONE-D-20-14688
Scientific Quality of COVID-19 and SARS CoV-2 Publications in the Highest Impact Medical Journals during the Early Phase of the Pandemic: A Review and Case-Control
PLOS ONE

Dear Dr. Berger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Could you please pay attention to the following comments made by the reviewers:
Please submit your revised manuscript by Sep 13 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Bart Ferket
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please provide additional information concerning the qualitative analyses performed. For example, if any rubric, theoretical framework or protocol was followed, please describe this in sufficient detail for replication.

3. Please ensure that "systematic review" is incorporated into the title per PLOS ONE submission guidelines.

4. Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.

5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. 
In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide. 6. Thank you for stating the following in the Competing Interests section: "Marko Zdravkovic, Bogdan Zdravkovic and Joana Berger-Estilita have declared that no competing interests exist. David Berger has read the journal's policy and the authors of this manuscript have the following competing interests: The Department of Intensive Care Medicine at Inselspital has, or has had in the past, research contracts with Abionic SA, AVA AG, CSEM SA, Cube Dx GmbH, Cyto Sorbents Europe GmbH, Edwards Lifesciences LLC, GE Healthcare, ImaCor Inc., MedImmune LLC, Orion Corporation, Phagenesis Ltd. and research & development/consulting contracts with Edwards Lifesciences LLC, Nestec SA, Wyss Zurich. The money was paid into a departmental fund; Dr Berger received no personal financial gain. 
The Department of Intensive Care Medicine has received unrestricted educational grants from the following organizations for organizing a quarterly postgraduate educational symposium, the Berner Forum for Intensive Care (until 2015): Abbott AG, Anandic Medical Systems, Astellas, AstraZeneca, Bard Medica SA, Baxter, B | Braun, CSL Behring, Covidien, Fresenius Kabi, GSK, Lilly, Maquet, MSD, Novartis, Nycomed, Orion Pharma, Pfizer, Pierre Fabre Pharma AG (formerly known as RobaPharm). The Department of Intensive Care Medicine has received unrestricted educational grants from the following organizations for organizing bi-annual postgraduate courses in the fields of critical care ultrasound, management of ECMO and mechanical ventilation: Abbott AG, Anandic Medical Systems, Bard Medica SA., Bracco, Dräger Schweiz AG, Edwards Lifesciences AG, Fresenius Kabi (Schweiz) AG, Getinge Group Maquet AG, Hamilton Medical AG, Pierre Fabre Pharma AG (formerly known as RobaPharm), PanGas AG Healthcare, Pfizer AG, Orion Pharma, Teleflex Medical GmbH.".
Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. * Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly ********** 2. 
Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Many thanks for the opportunity to review this paper. Overall, I think this is a good paper and solid analysis that supports the discussion and conclusion provided. I do, however, identify some aspects of the methods and conclusions that require attention or reconsideration. Was the impact factor >50 prespecified or did that just happen to fit the 3 journals the authors wanted to look at? 
I can’t imagine that the authors didn’t know exactly what journals would be included by setting the cutoff that high. Since it doesn’t appear this analysis was prespecified, just be upfront about the journals you wanted to look at and why. Page 5 Lines 102-106: This is not presented clearly. For instance, the authors say they exclude correspondence but then (rightfully, as it may include original results) include some correspondence in their final sample. Please add a bit more detail about what was included and excluded and why. The Oxford Quality Rating Scheme is a valid choice for the purposes described, but I am wary of one aspect of its use: the inclusion of editorials, which I assume are largely categorised as the lowest level of evidence (“expert opinion”). I fear this might be skewing the findings quite a bit, and I’m not sure that top journals publishing editorials on the all-consuming topic of the moment, in its early stages when original research is going to be sparse, is indicative of compromising their standards of evidence (the authors allude to this in their discussion). I would be interested in a sensitivity analysis that excluded these, to see how it might impact the robustness of the findings. The authors exclude JAMA Research Letters from the “Original Article” count, but I think this is a mistake. These are usually original research simply presented in a shorter format, for manuscripts that don’t require a full article’s worth of detail. However, the level of detail they usually describe should be enough to perform the assessments. If it isn’t, that itself might be an interesting finding on the format (although I don’t think that is the case). Medical news and perspectives from JAMA are included in the quality assessments. This is odd to me since, as news pieces, they do not really fit as something appropriate to be examined by a “levels of evidence” tool. News is typically not held to the same standard as original research. 
The QUALSYST checklist appears to be an acceptable and general enough tool for the purposes described. Page 6 Lines 140-142: Small detail, but were disagreements resolved through consensus with the full group or between each 2-author pair? Why, for the “Qualitative analysis of COVID-19 original articles”, did the authors rely on a personal subjective interpretation of article quality rather than existing methods, like say the Cochrane risk of bias tools (even if modified or slimmed down), to assess article quality? These cover many of the same areas examined and have been created for a wide array of research types. The citation frequency analysis is the part of the study that I feel the least confident drawing any meaningful conclusions from as a reader. COVID articles are extremely prevalent and concentrated in a single area of great interest, while the rest of the corpus is likely spread out over the entire rest of the field of biomedical sciences. With a preponderance of research being published in these journals dealing with COVID specifically, a high citation rate feels natural rather than indicative of any larger issue. Similarly, more than usual, I think the speed at which the academic community is presenting hypotheses and exploratory research on an entirely novel disease, and focusing intense public scrutiny on notable findings, may be leading to a situation in which the context around a citation is crucial to understanding this research question. I could reasonably imagine that a given study or article is far more likely to be cited alongside criticism, or in some negative light, than usual in the context of COVID research. This might be something the authors could explore given the relatively small number of citations (52) included in their analysis. Please make sure you discuss all your reported outcomes in the methodology section. For instance, you report the number of authors without ever mentioning this was being examined in the methods section. 
Would Hedges’ g be a more accurate estimator of effect size in this case than Cohen’s d, given the unequal sample sizes? Though since the “sample” is complete for articles from these journals for this time period, perhaps Cohen’s d is appropriate? I’m not entirely sure one way or the other, but it might be worth the authors justifying their use of one vs the other so it is clear to the reader. The authors should report the actual mean total and summary percentage scores (and resulting error bars) and not just refer to Figure 2, from which it is difficult to tell the exact numbers. It is good that the authors are willing to make their data available; however, I don’t see any reason the underlying data for this study couldn’t be proactively made available on an appropriate repository like OSF or Figshare (or any other appropriate public repository) rather than from the author. I would think this would be particularly important for this research, as others may want to pick up and expand on these methods as the pandemic matures. Requiring interested parties to contact the author is simply an additional barrier to the benefits of data sharing and availability. I would ask the authors to consider taking this proactive step. The authors recognize what I believe to be the biggest issue with this paper, which is timeliness. Obviously there is not much to be done to combat this other than a potential update of the results before submission, but this would be a lot to ask. However, I do agree this is a major limitation of these findings in terms of overall interest to the community. That said, I think even more value can come from sharing the framework for the methods to potentially be expanded and applied at other points in the epidemic. I think there is some interesting potential for this that the authors would do well to acknowledge. 
The authors appropriately acknowledge the inherent limitations of choosing any single assessment tool for paper quality, and I think their choice to use QUALSYST is fine. However, the recent Surgisphere scandal brings this limitation to further light. I’m fairly certain this methodology would have assessed both the NEJM and Lancet Surgisphere papers that were retracted as high quality research, despite their failing on some general quality measures. Things like the availability of data and materials are metrics that aren’t included in the scales used that may also be important indicators of quality. To be clear, I do not think it would be reasonable for the authors to have scrutinized the papers included to a level that they would have noticed the irregularities that tipped off the community to the issues with the Surgisphere Lancet paper. That level of detailed scrutiny was not within the scope of this paper. However, there are broad indicators largely related to best practice in Open Science (and I’m sure in some other areas as well) that impact their ability to assess quality. Just an interesting case study to consider in grounding this limitation. Reviewer #2: I have provided a review of the manuscript “Scientific Quality of COVID-19 and SARS CoV-2 Publications in the Highest Impact Medical Journals during the Early Phase of the Pandemic: A Review and Case-Control”. The authors provide an interesting manuscript that sheds light on the quality of COVID-19-related publications, which could have been affected by the rapidity and high demands of this pandemic. INTRODUCTION Lines 65-68. It may be worthwhile to add the example of the retraction of the papers published in The Lancet and the New England Journal regarding the use of the COVID-19 treatments chloroquine and hydroxychloroquine: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)31180-6/fulltext. This information can strengthen the rationale of the study. 
Lines 86-87: “We hypothesized that the quality of recent publications on COVID-19 in medical journals with impact factor >50 is lower than for nonCOVID-19 articles published during the same time period.” I wonder whether considering the same time period for the non-COVID publications is adequate. COVID-19 has affected not only the publications related to it; all the other specialties have been altered and affected as well. Priorities changed during the first 6 months of 2020, and the publication process was sped up for COVID-19 publications but slowed for the non-COVID-19 areas. Would it have been more appropriate to consider the same months for the non-COVID publications but in 2019, when COVID-19 could not have affected the quality and the process? METHODS Lines 91-92. “This report follows the applicable STROBE guidelines for case-control studies and PRISMA guidelines for systematic reviews”. Could you please define the study design? Is this an observational study (i.e. case-control or even cross-sectional) or a review? Why use the PRISMA guideline if it is not properly a systematic review? This is not a systematic review but a case-control study, so observational. Lines 102-104. “The resulting publications were stratified into COVID-19–related and nonCOVID-19–related. We matched the nonCOVID-19 publications with COVID-19 publications according to article types within each journal…”. As already discussed above, wouldn’t a historical control, i.e. non-COVID-19 publications from March 12 to April 12, 2019, be more appropriate and unbiased? The publication process for non-COVID-19 research was altered due to COVID-19 priorities. For example, the peer review process was slower for non-COVID-19 than for COVID-19 publications, and attention was directed more to COVID-19 publications than to non-COVID ones. Table 1. Table 1 should be included only in the Results section, not in the Methods, since it reports characteristics such as the level of evidence of the included studies. Lines 126-127. 
“Quantitative appraisal” and “Qualitative analysis”. Reading these two sections, I cannot find a difference in terms of quantitative or qualitative appraisal. The checklist the authors used to evaluate the methodological quality expresses a qualitative approach, not a quantitative one. Quantitative appraisal or quantitative synthesis usually refers to a meta-analysis or any intended statistical methods for combining results across studies (e.g. meta-analysis, subgroup analysis, meta-regression, sensitivity analysis), including methods for assessing heterogeneity. In this case, it may be more informative to write about “qualitative appraisal of quantitative research” or simply report a single paragraph with the qualitative appraisal. Then, what is reported in lines 144-147 (i.e. funding or missing data) can be another way to inform qualitatively about the included research. Line 174: “Quantitative appraisal of the quality of the original articles is..” I would change this term to “qualitative assessment of original articles”. Quality is implicitly considered in “qualitative assessment”. Line 232: “favoring COVID-19 original research papers”. I suppose that this might be obvious considering the period of high demand for COVID-19 answers in the international scientific community. A comparison with non-COVID-19 research in a period not affected by COVID-19 could have been more appropriate for detecting the real difference in the number of citations. Lines 239-240: “Most of these studies had limitations in terms of missing data or under-reporting. The randomized trial was not blinded”. This sentence confused me. What did the authors consider as quantitative and what as qualitative assessment? Elements considered in the “qualitative assessment” are related to the methodological quality of the study designs. 
For example, blinding in an RCT is an item of the Cochrane Risk of Bias tool, whose aim is to assess the internal validity of a randomized controlled trial, i.e. the risk of bias in terms of methodological quality. https://handbook-5-1.cochrane.org/chapter_8/8_assessing_risk_of_bias_in_included_studies.htm DISCUSSION The Discussion is too limited and needs to be enriched and expanded. Several issues could be of interest; here are some already raised above: - Could the assessment of the non-COVID-19 publications in a different period have changed the results? - The period is highly influenced by the changed research priorities related to COVID-19: the efforts (money, time, etc.) of the whole international scientific community have been dedicated to COVID-19. - Could the journal peer review process have affected the quality of the published articles? In these six months of the SARS-CoV-2 pandemic, even the attention of editors and reviewers was directed toward speeding up COVID-19 publications as much as possible: did the authors discuss this? Moreover, a lot of preprints on COVID-19 exist; could these influence the publication of COVID-19 research? ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Nicholas J. DeVito Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] 
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-20-14688R1 Scientific Quality of COVID-19 and SARS CoV-2 Publications in the Highest Impact Medical Journals during the Early Phase of the Pandemic: A Case Control Study PLOS ONE Dear Dr. Berger, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. I agree with reviewer 2 regarding the appropriateness of the current title. I also do agree with changing the phrasing of the qualitative appraisal sections. Please use different terminology to describe this analysis, for example narrative appraisal. Please submit your revised manuscript by Nov 28 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Bart Ferket Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: (No Response) Reviewer #2: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: N/A ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? 
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Many thanks for the opportunity to review the revised article. This remains good research, fit for publication, and I believe the authors have mostly addressed the issues discussed during the prior round of review. I offer 1 major issue and some additional brief comments and replies to the author’s responses. Assuming these are addressed, I would recommend for publication. 
Major Issues: My biggest remaining issue is the “Qualitative evaluation.” I thank the authors for their explanation of mixed-methods research, but to claim you used a “mixed-methods methodology” that includes a “Qualitative” section, you need to actually include and describe some sort of qualitative method used to evaluate the work, e.g. thematic analysis, grounded theory, content analysis, framework analysis, etc. These are all well established, detailed, systematic methods for conducting qualitative analysis. A one-sentence qualitative methods section that just states that the research was “assessed qualitatively” is not sufficient. What I was getting at is that the Cochrane Risk of Bias tool may have provided a deductive framework around which to categorise and report these evaluations (using something like content analysis). You should remove the word “Qualitative” from this section and replace it with something like “Narrative” or “Subjective” assessments so as to not give the impression that a robust, systematic qualitative method was used for this evaluation. At least, based on what is reported in the paper, that does not seem to be the case. Minor Issues: Many thanks to the authors for addressing how they selected the journals. They may be interested to see, if they have not already, that some very similar research was presented recently at the European Society of Cardiology Congress 2020: https://www.tctmd.com/news/covid-19-blamed-weaker-research-published-top-tier-journals-2020. They looked at the same three journals, though using different criteria/scales and historic controls. This is an interesting check on the findings from Zdravkovic et al. and can be brought out in the discussion. I couldn’t locate a book of abstracts for that conference (it may not be available yet), but the results are described in that article. Obviously not ideal for referencing, but the similarity is notable and probably worth mentioning in relation to this research. 
I am glad to see the sensitivity analysis removing editorials. I disagree that ~8% and ~14% of your sample is very small. That said, it is very good that your findings remained robust to removing these. The fact that your findings remained robust to the various sensitivity analyses is a strength. I don’t think it is correct to say that the JAMA Research Letter format (or other journals’ similar formats) is indicative of the editors/reviewers believing the articles “are not good enough to meet the criteria for Original article category.” You can submit directly in the Research Letter format without being referred there by the editors. Some research simply doesn’t need 2000+ words to get the point across. That said, I think the inclusion of the sensitivity analysis is sufficient. On the citation analysis, I agree that, with the new data, it would be unreasonable to check all of these citations for context. However, I do think the levels originally reported through May could reasonably have been investigated. I understand that you feel this is out of the scope of this paper and the resources of your group, and I respect the decision not to investigate further at this time. I would simply request that, if you agree that what I stated about citation context may be true, it be mentioned as relevant to the interpretation of this finding on Page 14 and potentially as a direction for future research in the Discussion. A case study in this that has personally annoyed me quite a bit is Didier Raoult going on about how many times his hydroxychloroquine paper has been cited, as a defense of the paper, with no context of how many of those were citing it in the context of pointing out the many limitations of that research. No need to use this example, but I think it proves the point. I would like to applaud the authors for making their data openly available. Many thanks again for the opportunity to review this paper. 
If these issues can be rectified to the editor's satisfaction, I am happy to recommend this paper go forward to publication.

Reviewer #2:

1. I would like to comment on the following author's answer: A-2-3: We fully agree with the reviewer. We had planned an observational study design as a case control. We were then obliged by PLOS ONE to include "systematic review" in the title, because we deal with study comparisons. We would like to follow your suggestion to go as a case-control study, which was the initial plan, unless the editorial office overrules us.

Actually, I agree that this is not a systematic review; therefore, I would avoid any reference to this study design in order to not raise misunderstandings regarding the study type. Accordingly, I did not find the term "systematic review" in the title, and I agree with the removal of any reference to the PRISMA statement.

2. My major concern is still related to the quality appraisal the authors performed. I went through the references the authors reported to support the choice of the QUALSYST tool and the mixed methodology, and some issues and confusion still remain. I agree with their comment A1-6 B, where the authors described the definition of qualitative and quantitative research, but I think something is still not clear, or there is some confusion in the description of this concept. They stated: Qualitative analysis is the analysis of qualitative data such as text data from interview transcripts. Unlike quantitative analysis, which is statistics-driven and largely independent of the researcher, qualitative analysis is heavily dependent on the researcher's analytic and integrative skills and personal knowledge of the social context where the data is collected. The emphasis in qualitative analysis is "sense making" or understanding a phenomenon, rather than predicting or explaining. This methodology is very frequently used in social sciences in combination with quantitative analysis, the so-called mixed-methods methodology.
I am aware of the mixed-methods methodology, but I don't think this is the case to adopt it: here the authors should have assessed just the methodological quality of the quantitative research study designs (i.e., experimental, observational, etc.), offering a qualitative assessment and not a quantitative one, which in the Cochrane wording refers to a quantitative synthesis/statistically driven one. Moreover, the second part the authors assessed, i.e., the "qualitative analysis," is just an assessment of reporting characteristics of the included studies; I would not assess it as a qualitative analysis. Please look at the following examples of reporting characteristics assessment/methodological quality: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0040078; https://www.sciencedirect.com/science/article/pii/S0895435616308162.

Otherwise, the study design is not clearly described: the mixed methodology might describe the study design and not the appraisal of the studies. Here is an example: https://pubmed.ncbi.nlm.nih.gov/33033951/, where both a questionnaire (quantitative analysis) and focus groups (qualitative analysis) were performed. In this case-control study, the mixed methodology did not really reflect what was done. In my opinion, this is a case-control study belonging to the quantitative research world, where the included studies were compared using the following instruments:

1. Methodological quality through the QUALSYST tool (even though high-standard tools such as the Cochrane Risk of Bias, the Newcastle-Ottawa, or the ROBINS-I could have been used too)

2. Assessment of reporting elements such as those reported from page 7, line 150: "The COVID-19 original research articles (n = 13) were assessed qualitatively to report on their major weaknesses (which type of weakness, and how is this standardized across studies? Was this not already included in the methodological quality assessment for quantitative studies in the QUALSYST tool?), potential conflicts of interest, and likely influence on further research and clinical practice (in which way are these standardized, collected, and reported in the assessment? Are regression analyses planned to investigate the influence?)."

The second point cannot be equated with the qualitative part of a mixed methodology, where usually focus groups or interviews are used to collect qualitative data. The data/info the authors wanted to comment on are included and reported in the selected studies, in the manuscript/full text, as general characteristics (i.e., conflict of interest), and so are simply collected from them and then discussed. I suggest that the authors revise the study design performed and the qualitative/quantitative wording.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nicholas J. DeVito
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
Scientific Quality of COVID-19 and SARS CoV-2 Publications in the Highest Impact Medical Journals during the Early Phase of the Pandemic: A Case Control Study

PONE-D-20-14688R2

Dear Dr. Berger,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Bart Ferket
Academic Editor
PLOS ONE

Additional Editor Comments (optional):
Reviewers' comments:
Formally Accepted
PONE-D-20-14688R2

Scientific Quality of COVID-19 and SARS CoV-2 Publications in the Highest Impact Medical Journals during the Early Phase of the Pandemic: A Case Control Study

Dear Dr. Berger:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Bart Ferket
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.