Peer Review History
Original Submission: April 7, 2021
PONE-D-21-11452

Introducing the EMPIRE Index: A novel, value-based metric framework to measure the impact of medical publications

PLOS ONE

Dear Dr. Rees,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewers have raised a range of questions and concerns regarding the presentation of your methods and interpretation of your data that need to be carefully addressed when preparing your revisions.

Please submit your revised manuscript by Oct 10, 2021, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions, see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Jamie Males
Staff Editor
PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. Thank you for stating the following in the Competing Interests section: “I have read the journal's policy and the authors of this manuscript have the following competing interests: Avishek Pal is an employee of Novartis Pharma AG. Tomas Rees is an employee of Oxford PharmaGenesis and received research funding from Novartis Pharma AG for this study.”

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: “This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors, http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

4. Please amend your list of authors on the manuscript to ensure that each author is linked to an affiliation. Authors’ affiliations should reflect the institution where the work was done (if authors moved subsequently, you can also list the new affiliation, stating “current affiliation: …” as necessary).

5. We note that you have included the phrase “data not shown” in your manuscript. Unfortunately, this does not meet our data sharing requirements. PLOS does not permit references to inaccessible data. We require that authors provide all relevant data within the paper, Supporting Information files, or an acceptable public repository. Please add a citation to support this phrase, or upload the data that correspond with these findings to a stable repository (such as Figshare or Dryad) and provide any URLs, DOIs, or accession numbers that may be used to access these data. Or, if the data are not a core part of the research being presented in your study, we ask that you remove the phrase that refers to these data.

Additional Editor Comments (if provided):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Hi Dear Sir/Madam, The article is appropriate for publication in this format. It just needs a stronger statement of the problem, and the method must be repeatable for all researchers. Sincerely Yours,

Reviewer #2: Interesting paper in which the authors propose a composite indicator to assess medical clinical trial publications' societal, social and scientific impact. I have several concerns that should be addressed before considering the paper worth publishing.

MAJOR COMMENTS

Although the authors acknowledge the critiques of composite indicators (problems with interpretation, misleading views, etc.), they do not substantiate the need for the indicator they propose, nor its utility. Wouldn't readers be able to see what the EMPIRE Index offers by simply looking at the individual metrics included? I suggest the authors read this paper, where theoretical problems with the use of composite indicators are exposed (in this case with regard to the H-Index): https://doi.org/10.1002/asi.21678

I find troubling the uncritical way in which altmetric indicators are embraced: you indicate that Twitter, Facebook, etc. relate to social impact. What do you mean by social impact? In the same vein, you indicate that policy documents refer to societal impact. What is the difference between these two types of impact? Do mentions in these venues reflect impact? On whom? I think the paper would really benefit from a theoretical framing and motivation, which would explain many of the decisions that are later made.

You discuss the value of metrics and how these are subjective. But wouldn't that invalidate your whole method? If you were to replicate the same exercise with the same or a slightly different panel of stakeholders, would the weighting still hold?

Overall, I find the paper to be methodologically robust as to the analyses the authors perform, but the authors make a series of interpretations from the data for which they provide no explanations, and they ignore much of the literature critical of the potential use of altmetrics to measure impact quantitatively. I think that by including a more reflexive and critical review of the literature, and a theoretical framework by which they can revisit and sustain many of the claims they make, the paper will be much stronger. I suggest some papers which may be useful:

- On the value of research: https://link.springer.com/content/pdf/10.1007/s11024-011-9161-7.pdf
- Critiques of the AAAS: https://doi.org/10.1007/s11192-016-1991-5
- On criteria for evaluating indicators: https://www.researchgate.net/profile/Nirmala-Svsg/post/What-indicators-applied-for-evaluating-online-catalogs-at-universities/attachment/59d63e0779197b807799ab81/AS%3A422227658186752%401477678326575/download/Criteria-MIT6.pdf

Reviewer #3: Overall, I'm intrigued by the proposal of this new metric, and think it could merit publication. It's certainly novel, and the statistical analysis seems appropriate (though I am far from the most qualified judge on this matter). The metric is thoughtful and, in theory, could give useful information about an article that we don't currently have. That said, I have some serious concerns about how this metric was both constructed and presented.

My biggest concern with this article is that it’s using (mostly) altmetrics as indicators of impact, and even of specific types of impact (I count four: predictive scholarly impact, along with social, scholarly, and societal).
The correlation between altmetrics as a whole and their value toward quantifying impact has never been clearly laid out, and while there is some evidence to justify some of how it is being used (such as a high degree of correlation between Mendeley downloads and citations for STEM fields, justifying Mendeley downloads as a proxy for scholarly impact), there’s a lot that is still up for debate (what tweets signify, for example; more detail on that below). Altmetrics are commonly discussed as a complement to existing bibliometrics, because they serve to bolster and support existing impact claims rather than introduce their own; it’s why the words “attention” and “engagement” get used by Altmetric, among other stakeholders, when describing the value of altmetrics. They’re also commonly discussed for their qualitative value: being able to demonstrate moments of impact (say, research being tweeted by a relevant non-profit or titan of the field) rather than measure and contextualize their quantity (beyond the very high-level contextualization of an AAS compared to similar articles).

Becker Model: I’d like to see the Becker Model (https://becker.wustl.edu/impact-assessment/files/becker_model-reference.pdf) mentioned, since it is the major impact framework for biomedicine that goes beyond scholarly impact / citation-based evaluation. There are some areas of overlap with EMPIRE, but it’s clear that it was not consulted or used in any significant way. Any reason why?

Concerns about societal impact: I suggest that the measures included for this metric are too low to quantify, and correlation with benchmark articles could be due to attention related to those articles making them more attractive to policymakers, guideline producers, etc. Not everything that can be measured should be quantified, and this grouping makes an excellent case for qualitative description rather than quantification, since a single citation can make a big impact on the score. I know from personal experience that the “policy” category of Altmetric is something of a weak area, with links to documents that have no right to be called policy papers, so I’m wary of its use, and the lack of correlation of the societal measure with ANY other measure gives me pause. I’m not as familiar with guidelines, though I suspect that metric is a bit more straightforward and appropriate for the biomedical field.

Transparency: I’d like to see more transparency regarding how the weighting was established. Was there good agreement between the stakeholders, especially among different stakeholder groups? EMPIRE hinges on proper weighting. There is some good discussion around weighting, but this group needs more description: how many members there were, how the weighting process went and how many iterations there were, and demographics such as age and gender to capture an appropriately diverse set of opinions. This is one of the first times we’re seeing altmetrics directly used as an IMPACT indicator, so I’d like to hear a LOT more about this discussion.

Predictors: I don’t take much issue with the predictive metrics, though just to note, Figure 2 incorrectly implies, through the arrow pointing to the total value, that the two predictors form the “total value” score. I wonder if maybe the three composite scores should be on the left and the predictors on the right, or some other way to show that the predictor scores are separate. Anyway, my one significant issue is in using CiteScore while also claiming that EMPIRE moves away from the problematic JIF. CiteScore is virtually identical to JIF, just with a different set of indexed journals (Scopus rather than Web of Science), but all of the existing biases/power dynamics, time restrictions, lack of complexity, and other JIF-related complaints still exist. I think EMPIRE should be entirely distinct from citation-based evaluations. As is, its inclusion just serves to reinforce journal-level metrics/classification rather than provide a distinct metric from them; with CiteScore, it becomes less about the attention an individual article is receiving and more about the prestige, notoriety, and reach of a publication. I don’t like that the two are just casually mixed together, and think it’s a stronger metric, and a more accurate gauge of early attention at the article level, without it.

I’d like to hear more about the intended use of this metric, specifically from Novartis Pharma AG and Oxford PharmaGenesis, as well as the intended audience for this metric. I greatly appreciate the weaknesses section of this paper, including the mention of the proprietary subscriptions necessary for calculation, but as with any new metric, the potential for misuse is awfully high. I think adding intended or suggested use cases and/or limitations would help with this; specific article types are mentioned, for example, but it should be clearly stated that this isn’t applicable outside of the biomedical field. Are EMPIRE scores meant to be directly compared to establish impactful articles along multiple impact dimensions? Are they used to justify the impact of a single researcher’s or research facility’s research? Further, it should also be stated more clearly that this metric is just an attempt at quantifying a much more complex concept, and should not be used as a sole metric for research evaluation.

As mentioned above, some of the conjectures about what specific altmetrics like tweets signify are overly general and are not well agreed upon (some tweets may aim to influence a more general public, but academic Twitter is very much a thing, so you could also consider tweets as measures of scholarly or practitioner impact, depending on the context; in short, we’re still not sure what tweets mean, which is why altmetrics don’t usually get directly associated with impact metrics at all). Going back to the Becker Model, there’s SO much incorporated into its model that can’t be quantified, and some of the indicators in EMPIRE aren’t included there. In short, a great deal of uncertainty remains with EMPIRE, mainly due to assumptions about what individual altmetrics signify, and many aspects of it are highly subjective. If Novartis Pharma AG and Oxford PharmaGenesis want to use it for internal evaluation purposes, that’s their prerogative, but I have deep concerns about this, and feel that too many assumptions are being made about its validity to warrant general use without a lot more explanation about its intended purpose and large limitations.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Introducing the EMPIRE Index: A novel, value-based metric framework to measure the impact of medical publications

PONE-D-21-11452R1

Dear Dr. Tomas James Rees,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Leila Nemati Anaraki
Guest Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6.
Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Hi Dear Sir/Madam, The article is now appropriate for publication in this journal. All the review recommendations have been addressed. Sincerely Yours,

Reviewer #2: (No Response)

Reviewer #3: Thanks to the authors for their thorough response. I feel much better about the paper with the added information and context. The use cases and recommendations for use really help contextualize the metric, to (hopefully) prevent misuse and abuse, since it is highly tailored to a specific audience and purpose, and the added details about the panel help to better understand the viewpoints of the stakeholders. I still think the metric has too much subjectivity for broader implementation, but feel that the manuscript adequately addresses this concern, and I appreciate the care that was taken to contextualize the decisions that were made.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
Formally Accepted
PONE-D-21-11452R1

Introducing the EMPIRE Index: A novel, value-based metric framework to measure the impact of medical publications

Dear Dr. Rees:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Leila Nemati Anaraki
Guest Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.