Peer Review History
Original Submission: October 8, 2021
PONE-D-21-30612
Does perceived scarcity of COVID-19 vaccines increase vaccination willingness? Results of an experimental study with German respondents in times of a national vaccine shortage.
PLOS ONE

Dear Dr. Schnepf,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please revise the paper in accordance with the reviewers' comments. In the revised version of the paper, please better state the aspects related to sampling and validation of the model.

Please submit your revised manuscript by Feb 05 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Camelia Delcea
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Partly
Reviewer #5: Partly
Reviewer #6: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: I Don't Know
Reviewer #4: No
Reviewer #5: Yes
Reviewer #6: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: No
Reviewer #5: Yes
Reviewer #6: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: No

**********

5.
Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a well written and concise paper reporting a useful survey of the impact of Covid vaccine scarcity on the likelihood of unvaccinated individuals to get vaccinated.

The author should detail the number of participants that were excluded according to each of the exclusion criteria. This could either be in the text or in a CONSORT-type diagram.

Page 4: ‘especially Germany’ would read better as ‘Germany especially’.

Top of page 6: ‘have a fix vaccination appointment’ - ‘fix’ should be replaced with ‘fixed’.

Page 6, near the bottom: ‘Since deception took place, participants were not directed to the participation code until they indicated that they had read and understood the debriefing information.’ - it’s not clear what the ‘participation code’ is.

Top of page 7: the author should define the alpha that is given here.

Sample size calculation: please define the parameters.

Results: please define the parameters (M, SD, etc.). Has the author assessed the validity of the assumptions underlying MANOVA, e.g. normality of the response? It seems likely to me that they will not hold and that a nonparametric alternative may be necessary.

The author should provide some interpretation and discussion of the magnitude of the differences between the groups. The difference in means was less than 1 between the groups for both outcomes, and I would argue that this is actually a very small difference - perhaps even to the extent that this is a positive thing and may mean that people are relatively likely (both groups >4) to get vaccinated regardless of the availability of the vaccine.
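The normality check and rank-based fallback that Reviewer #1 suggests can be sketched as follows. The data here are simulated stand-ins for the 1-7 willingness ratings (the group sizes mirror the study, but the values are invented for illustration), not the study's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-7 Likert-style responses for the two conditions.
scarcity = rng.integers(1, 8, size=98)
surplus = rng.integers(1, 8, size=77)

# Shapiro-Wilk test of normality within each group; small p-values
# indicate the normality assumption behind (M)ANOVA is questionable.
for name, group in [("scarcity", scarcity), ("surplus", surplus)]:
    w, p_norm = stats.shapiro(group)
    print(f"{name}: W={w:.3f}, p={p_norm:.4f}")

# A rank-based alternative that does not assume normality:
# the Mann-Whitney U test compares the two conditions directly.
u, p_mw = stats.mannwhitneyu(scarcity, surplus, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p_mw:.4f}")
```

A fully multivariate rank-based analogue of MANOVA also exists, but a per-outcome Mann-Whitney test is the simplest drop-in check of whether the parametric result survives.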
I think there is a nuance between the statistical and societal significance of the results that has been missed by the author.

Figure 1 should be displayed as box plots instead of the bar plots shown. This would help with the interpretation of the results, as referred to in the previous point, since it will give an overview of the entirety of each group.

Reviewer #2:
- The author should remove "see e.g." from the citations (see e.g., Westdeutscher Rundfunk, May 2021).
- The author should include a list of abbreviations, e.g., what do MANOVA, IBM SPSS and COVID-19 mean?
- Write out the formula and assumptions of MANOVA.
- The methods section reads like lecture notes, so please rewrite it.

Reviewer #3: Dear author, the abstract is missing the conclusion part; there are still characters left to add that part. Was the validity of the survey assessed? Many of the references used here are quite old. Additionally, it is unreasonable to compare the scarcity of condiments to that of medical procedures.

Reviewer #4: This short paper experimentally tests whether information about COVID-19 vaccine scarcity increases vaccine willingness in Germany in May 2021. Participants were recruited online for a short survey that manipulated (with deception) whether respondents were informed that their county was or was not experiencing vaccine shortages, before assessing the impact of this treatment on immediate self-reported willingness to vaccinate as well as anger about restrictions on the unvaccinated. The results show about a 0.3 standard deviation increase in willingness to vaccinate, suggesting – in line with studies relating to non-COVID vaccines – that scarcity can increase willingness to get vaccinated. While the paper is concise and speaks broadly to a key topic – determinants of vaccine uptake – there are also substantial limitations in terms of what can be learned from this study.
As noted in detail below, the writeup is missing important information, there are reasons to doubt the value of the findings, and the theoretical mechanisms are not explored. The concision of the paper is good, but it goes too far: it is actually quite difficult to evaluate the study due to a lack of standard information about it. In particular:

• None of the assumptions required for valid inference are supported empirically. There are no tests validating that treatment is balanced across predetermined covariates (location and demographic variables, in this case), which is important in studies like this with a small sample. Moreover, there are no tests of differential attrition – do people exposed to different messages answer the main outcome questions at different rates? If so, is the sample that survives to the outcome questions still well balanced?

• The author notes that survey invitations were sent in order to match the age and gender population distribution. However, this does not tell us what the sample that actually takes the survey looks like – is it representative on these dimensions or any other dimensions that were not matched on? Ultimately, we know very little about the sample, making it hard to evaluate whether this online sample is representative of the German population (or other populations that we might want to extrapolate to).

• A manipulation check was conducted, which is good, but its results are not reported. Did the manipulation work, and for how many people did it work? This is important because it is otherwise hard to gauge the intensity of treatment – i.e. how many people changed their minds because of treatment. This also has implications for whether the mechanism driving the results is belief updating or increasing salience.

• The main vaccination intention outcome is comprised of 3 survey questions. However, the author does not report how the 3 items are combined – is the main outcome the average of the 3, an inverse-covariance weighted scale, a factor, or something else? In addition, it would be nice to see the results reported by item separately, since the second and especially the first outcomes seem to be the most relevant ones for understanding vaccination intentions, but may not be the outcomes that are driving treatment effects.

• It is also worth noting that the preregistration information deviates somewhat from the final paper. The sample size differs, another outcome is introduced in the paper, the scale for vaccination outcomes is not mentioned in the analysis plan, the definition of treatment failure was not specified in advance, etc. These are not major issues, but some justification for the deviations should be provided.

Despite the experimental nature of the study, I have several concerns about the design itself:

• First, the outcomes are vaccine intentions measured immediately after a manipulation that respondents were told was being done to them. This raises important concerns about social desirability biases. Did people thinking vaccines were scarce feel the need to answer that they will get one, out of shame or experimenter demand? To be fair, most studies in the literature have used survey intentions – rather than behavioral outcomes – to measure vaccination, but this is increasingly unsatisfactory given that vaccines are now widely available (and were fairly available in Germany in May 2021 too) and because this particular survey started by telling participants “that they might be deceived during participation and that the true purpose of the study could not be explained until the end.”

• Second, the exclusion criteria seem to risk post-treatment bias. While excluding “speeders” is fine, I worry about excluding people on the basis of their posterior beliefs about scarcity.
It is unorthodox in my field to condition the sample (or use covariate adjustment) on the basis of a post-treatment covariate, as this introduces biases; see e.g. Hernan and Robins 2011. I would encourage the author not to use this potentially bias-inducing exclusion restriction. If they wish to normalize by the degree to which the manipulation works, they would be better served using an instrumental variables regression. This should be easy to implement in a revision, although it’s not clear how robust the results would be.

• Third, it wasn’t clear to me whether the treatment would be interpreted by respondents as reflecting low supply of, or high demand for, vaccines. The first sentence in the treatment suggests a supply mechanism, but the second could suggest either. This has implications for interpreting the results.

Finally, I had several concerns regarding the broader contribution:

• The experiment has very little to say about what theoretical mechanisms might be driving the scarcity effect. I think a variety of mechanisms, with differing policy implications, could be at play – is scarcity altering perceptions of the value of vaccination, due to the limited number available or to social learning (i.e. high uptake suggests others think vaccines are good)? Is scarcity creating pressures to conform because people are learning that others around them are getting vaccinated? Is scarcity simply galvanizing people into action, but without altering their valuations or social incentives to vaccinate?

• Especially if we can’t say much about the mechanisms, the policy implications are not clear. Moreover, it seems impractical and unsustainable to suggest that governments should maintain low vaccine supplies at all times or that they should lie to people about supplies being low. So, while it’s useful to know that low availability affects vaccine uptake, it seems like an inherently transient factor with few policy implications or aggregate uptake implications.
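The instrumental-variables analysis Reviewer #4 proposes (random assignment as an instrument for perceived scarcity) can be sketched as a manual two-stage least squares on simulated data. All variable names, effect sizes, and the large sample size (chosen so the simulation clearly shows consistency) are illustrative assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # large n for illustration only; the study had n = 175

# Hypothetical data: z = random assignment (the instrument),
# x = perceived scarcity (endogenous), y = vaccination willingness.
z = rng.integers(0, 2, size=n)              # 0 = surplus message, 1 = scarcity message
u = rng.normal(size=n)                      # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # perception moved by assignment and confounder
y = 0.3 * x + 0.5 * u + rng.normal(size=n)  # true causal effect of perception: 0.3

# Stage 1: regress the endogenous perception on the instrument.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the fitted (exogenous) perception.
X_hat = np.column_stack([np.ones(n), x_hat])
beta_iv = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print(f"2SLS estimate of the perception effect: {beta_iv[1]:.2f}")

# Naive OLS of y on x is biased upward here because of the confounder u.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"OLS estimate (confounded): {beta_ols[1]:.2f}")
```

In practice one would use a dedicated IV routine (with proper standard errors) rather than this manual two-stage sketch, but the logic is the same: only the variation in perceived scarcity that is induced by random assignment is used to estimate the effect.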
Reviewer #5: The article describes a short online experiment to determine if perceived scarcity of COVID-19 vaccines has an impact on vaccination willingness in Germany during May 2021. The experiment was pre-registered and provides access to the final data set, the pre-registered exclusion rules, and the SPSS syntax to redo the calculations, which makes a strong case for the rigor of the analysis. I have minor comments about the manuscript, listed as follows:

1) You report Wilks’ Lambda and eta squared, but you do not interpret or comment on those results in your discussion. Perhaps it would be better to add a short interpretation in both cases.

2) Add a comparison between your final sample (N = 175) demographics and German demographics, and add to your discussion a comment on possible biases that the online experiment might have. One that is clear to me is the participation of more men than women in this study. Are there any references about possible biases induced by gender? Also, comment on the limitations and advantages of using a paid website to recruit participants. Are there any references about possible biases in this self-recruited population? If so, I think it is important to add them to the manuscript.

3) For Table 1, report exact p-values along with the eta squared for treatment. This is the main result of the analysis, hence saying p<0.05 or p<0.01 may not be enough.

4) A descriptive plot of the Vaccination willingness and Anger measures according to each of the covariates could be useful to describe the data set before entering into the analysis.

Reviewer #6: Summary: The study explores the association between the perceived scarcity of the COVID-19 vaccine and people’s willingness to get vaccinated. The topic is very timely and is of significant importance.
However, there are a number of issues about the study design that need to be clarified to ensure its validity, and about its limitations and the effect of these limitations on the results and their interpretation. To account for potential confounders and quantify the effect size accurately, I strongly recommend the Author perform a multivariate regression analysis instead of a one-way MANOVA. If this cannot be done, the Author should explain the reasons in the discussion, as it seems to be a better study design to examine the association between perceived scarcity of vaccines and willingness to get vaccinated. To be acceptable for publication, the manuscript requires substantial revision as follows.

Introduction

The Author described in detail the impact of vaccine scarcity on consumer demand and its applicability to the current context of the COVID-19 vaccine roll-out. In general, the introduction described the current situation in Germany well to provide context for the study. However, all the factors mentioned in the introduction should be accounted for in the analysis. Also, there is little justification for why the Author decided to measure “anger” as an exploratory analysis (i.e., why is it of importance and what is its relevance to the main hypothesis?).

Page 3, paragraph starting with “As for the context of vaccines, …”: This sentence does not appear to have the correct citation. Please check and include an appropriate citation.

Page 4, sentence starting with “At first, those countries…”: requires citation.

Page 4, sentence starting with “However, despite all these initial obstacles, …”: When you say “the period of the national vaccine shortage”, please be explicit and state the exact period in the text.

Page 5, paragraph starting with “In addition to the test of a scarcity…”: This is the first time the Author introduces the debate over relaxation of measures for those vaccinated.
If this is one of the outcome measures you aim to test in your study, please provide more context as to its importance before introducing a question related to it.

Methods

Fundamentally, I am concerned with the validity of the study design.

1) The Author should not describe this study as an experimental study. There is no counterfactual (pre-/post-test or treatment vs. control) group or randomization. The study appears to be a cross-sectional study.

2) The ‘treatment’ described in the study is not a true treatment. The instant exposure to crafted information on vaccine surplus/scarcity is unlikely to lead to an immediate change in participants’ perception of vaccination. Therefore, the exposure-to-treatment -> associated-outcome relationship is not established here.

3) Similarly, the study design does not rule out the possibility that the participants’ willingness to be vaccinated was established prior to their exposure to information on vaccine surplus/scarcity. There is no pre-/post- comparison to establish the temporal relationship between the exposure and the outcome. Without this, the current results do not support a causal relationship.

4) In addition, the Author used a one-way MANOVA to test the hypothesis, which does not account for potential confounding effects. This way, it cannot be ruled out that the participants’ willingness to be vaccinated is driven/mediated by factors other than the perceived scarcity of vaccines. This contradicts what was stated in the introduction and the discussion, namely that vaccine efficacy or side effects can also drive people’s willingness to be vaccinated. The Author needs to justify why a one-way MANOVA was used on the set of outcome variables. The Author should also separately examine, and present in a table, whether there are any group differences with regard to demographic and other confounding variables to bolster the findings.
5) Lastly, it is unclear why the Author did not choose to perform a multivariate regression analysis on each of the outcome variables to account for potential confounding factors. This should be explained in the discussion.

Currently, your methods section is a lengthy walk-through of the online survey. Some information you included, such as how participants were “thanked for their participation”, is not essential. Rather than describing the survey procedure in detail, please provide a well-articulated summary of what you were trying to measure, how it was measured in the survey (i.e. type of question or scale used), and how it was coded in the data. Also, whenever a technical term is introduced, spell it out the first time it is used and explain what it is used for.

Page 5, sentence starting with “Invitation mails were sent out…”: What was the sampling pool for this invitation? What is meant by “weighted by” age and gender?

Page 6, paragraph starting with “Participants then answered questions…”: What are the dependent variables and what do you mean by ‘manipulation check’? What do you mean when participants were not directed to the “participation code”? What is the “debriefing information”? Are these important facts to be mentioned in this section?

Page 7, “Dependent variables”:
1) What was your rationale for including these three questions to measure the outcomes? Are these questions from a validated instrument to measure willingness to receive vaccination? (If yes, please cite the work.)
2) Why is the scale from 1 – 7? It appears to be unconventional.
3) Did you combine the three questions to form one outcome measure scale? If so, how did you combine them (i.e., average of 3, sum of 3, etc.)?
4) What is the ‘alpha’? (If it is Cronbach’s alpha, you need to specify it explicitly in the text and explain what it means.)

Page 7, “Manipulation check”: What is the purpose of measuring this variable?
Page 7, “Sample size and data analysis”:
1) In the first sentence, please clarify what a “large effect size” and a “small effect size” are. Please provide appropriate ranges.
2) For the power calculation, please specify d, 1-beta, and alpha. Also, please spell out MANOVA when it first appears in the text.
3) In the last sentence, what do you mean when you say “A one-way MANOVA … was pre-registered”?

Page 7, “Exclusion criteria”:
1) What is a relative speed index? Please describe it.
2) Spell out SD when it first appears in the text.
3) The description of the final sample for the study should be provided in the “Results” section using a table including all the variables used in the analysis. Please provide such a table.

Results
1) Currently the results section lacks a descriptive table of the sample (a conventional table).
2) In addition, the way in which the results are reported in the text is hard to follow. A table with the results of the one-way MANOVA analysis for each group should be included. The Author lists abbreviated terms with figures without explaining what they mean. No effect size is reported for the treatment, and no justification is given for why different tests were used to report the results.

Discussion

Overall, your discussion draws general inferences about the German population based on the findings from your study. I am concerned about the validity of the discussion in general because 1) the generalizability of the findings, based on a small sample of 175 participants, is not discussed; and 2) the current study design does not allow any causal inference. The limitations should also be expanded to accommodate the concerns raised in the “Methods” section.

Page 8, sentence starting with “The results support the hypothesis…”: Based on the current study design, there is very little evidence generated from the study to support this statement.
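The parameters Reviewer #6 asks the author to specify (effect size d, significance level alpha, power 1-beta) determine the required sample size. A normal-approximation calculation, using conventional illustrative values rather than the study's actual parameters, looks like this:

```python
from scipy.stats import norm

# Illustrative parameters (not the study's actual values):
d = 0.5        # standardized effect size (Cohen's d); ~0.2 small, ~0.5 medium, ~0.8 large
alpha = 0.05   # two-sided significance level
power = 0.80   # 1 - beta

# Normal-approximation sample size per group for a two-group comparison:
# n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
print(f"approx. n per group: {n_per_group:.0f}")  # about 63 per group for these values
```

Exact t-based calculations (e.g., in G*Power, which studies of this kind commonly use) give slightly larger numbers, but reporting d, alpha, and 1-beta makes any such calculation reproducible.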
Page 9, sentence starting with “Surprisingly, this has not…”: You cannot say this without testing for it. Your current study does not explore the association between the mentioned factors and your dependent variable.

Page 9, sentence starting with “In fact, when some cities…”: This is irrelevant to the findings of this study and should rather be presented in the introduction.

Page 9, sentence starting with “On the other hand, scarcity mentalities…”: This sentence is vague and, as formed, does not add any value to the discussion. The Author should clarify it.

Page 9, sentence starting with “Such anger may also quickly…”: Is there any literature documenting such an association? If yes, the Author should cite it. This could be an interesting mechanism, and the Author should clearly state it in the introduction to justify the exploratory question (focusing on the measurement of anger).

Minor revisions:
- No line numbers are provided in the submission file. This made the review of this manuscript difficult and will make its revision similarly difficult.
- The referencing style does not comply with PLOS ONE’s submission guidelines, and the in-text citation format is inconsistent throughout (i.e., avoid adding a hyperlink to a webpage directly in the text; use citation software so that in-text citations are numbered in Vancouver style).
- Footnotes are not permitted as per PLOS ONE’s guidelines.
- Overall, the writing needs to be improved in terms of grammar and style, for example the inconsistent use of tense throughout the text and the inconsistent use of terminology for “COVID-19 vaccine” (vs. SARS-CoV-2 vaccine).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review?
For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dr SJE Barry
Reviewer #2: Yes: Yenew Alemu
Reviewer #3: No
Reviewer #4: No
Reviewer #5: No
Reviewer #6: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-21-30612R1
Does perceived scarcity of COVID-19 vaccines increase vaccination willingness? Results of an experimental study with German respondents in times of a national vaccine shortage.
PLOS ONE

Dear Dr. Schnepf,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================
ACADEMIC EDITOR: Still, reviewers are raising substantial concerns (reviewer #6 is against publication) over the revised form of the MS. Do go through the comments and amend the MS accordingly.
==============================

Please submit your revised manuscript by May 08 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io, as described in the original decision letter.

We look forward to receiving your revised manuscript.

Kind regards,
A. M. Abd El-Aty
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed
Reviewer #4: (No Response)
Reviewer #5: All comments have been addressed
Reviewer #6: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions.
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Partly
Reviewer #5: Yes
Reviewer #6: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: No
Reviewer #5: Yes
Reviewer #6: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: No

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: (No Response) Reviewer #2: - the author should add a list of abbreviations and the types of statistical software used for analysis Reviewer #3: (No Response) Reviewer #4: The author does a reasonable job in addressing many of the issues I raised, especially regarding providing additional information about the experiment. If other reviewers and the editors are ok with a paper that has little to say about mechanisms and external validity, I do not want to stand in the way of its publication. However, there are two issues with the empirical analysis that I have major concerns about, and I cannot support the publication of a paper on a topic with important public health implications in a widely read scientific journal until they are addressed. At the moment, the results are not credible enough in my view. Fortunately, there are simple fixes that I propose below. First, I remain seriously concerned that the exclusion of people for whom the manipulation did not work is introducing bias by conditioning the sample on a function of treatment. Specifically, the article notes that: “participants for whom the scarcity vs. surplus manipulation did not work, i.e., individuals in the scarcity condition who perceived the availability to be high (perceived vaccine availability > 4) and individuals in the surplus condition who perceived the availability to be low (perceived vaccine availability < 4), were excluded from the data analysis.” Although the preregistration stated “participants for whom the treatment failed”, it did not prespecify how this would be done. This is simply bad practice, and seriously risks generating biased estimates. Let me explain.
If the baseline mean is below 4 (which it seems to be), there is a risk that the experimental design is broken, because participants in the scarcity treatment are less likely to be dropped: they are more likely to have a score below 4 than participants in the surplus treatment are to have a score above 4. Indeed, this fits with the fairly substantial imbalance across the scarcity (N=98) and surplus conditions (N=77) after this exclusion criterion is applied, despite even probabilities of treatment assignment (as I understand it). I imagine that this 44% to 56% split would be statistically distinguishable from 50:50 in even this small sample. The consequence of this is that the randomization is likely to be undone because people in the treatment and control groups possess different baseline beliefs due to the exclusion criterion; this may in turn drive differences in observed outcomes. The paper now conducts balance tests over age, gender, and education, and finds no difference; while this is somewhat comforting, these variables are not necessarily correlated with the outcomes, and inspection of the tables in the supplementary materials does suggest quite large differences in education in magnitude. Together, these concerns create significant worries about the validity of the results – especially for a journal like PLOS ONE and on an important public health issue. Fortunately, there is a simple fix here: not imposing this exclusion criterion (although the others are fine). This will immediately restore the experimental properties of the analysis. If the author is concerned about observing small effects because some participants did not internalize the treatment, they should conduct an instrumental variables analysis in addition to reporting the reduced form comparison between treatment and control. Given the simplicity of the solution and the obvious risk of bias, I cannot support publication of this article until this more sensible analysis is used for the main results.
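[Editorial note: the conjectured comparison of the 98:77 post-exclusion split against an even 50:50 allocation can be checked directly with an exact binomial test, using nothing but the Python standard library. The counts below are taken from the review above; the two-sided p-value doubles the upper tail, which is valid here because the null distribution is symmetric at p = 0.5.]

```python
from math import comb

# Post-exclusion group sizes reported in the review
n_scarcity, n_surplus = 98, 77
n = n_scarcity + n_surplus  # 175 retained participants

# Exact upper-tail probability P(X >= 98) under Binomial(175, 0.5)
p_upper = sum(comb(n, i) for i in range(n_scarcity, n + 1)) / 2**n

# Two-sided p-value: double the tail (null distribution is symmetric at p = 0.5)
p_two_sided = min(1.0, 2 * p_upper)
print(p_two_sided)
```

The same calculation generalizes to any post-exclusion split by changing the two counts; note that whatever the p-value, the reviewer's deeper concern is differential exclusion across arms, which a marginal test of the split can only partially probe.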
This is not simply a robustness check; the proper experimental specification should be the main result. Second, and less important, the key dependent variables deviate from the preregistered approach in two respects: (i) there is no mention of creating a mean scale; (ii) one outcome (pursuing various channels) included in the scale in the paper is not mentioned at all in the preregistered analysis. (This is not the exploratory outcome.) This obviously creates concerns regarding researcher degrees of freedom. To be more transparent in the presentation of the results, I would like the author to report the effect of the manipulation on each of the three outcomes separately (i.e. not just as part of the scale, although it’s ok to keep that). This accords with the method they preregistered. This is important because the outcome that is not mentioned in the preregistration could be driving the results, and seems less conceptually relevant for the study. This is also a simple fix. Ultimately, I strongly encourage the author to implement these two simple changes, which would substantially enhance the credibility of the study’s results. Absent these changes, I would continue to have serious concerns about the validity of the paper’s conclusions. Reviewer #5: I have no further comments. All of my suggestions have been addressed by the author. The document now includes all of the details about the statistical analysis that were not clear before. Reviewer #6: The author partially, but not sufficiently, addressed the raised points. The manuscript reads much better now, with a clear description of the methodology and better presentation of the results. I understand the author’s choice of MANOVA given that sociodemographic characteristics were not significantly different across the two groups, which were therefore assumed to be relatively well randomized. However, some of the points raised originally on the manuscript still remain unaddressed.
At this point, I would not see the current manuscript as fit for publication in PLOS ONE for its audience. I would recommend that the author seek publication opportunities in psychology-centric journals. The author consistently rebutted the concerns about the description of the methodology by referring to psychology literature/textbooks. However, given that the main audience of PLOS ONE is interdisciplinary and the topic targets public health professionals, I still see the choice of “experimental study” and “treatment” as inappropriate. Specifically, the author did not respond to the concern raised about whether the instant exposure to the crafted information on vaccine surplus/scarcity can be considered an appropriate treatment exposure, given that an immediate (and prolonged) change in participants’ perceptions about vaccination from this exposure is unlikely. Also, the fact that the study protocol, or its simple summary, was registered on the website the author indicated provides no justification for the validity of the hypothesis or the strength of the results. In fact, if this is a true experimental study involving a treatment/intervention on human subjects, it should have been registered with a proper trial registry (https://www.who.int/clinical-trials-registry-platform/network/trial-registration). The author also claimed the superiority of the experimental study design; however, the author missed the important reason why it is considered superior to cross-sectional correlational data (i.e., temporality). The pre-study comparison of sociodemographic characteristics is helpful, but it is not sufficient to establish that the difference between the two groups in perceptions of the COVID-19 vaccine arose post-treatment/exposure. To quantify a causal impact and ensure temporality, a pre-post comparison on the same outcome variable is recommended. I still strongly recommend that the author include a table of descriptive statistics.
For any observational study (following the STROBE guideline) or randomized controlled trial (following the CONSORT guideline), the inclusion of descriptive statistics is strongly recommended/required. The author kept insisting that it is a “short report”; however, there is no submission category for short reports of this kind. If this is to be considered under the research article category, minimum reporting requirements should be met. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Sarah J.E. Barry Reviewer #2: No Reviewer #3: No Reviewer #4: No Reviewer #5: No Reviewer #6: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
|
| Revision 2 |
|
PONE-D-21-30612R2 Does perceived scarcity of COVID-19 vaccines increase vaccination willingness? Results of an experimental study with German respondents in times of a national vaccine shortage. PLOS ONE Dear Dr. Schnepf, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. ============================== ACADEMIC EDITOR: The reviewers are still raising substantial concerns about the revised form of the MS. Please go through the comments and amend the MS accordingly. ============================== Please submit your revised manuscript by Jul 02 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. We look forward to receiving your revised manuscript. Kind regards, A. M. Abd El-Aty Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: All comments have been addressed Reviewer #4: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Yes Reviewer #4: Partly ********** 3.
Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes Reviewer #4: No ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: Yes Reviewer #4: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #2: Yes Reviewer #4: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: - the author should include a list of abbreviations - the author should list the source of data, the method of data collection, the statistical model, and the statistical software used for analysis Reviewer #4: Thank you for the detailed response to my comments, and those of reviewer 6. I think the author has made useful progress in this revision.
However, the important point about introducing bias is still not properly addressed. I expand on why below. I also think that the author can still be clearer about what is and is not registered in the main paper, but this is a more minor concern. ISSUE 1: INTRODUCING BIAS BY POST-TREATMENT SAMPLE SELECTION I remain unsatisfied with the author’s response on my most important point regarding the individuals that were dropped from the sample. While I appreciate the author’s extended response and I appreciate that there are differences across disciplines, everything in the modern causal inference toolkit in econometrics and program evaluation says that restricting the sample on the basis of post-treatment outcome is not good practice and can introduce bias. Indeed, Aronow, Baron, and Pinson (2019) prove exactly this in the context of dropping subjects based on manipulation tests in a recent article in Political Analysis. Quoting from the introduction of their article: “…we show that the practice of dropping subjects based on a manipulation check should generally be avoided. We provide a number of statistical results establishing that doing so can bias estimates or undermine identification of causal effects. We also show that this practice is equivalent to inducing differential attrition across treatment arms, which may induce bias of unknown sign and magnitude. We do not claim that our statistical formulations are particularly novel—they follow from well-known results about conditioning on post-treatment variables and attrition—but, given the prevalence of this practice, we believe that the relationship between these findings and practice in experimentation is underappreciated.” In the context of this paper, my previous review provided an example of how such a bias could arise in this particular application. 
It also noted that there is a notable difference in the number of treated and control units that emerges after the sample is trimmed, which suggests the risk of bias. The author did not seriously engage with this point and seemed to miss my point about manipulation tests. Everyone, including myself, agrees that manipulation tests are useful for validating that the treatment did what it intended to do. There is no disagreement here – with either the author or their quote from Fiedler et al. (2021) or their reference to Gollwitzer and Schwabe (2021) in their response. However, as just explained, dropping observations can bias experimental comparisons when treated and control units are dropped in differential ways, which may occur when they are dropped on the basis of post-treatment outcomes like manipulation checks. I am not saying that there is necessarily bias in this particular case, but there is reason to believe there could be – as explained above and in my previous review. The author goes on to state that many other papers also drop observations based on the results of manipulation tests. (Note though that many attention tests are not post-treatment or plausibly unrelated to treatment, which makes bias unlikely.) However, I do not believe that others also following bad practice is sufficient reason to risk biasing experimental designs by dropping data on the basis of post-treatment variables – especially when the results could influence policy decisions, and particularly when better methods exist and could easily be implemented instead. Following the seminal econometric textbook by Angrist and Pischke (2008) that builds on Angrist’s Nobel prize-winning work on instrumental variables, I continue to recommend the following: 1. First conduct a reduced form analysis in the *entire* sample that compares treated and control units. The outcomes would be for the manipulation test outcome and the primary outcomes used in the paper. 
This would yield the average treatment effect. To be clear, this estimate is not confounded; sure, people might not react to treatment for many reasons, but a successful randomization ensures that this is balanced across treatment groups. 2. If treatment compliance is low, as the author notes, then the natural next step is to use an *instrumental variables* analysis in the full sample. This retains the underlying randomization as the driver of differences in outcomes across treatment groups, but scales up the coefficients to adjust for non-compliance. Under an exclusion restriction (treatment does not affect the outcome except through posterior beliefs) and imposing monotonicity (that treatment conditions cause people to update in a given direction), this also has a causal interpretation – the local average treatment effect among compliers. As far as I understand, this is the author’s target estimand; the sample restriction is designed to estimate effects only for the compliers. If the results are similar, great – this suggests that the author’s method is not introducing significant bias; then I think it’s fine to report these approaches as robustness checks. If the results differ substantially, there is serious reason to worry about the internal validity of the reported results. In that case, I would favor estimates from the specifications I just proposed, because they do not risk post-treatment bias. It's also worth noting that the reweighting approach is useful, and helps to make the sample a bit more representative. However, it doesn’t directly address the concern raised - potential imbalances across treatment conditions with respect to observable pre-treatment covariates. Still, as the author notes, there are not significant differences here, even if some differences look somewhat large in magnitude (but perhaps lack of significance might reflect the small sample). 
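[Editorial note: the reviewer's two-step recipe, a reduced-form (intention-to-treat) comparison on the full sample followed by an instrumental variables analysis that rescales for non-compliance, can be illustrated on simulated data. With a single binary instrument and binary compliance, the IV estimate reduces to the Wald ratio. This is a sketch with made-up numbers, not the study's data: `z` stands in for random assignment to the scarcity condition and `d` for whether the prompt actually shifted a respondent's perceived availability.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # large simulated sample so the estimates are stable

# Random assignment to the (hypothetical) scarcity prompt
z = rng.integers(0, 2, n)

# Only ~60% of respondents internalize the prompt (compliers);
# the rest are never-takers, so monotonicity holds by construction
complier = rng.random(n) < 0.6
d = np.where(complier, z, 0)

# Outcome depends on z only through d (exclusion restriction),
# with a true effect of 2.0 among those whose perception shifted
y = 1.0 + 2.0 * d + rng.normal(0.0, 1.0, n)

itt = y[z == 1].mean() - y[z == 0].mean()          # reduced form (ITT)
first_stage = d[z == 1].mean() - d[z == 0].mean()  # compliance difference
late = itt / first_stage                           # Wald/IV estimate of the LATE

print(f"ITT={itt:.2f}, first stage={first_stage:.2f}, LATE={late:.2f}")
```

Unlike dropping manipulation-check failures, no observation is discarded here, so the randomization is preserved; the rescaling by the first stage does the work that the exclusion criterion was meant to do.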
ISSUE 2: TRANSPARENCY ABOUT PREREGISTERED OUTCOMES With respect to deviations from the pre-analysis plan, I apologize for missing the results in appendix S3. These results do, however, show that the largest effects are on the outcome that was not preregistered, while effects are borderline or null for the two outcomes that were preregistered. Regardless of whether the preregistration plan simply forgot to include this outcome (or an “e.g.”), I think the author should clearly state in the main paper that the index results are driven by an outcome that was not preregistered. I have no problem with publishing findings for outcomes that are not preregistered, but this should be clearly communicated. Readers can make up their own minds about whether to downweight the results, but in this case I agree with the author that this outcome is relevant and should be included in the index. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: Yes: Yenew Alemu Mihret Reviewer #4: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user.
Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 3 |
|
PONE-D-21-30612R3 Does perceived scarcity of COVID-19 vaccines increase vaccination willingness? Results of an experimental study with German respondents in times of a national vaccine shortage. PLOS ONE Dear Dr. Schnepf, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. ============================== ACADEMIC EDITOR: The reviewer remains concerned that the findings are not robust. Please go through the comments and provide the data/results requested by the diligent reviewer to consolidate the findings. ============================== Please submit your revised manuscript by Sep 08 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. We look forward to receiving your revised manuscript. Kind regards, A. M. Abd El-Aty Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: All comments have been addressed Reviewer #4: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Yes Reviewer #4: Partly ********** 3.
Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes Reviewer #4: No ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: Yes Reviewer #4: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #2: Yes Reviewer #4: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: I saw the previous manuscript before the author sent the revised manuscript. All comments have been addressed; this manuscript should be published. It is appropriate for publication. Reviewer #4: I appreciate the author’s detailed response, and I regard the additional text re the second issue (the outcome that was not preregistered) as resolved.
However, I remain concerned that the estimation strategy could be generating biased results. The author is right that including inattentive respondents would drag down the ATE relative to the ATE for a fully attentive sample, assuming that inattentive respondents experience similar treatment effects as attentive respondents. The author further notes: “If people for whom the MC failed are included in the analyses, then there is no longer a significant treatment effect.” Given this, we cannot tell whether the results go away in the full sample because it includes people who don’t respond to treatment or because the exclusion criterion based on post-treatment manipulation tests introduces bias because, as Aronow et al. (2019) show, this practice can break the randomization. The natural way to address the author’s point that “there are many participants in the sample who may not have read the text, understood it, or simply do not believe the given information” is to conduct an *instrumental variables* analysis. This will address the issue by estimating the ATE among manipulation check *compliers*. Intuitively, this will rescale the reduced form coefficient to produce what I believe to be the estimand the author cares about, but without inducing the risk of bias by differentially dropping treated and control respondents. I have suggested an IV approach in all three of my reviews, but the author still has not addressed this suggested solution, which is standard practice for addressing non-compliance. (Beyond a randomized experiment, an IV analysis imposes two further identifying assumptions: monotonicity and the exclusion restriction. The former is surely satisfied, unless the treatment caused people to believe the opposite of the prompt they received - which would be a bigger problem, but seems unlikely, as the manipulation check suggests. The exclusion restriction requires that the treatment does not affect the outcome except when the manipulation check works.
This is a stronger assumption, but would be satisfied if - as the author claims - individuals only did not pass the manipulation check because they did not sufficiently engage with the treatment to be affected by it.) In the absence of an IV analysis, which addresses the problem of potentially biased exclusion criteria while adjusting for non-compliance (without dropping observations), it is hard to be confident in the author’s findings. I recognize that there are no imbalances after imposing the exclusion criteria on the three covariates (age, gender, and education) that the author measured. (While comparing the excluded and included samples is an informative diagnostic, the most relevant test here compares the included sample across treatment conditions – i.e. the balance tests that have always been in the manuscript.) However, there could still be imbalances on many other unobservables. There is reason to worry about this because notably more participants drop from the surplus condition (final sample of 77) than from the scarcity condition (final sample of 98). If there were no such differential attrition (instead resulting in an approximate 50:50 split), I would not be as concerned about the risk of undoing the random assignment. If the author is not going to conduct the instrumental variables analysis, I am happy for the editors to adjudicate on this important statistical issue that seems to affect the core findings. Following these comments, I believe two minor elements of the manuscript remain inaccurate. First is the CONSORT diagram: since the manipulation check occurred *after* treatment assignment and depended on engagement with the treatment, a separate box for these exclusions should occur in the branch for each treatment condition. This would clarify for readers the issue that the author and I have debated extensively.
Second, the claim that “Overall, n = 98 respondents were assigned to the scarcity condition and n = 77 to the surplus condition.” on p9 is misleading. These are not the totals assigned to treatment, but the totals after applying the exclusion criteria. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: Yes: Yenew Alemu Reviewer #4: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
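[Editorial note: the instrumental-variables logic described in the review above can be sketched in a few lines. The simulation below is illustrative only - it is not the study's data, and all variable names and parameter values (a 60% compliance rate, a true complier effect of 0.5) are hypothetical. Random assignment `z` instruments for manipulation-check compliance `d`, and the Wald estimator rescales the intention-to-treat effect by the compliance rate to recover the effect among compliers.]

```python
import numpy as np

def wald_late(y, d, z):
    """Wald/IV estimator: the reduced-form (intention-to-treat) effect of
    assignment z on outcome y, divided by the first-stage effect of
    assignment on compliance d."""
    itt = y[z == 1].mean() - y[z == 0].mean()          # reduced form
    first_stage = d[z == 1].mean() - d[z == 0].mean()  # compliance rate
    return itt / first_stage

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n)           # random assignment (e.g., scarcity vs. surplus)
engaged = rng.random(n) < 0.6       # hypothetical: 60% engage with the prompt
d = z * engaged                     # one-sided non-compliance: "treated" only if assigned AND engaged
y = 0.5 * d + rng.normal(0, 1, n)   # true effect among compliers = 0.5

late = wald_late(y, d, z)
# The ITT is diluted to roughly 0.5 * 0.6 = 0.3; dividing by the
# first stage recovers an estimate near the complier effect of 0.5.
```

Under monotonicity and the exclusion restriction, this ratio identifies the local ATE among compliers, which is why it does not require dropping respondents who failed the manipulation check.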
| Revision 4 |
|
Does perceived scarcity of COVID-19 vaccines increase vaccination willingness? Results of an experimental study with German respondents in times of a national vaccine shortage. PONE-D-21-30612R4 Dear Dr. Schnepf, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, A. M. Abd El-Aty Academic Editor PLOS ONE Additional Editor Comments (optional): The authors respond satisfactorily to the comments raised by the reviewer Reviewers' comments: |
| Formally Accepted |
|
PONE-D-21-30612R4 Does perceived scarcity of COVID-19 vaccines increase vaccination willingness? Results of an experimental study with German respondents in times of a national vaccine shortage. Dear Dr. Schnepf: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Prof. A. M. Abd El-Aty Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.