Peer Review History
Original Submission: January 29, 2021
PONE-D-21-03213
Not just words! Effects of encouragement on students’ exam grades and non-cognitive skills—lessons from a large-scale randomized field experiment
PLOS ONE

Dear Dr. Tamás,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The paper is very interesting and in general well executed. However, as the three excellent reports point out, there are many important issues that need to be clarified. The authors must consider carefully all the points suggested by the referees. Importantly, there are some interpretations of the coefficients that can be misleading, some results that seem not as clear as reported (regarding student intentions to do well), and several aspects that need to be better justified. The authors must clarify the different concerns raised by the referees and consider and discuss their suggestions, which I also believe are helpful for a better understanding of the different aspects of the paper. I know this will require a huge effort, so if you consider that you need additional time, please do not hesitate to ask for it.

Please submit your revised manuscript by Apr 29 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Alfonso Rosa Garcia Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information. 3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. 4. 
Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Partly Reviewer #3: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: I Don't Know Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: No Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? 
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Please see the attached document called "referee report".

Reviewer #2: Major comments:

1. The message sent to students includes the promise of a possible lottery game prize. Do you think this had any effect on students’ behavior towards the message?

2. The follow-up qualitative survey appears to have great attrition. Is it possible this had an effect on findings? Was there differential attrition between the treatment and control samples?

3. The description of your treatment variable in the first paragraph of the 7th page is very confusing, particularly "regardless of whether they had received it before the first or before the second exam." Do you mean that treatment is 1 for the first exam for A and 0 for B, and then 1 for the second exam for everyone? Or do you mean something different? Please clarify.

4. Why do you deploy multi-level random effects models here? I am unfamiliar with the technique for randomized control trials. You have a randomized sample. One table should be just the difference between treatment and control, since randomization ensures treatment and control samples are balanced and one group is not treated on the first test (group B). There is no need to employ complicated models with randomization on the first test.

5. You use an interaction with treatment and the second test in EQ1.
Are the treatment effects reported the effects on the first test then? Where can I find the interaction results for the treatment effect and the second test?

6. The results in column (2) of Table 4 are confusing to me. Why does the coefficient on the treated variable barely change between columns (1) and (2) given that the interaction is positive and significant? Shouldn't the treatment effect be pulled down by the positive interaction since it now represents the treatment effect for low-ability students? You say in the footnotes to the table that all models include controls for ability but then show the coefficient for ability in column (2) but not (1). Does this have something to do with multi-level random effects models? What am I missing here?

Minor comments:

1. Full paragraph 2, page 4: the last sentence needs clarification.

2. Last full paragraph, page 5: massages should be messages.

Reviewer #3: Review PONE-D-21-03213 Not just words! Effects of encouragement on students’ exam grades and non-cognitive skills – lessons from a large-scale randomized field experiment

General comments

This study shows results of a large-scale randomized field experiment targeted at students’ exam grades, as well as their test anxiety, self-confidence and intention to do well on a test, by using automated encouragement messages. The experiment was pre-registered. No average treatment effects were observed on exam grades, yet the intervention did show some effects on the non-cognitive skills (i.e. self-confidence and intention to do well). There also seems to be some heterogeneity according to the ability level of the students. The study touches upon an important aspect of learning behavior and academic performance.
Not only is the importance of well-developed non-cognitive skills (among them self-concept, the ability to deal with anxiety, and aspirations, which are targeted in this study), next to cognitive skills, for academic success and life outcomes well documented; there is also a growing number of so-called social-emotional learning programs that address the development of such skills in school. It is important to understand what works well and which incentives have no effects. The use of large-scale RCTs in the field is of great importance to this. Yet these are not easy to develop, and the authors had the courage to undertake such a large-scale field experiment. The intervention, in turn, is an easy-to-implement one in educational practice if proven effective. The study is well performed with a rigorous design and methodological approach, and the article is generally well-written. However, I think it can be sharpened a bit before publication. For example, I think the study can be somewhat more strongly embedded in the literature. There is quite some research on the effectiveness of social-emotional learning programs or on the role of confidence nudging in relation to performance that relates to this, I guess. See some further comments below. I also recommend that the authors take a close look at how the information about the sample, procedures and measures is given; I sometimes had a difficult time grasping the details and keeping the focus.

Comments per section

Introduction:

• p2, par3: The authors might want to take a look at a paper by Tenney et al. (2015), who investigate the relation between optimism and performance with a range of (small-scale) experiments, including some which try to impact people’s optimism by encouragement/discouragement messages. For example, they observe that manipulated optimism affected their participants’ self-reports of felt optimism and a behavioral measure of their persistence, which are in turn important for performance.
• p2, par4: The authors mention that there is significant heterogeneity in the effect sizes observed in previous studies. I feel that they might elaborate a bit more on these differences than is so far done in this paragraph. Now only the size and the more general framework that the studies take are mentioned, but are there also some conclusions with respect to subgroups, for example?

• p2, par5: I am a bit puzzled by this argument. I understand that it might slow down the observed effects in studies, but in the end we want effective strategies to be integrated in teacher practice, right? I think you mean that the interventions proposed in the literature require more of an overhaul of the system, and that this might not always be feasible or desirable, but that there is a lack of studies on easier-to-apply measures that could be integrated in education, independent of teacher motivation or experience. Perhaps I am misreading this, but the authors might want to explain a bit.

• p2, par6: In my view there might be an additional concern prevalent when looking at the current literature, and that is the lack of large-scale field RCTs. Many of the experimental studies in these fields are either in a lab setting or use small samples in the field, if I am not mistaken. Studies in the field are mostly non- or quasi-experimental to my knowledge. The authors can correct me if I am wrong. Perhaps this might be added as an additional concern, also showing the contribution of this current paper.

• p3, par6: There is some discussion going on in the (education) literature on the effect sizes (small or null) in field experiments. The last two paragraphs of the discussion of a recent paper by Feron & Schils (2020) touch upon this issue and you might find this interesting for your study.

Design, data and method

• p4, par7: Very minor query, but what kind of things can be bought in the SZTE gift shop?
This might give some information about the incentive and to what extent it is a real incentive/reward.

• p5, par6: Do you know how many students knew that they did not receive the encouragement message? Of those, only 17% were sad/very sad, right? Is it the 33% mentioned in the next paragraph? This gives a bit more insight into the extent to which we can agree that the likelihood of adverse treatment effects is ‘moderate’, as you state.

• p7, par1: Perhaps you can already mention here that the first and second exams are in different subjects, because when I was reading this paragraph it was unclear to me why you did not distinguish between whether they received the message for the first or for the second exam. The information about the differences between the first and second exams, as well as information on the general exam system in Hungary, follows later, but the reader might already be a bit puzzled. There are many details to digest.

• p8, par1: How much time is there between the pre- and post-test? Is it a reasonable period to expect effects?

• p8, par3: You might not know, but might the missings due to illness be related to test anxiety? If you have any information on this, that would be useful; e.g. perhaps those that scored high on test anxiety in the survey are more often absent?

• p8, footnote: ether > either.

• p9, point 3: I am a bit surprised by the locus of control; this is not mentioned in the literature. Perhaps the authors can address it in the literature, so the reader understands why it is included.

Results

• p13, par6: hypostatized > hypothesized.

Discussion

• p15, par5 and p16, par6: I was just wondering about the effect of the treatment on exam grades; these are only given as 1, 2, 3, 4, 5, right? In that case the treatment should be really strong to see an effect on grades? Or am I misinterpreting the grading system? It might be that in the previous literature the grading system used was different and allowed for ‘easier to establish’ effects?
• p15, par6, and later when you discuss this more thoroughly in the discussion: This result for the high-ability students might indeed link up to boosting confidence that increases the grades (it relates to the general effect on self-confidence you observe). They already knew they were good (or among the upper part of the ability distribution), and receiving an encouragement message basically confirms that feeling, and they even get more confident that they will succeed in the exam. Perhaps the psychological literature on (over)confidence might be useful here; you might want to check out papers by Don Moore, who wrote about this. The low-ability students might indeed have given up, and have become rather indifferent to studying and performing well on tests. This is quite important also for their future training participation, as many studies show that low-educated/low-ability workers are less prone to investing in further training during the life course. More emphasis might be put on understanding the mechanisms behind the non-effects of encouragement among low-ability students. However, having said that, I think the conclusions on the heterogeneity by ability should be modest, as the observed effects were only marginally significant. Moreover, we are talking about low-ability students in a university setting, so not overall low-ability students, i.e. those that already made it to an academic study. I think this is important to mention.

• p16, par6: Perhaps the effect on the non-cognitive skills needs more time to translate into cognitive skills; have you considered that? I would at least say it did not translate into short-run cognitive results.

I hope the authors can use my comments and suggestions to further improve the paper.

References mentioned in this review:

Tenney, E. R., Logg, J. M., & Moore, D. A. (2015). (Too) optimistic about optimism: The belief that optimism improves performance. Journal of Personality and Social Psychology.

Feron, E. & Schils, T.
(2020). A randomized field experiment using self-reflection on school behavior to help students in secondary school reach their performance potential. Frontiers in Psychology – Personality and Social Psychology. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Daniel Dench Reviewer #3: Yes: Prof. dr. Trudie Schils [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-21-03213R1
Not just words! Effects of a light-touch randomized encouragement intervention on students’ exam grades, self-efficacy, motivation, and test anxiety
PLOS ONE

Dear Dr. Keller,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The paper has significantly improved in the current version. However, there are still some important points that need to be fixed. As two referees point out, the interpretation of the parameter beta3 is ambiguous. If this problem is fixed, it may change the cost-benefit discussion, as suggested by Reviewer #1. Thus, the authors should carefully clarify the points raised by Reviewers #1 and #2.

Please submit your revised manuscript by Aug 14 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Alfonso Rosa Garcia Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: (No Response) Reviewer #2: (No Response) Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. 
Reviewer #1: Partly Reviewer #2: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: This version of the paper is much improved. I enjoyed re-reading it. I thank the authors for addressing my comments. 
It is now clearer how they selected the message content, when students received both the email and text messages, and that the length of text messages was fixed. It is also now easier for the reader to interpret the parameter, beta 3, as the mean difference in exam grades between treated and control students in the second exam minus this same difference in the first exam. The authors have also done a nice job of toning down the text throughout the paper to better reflect the findings and have nicely explained how they are thinking about the cost-benefit analysis of their intervention. While these are all great improvements, I do have three remaining concerns about the paper.

*First*, while the mathematical representation of the parameter, beta 3, is now clear, I still do not quite understand how to think about it in the context of a carryover effect. This parameter is negative in Tables 2 to 4, which means that the difference in outcomes between treated and control students is higher on the first exam than it is on the second exam. Why do the authors think this is the case? What is the hypothesized mechanism? In Table 4, the estimate of beta 3 is negative and, together with beta 1, implies that treatment was ineffective in group B. Is this because the treatment on group A had a persistent effect on exam 2, pushing up the exam 2 scores of group A (relative to what would have been the case without treatment on exam 1)? Or is it because treatment is simply more effective when applied earlier in the semester? It is still unclear to me how to think about the underlying dynamics that result in the estimated value for this parameter. I think the authors could provide more of an explanation than is currently provided between lines 599 and 605 of the manuscript.

*Second*, and related, these issues around beta 3 become especially relevant when it comes to assuaging concerns about selection into the endline questionnaire.
The authors say treatment status decreases completion of the endline survey but that Tables A6 to A8 show similar results when the sample is restricted to students who completed the endline survey twice. I respectfully disagree that the results are similar. In particular, Table 3 is one of the most important tables in the paper, highlighting the only effect on one of the secondary outcomes—namely, self-efficacy. (The authors concede that the effect on motivation is less clear because it does not replicate on exam 2.) In Table 3, the treatment effect is large and present on both exam 1 and exam 2. But in Table A7, the analogue to Table 3 but with only students who completed the endline survey twice, treatment appears to influence self-efficacy only on exam 1. That is, beta 3 is negative and significant in nearly all of the first seven columns, and the sum of beta 1 and beta 3 is considerably smaller than beta 1 alone. I cannot tell when the sum remains statistically significant, but it surely does not in all specifications, and the resulting treatment effect on exam 2 is always much smaller than the estimate of beta 1 in Table 3. I read the contrast between Tables 3 and A7 as potentially indicating that selection into the endline survey by treatment status is a problem, as Table A7 provides another instance (in addition to Table 4) of the treatment effect not replicating on exam 2.

*Third*, while the authors have done a great job explaining their cost-benefit analysis, I am not convinced that this is a program worth scaling—or at least that this paper provides evidence to that effect. To start, there was no effect on exam grades. This may be because there simply was not enough time between the encouragement and the exam for students to change behavior, as the authors point out in the discussion. The authors also note on lines 705 and 706 that grading on a curve may be the reason why exam grades did not go up.
It seems that neither explanation warrants an expansion of the program the authors tested. If enough time had not passed between encouragement and the exam, then one should consider an intervention that encourages students earlier or more consistently throughout the semester. But that is not the intervention about which this paper presents evidence. If grading on a curve prevents an effect on exams, then it is unclear how any intervention might work. I agree with the authors that exam grades are not the only (or even the most important) outcome worth considering. The secondary outcomes the authors test are also important. But, as mentioned, I am not convinced that treatment did influence any of the secondary outcomes that the authors explore. Another, related encouragement campaign might, but again, that evidence is not presented here. In sum, I see very little evidence for the benefit side of the cost-benefit analysis for the program studied in this paper.

Reviewer #2: 1. Thank you for your clarifications. I am still confused about how you are describing your models. As I understand it, you had two groups, Group A and Group B. Group A was treated on test 1, Group B was treated on test 2 (except in the small percentage of cases where this did not occur). Your basic model is specified as: Y = beta0 + beta1*Treatment + beta2*Test2 + beta3*Treatment*Test2 + epsilon. As far as I understand it, the way you have specified the Treatment variable is: 1 for group A for test 1, 0 for group A for test 2, 0 for group B for test 1, and 1 for group B for test 2. Is this correct? The way you describe a carryover effect is: "a significant carry-over effect reflects that encouraging students before their first exam affects their grades at the second exam." I think this is one plausible interpretation of beta3, but I do not think it is the only plausible explanation for possible differences.
The only accurate way to describe the effects, I think, is as the difference in the effects between the first and second exam. It could be that the effect differences come from the ordering of the treatments, as you say, but another possibility is that there is a difference in the effect due to the difficulty or nature of the first versus the second exam. Perhaps these messages have a different effect later in the semester. Perhaps the messages have more effect on one exam because of the content of one exam versus the other.

Reviewer #3: The authors have responded pretty well to the comments I raised. I am satisfied with the revisions made. I think the manuscript improved due to this revision. I actually do not have any further comments.

********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Daniel Dench Reviewer #3: Yes: Prof. dr. Trudie Schils [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
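[Editorial note] The interaction specification debated by Reviewers #1 and #2 in this round (Y = beta0 + beta1*Treatment + beta2*Test2 + beta3*Treatment*Test2 + epsilon, with the treatment indicator switched between groups A and B across the two exams) can be illustrated with a minimal simulation on synthetic data. This is only an editorial sketch: the sample size and effect sizes below are hypothetical and not taken from the manuscript. It shows that beta1 recovers the treatment effect on the first exam, beta1 + beta3 recovers the effect on the second exam, and beta3 alone is therefore just the difference in effects between the two exams, consistent with Reviewer #2's reading.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000  # hypothetical number of students per group

# Crossover coding (Reviewer #2's reading): group A is treated before
# test 1, group B before test 2, so Treatment = 1 for (A, test 1) and
# (B, test 2), and 0 otherwise.
group_a = np.repeat([1, 0], n)  # first n students in A, next n in B

blocks = []
for test2 in (0, 1):
    treated = np.where(group_a == 1, 1 - test2, test2)
    blocks.append(np.column_stack([
        np.ones(2 * n),         # intercept (beta0)
        treated,                # Treatment (beta1)
        np.full(2 * n, test2),  # Test2 (beta2)
        treated * test2,        # Treatment x Test2 (beta3)
    ]))
X = np.vstack(blocks)

# Simulate grades with a treatment effect of 0.30 on test 1 and 0.10 on
# test 2, so the true beta3 is 0.10 - 0.30 = -0.20: by construction it is
# the difference in effects between the exams, silent about its mechanism
# (carry-over, timing, or exam content).
beta_true = np.array([3.0, 0.30, 0.15, -0.20])
y = X @ beta_true + rng.normal(0.0, 0.5, size=X.shape[0])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
effect_test1 = beta_hat[1]                # recovers ~0.30
effect_test2 = beta_hat[1] + beta_hat[3]  # recovers ~0.10
```

Any data-generating process with a smaller effect on the second exam produces the same negative beta3, which is exactly the point raised in the reviews: the coefficient identifies a difference in effects between the exams, not a carry-over mechanism per se.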
Revision 2
Not just words! Effects of a light-touch randomized encouragement intervention on students’ exam grades, self-efficacy, motivation, and test anxiety
PONE-D-21-03213R2

Dear Dr. Keller,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Alfonso Rosa Garcia
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians, and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for your detailed edits and replies. I am glad to see that the previously incorrect results in the appendix tables were identified and that the authors were able to correct these mistakes. These new, correct results make the paper stronger and internally consistent, and help assuage concerns about selection into the endline questionnaire (as the authors originally intended). The edits to the conclusion are also welcome, as I believe they better reflect the paper’s findings.

However, I still believe the authors are limited in what they can say about carry-over effects. While the new draft improves upon the discussion of carry-over effects, I still have some of the same concerns:

1. The authors mentioned that a significant carry-over effect biases the estimation of the average treatment effect. This is only the case when trying to estimate the treatment effect on the second exam, because there is no pure control group in this case. A significant carry-over effect from exam 1 to exam 2, if it exists, is part of the overall treatment effect (not a source of bias if a pure control group were to exist).

2. More importantly, the pure carry-over effect is simply not identified in the authors' setting. Identification of a carry-over or persistent effect would require a pure control group (untreated on both exam 1 and exam 2) whose exam 2 score could be used as the baseline with which to compare the exam 2 score of group A.
In any event, the estimates of beta_3 do not seem to play a big role in the authors' main results or message any longer (with the exception of Table 4), so I am not as concerned about these limitations as with previous drafts. Again, I commend the authors on all the improvements they have made, and I enjoyed reading the paper.

Reviewer #2: Although I still disagree with the interpretation of the carry-over effect, I don't want to hold up publication on this account alone, as your interpretation is one plausible interpretation of the effect you find. You say: "We respectfully note that the exam dummy captures all differences concerning students’ first and second exams, including the difference in the first versus second exams’ difficulty or nature." But the interaction effect is the difference in the effect of your intervention on the outcomes from the first to the second test. My suggestion was that the effect of your intervention could also be different because of the difference in difficulty or nature of the second test. This would be unrelated to whether the effect carries over, and this effect would be included in the interaction term, not the exam dummy. If you think these things are not plausible given your much closer read of the data, then I accede to your interpretation.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Daniel Dench
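The point of disagreement between the reviewers and the authors concerns what the interaction coefficient in a specification like y = b0 + b1*treat + b2*exam2 + b3*(treat*exam2) actually captures. A toy simulation can make this concrete; this is an editorial illustration only, assuming a hypothetical two-period crossover with made-up effect sizes, not the study's data or the authors' code:

```python
# Toy crossover: one group encouraged before exam 1, the other before exam 2.
# All numbers below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # observations per (treatment, exam) cell

effect_exam1, effect_exam2 = 2.0, 1.0  # assumed treatment effects per exam
exam2_shift = -3.0                     # exam 2 is harder for everyone

# Stack the four cells: (treat, exam) in {0,1} x {0,1}
treat = np.repeat([0.0, 1.0, 0.0, 1.0], n)
exam = np.repeat([0.0, 0.0, 1.0, 1.0], n)
true_effect = np.where(exam == 0, effect_exam1, effect_exam2) * treat
y = 70 + exam2_shift * exam + true_effect + rng.normal(0, 5, 4 * n)

# OLS: y = b0 + b1*treat + b2*exam + b3*(treat*exam)
X = np.column_stack([np.ones_like(y), treat, exam, treat * exam])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"b1 (treatment effect on exam 1):   {b1:.2f}")  # ~ +2.0
print(f"b2 (exam dummy, common shift):     {b2:.2f}")  # ~ -3.0
print(f"b3 (effect_exam2 - effect_exam1):  {b3:.2f}")  # ~ -1.0
```

In this sketch the exam dummy b2 absorbs only the level shift common to all students, while b3 recovers the difference in the intervention's effect across the two exams. That difference is -1 here purely because the second exam responds less to the treatment, with no carry-over in the data-generating process, which illustrates Reviewer #2's point that b3 conflates carry-over with an exam-specific treatment effect.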
Formally Accepted
PONE-D-21-03213R2
Not just words! Effects of a light-touch randomized encouragement intervention on students’ exam grades, self-efficacy, motivation, and test anxiety

Dear Dr. Keller:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Alfonso Rosa Garcia
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.