Peer Review History
Original Submission: April 30, 2024
PONE-D-24-17313
Effective Remediation Programs for Vulnerable Students to Overcome Learning Loss
PLOS ONE

Dear Dr. Jacobs,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Both reviewers recommended major revisions to the manuscript and I agree with this assessment, while also being confident that this paper certainly has the potential to be suitable for publication with appropriate revisions. As such, please do read and respond to both sets of reviewers' comments. Reviewer 1 provides various useful suggestions that I would encourage you to consider carefully and respond to in your revision, as I agree with their point that these could help to enrich the paper in places. In terms of changes required for acceptance, however, I would especially draw your attention to a number of very helpful comments and suggestions provided by Reviewer 2 in ensuring your conclusions are appropriately supported by your analyses. I'd summarise the most pressing issues to support acceptance as follows:
Please submit your revised manuscript by Oct 03 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,
Jake Anders
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Thank you for stating the following financial disclosure: "This research was supported by the Netherlands Initiative for Education Research (NRO), project number 40.5.20937.001." Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." If this statement is not correct you must amend it as needed. Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

3.
We note that you have indicated that there are restrictions to data sharing for this study. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Before we proceed with your manuscript, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., a Research Ethics Committee or Institutional Review Board, etc.). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories. You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible. We will update your Data Availability statement on your behalf to reflect the information you provide.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for this nice piece of research. I have a few recommendations to enrich the paper, as follows:

1. Research Question: Ensure that the research question is explicitly stated.
2. Sample Size: On page 6, correct the sample size to 66,500 (not 66.500).
3. Literature Review Citations: In the introductory paragraph of the literature review on page 6, support claims about existing research and numerous studies with proper citations in the main text and reference section.
4. Intervention Examples: In the second paragraph of the literature review on page 6, list specific examples of various interventions. Clarify which interventions, such as one-on-one tutoring, are most effective.
5. Areas of Target in Intervention: Explicitly describe the particular areas targeted by the intervention.
6. Citations for Previous Research: The second sentence of the last paragraph on page 8 needs citations for statements related to previous research. Include corresponding references in the reference section.
7. Thematic Review of Learning Loss and Remediation: Enrich the literature review by analyzing learning loss thematically and discussing specific themes related to remediation.
8. Method Section Variables: Enhance the method section by explicitly describing variables such as participation in the remediation program, impact of the program, and effective remediation strategies. Provide a theoretical framework for clarity.
9. Data Analysis Tools: Specify the tools used for data analysis in the method section of the main text.
10.
Policy and Pedagogical Implications: Conclude the discussion section by adding policy and pedagogical implications based on the study's findings.

Reviewer #2: This manuscript aims to (1) show which social groups participated in the national remedial education program in the Netherlands during the first year of the COVID-19 pandemic, (2) estimate the overall effect of the national remedial education program on children's test performance in math and reading, and (3) examine how program effects co-vary with the characteristics of specific remedial education measures chosen by different schools. The topic of the manuscript and the three specific research aims are of clear importance and worthy of investigation. The main conclusions of the manuscript are supported by the data analysis. That said, I would suggest some adjustments in the interpretation of the results concerning research aim (2) and in the analysis and interpretation of results concerning research aim (3). I will explain these below and hope that they will be of help to the authors in further strengthening this important piece of research. I will also offer some minor suggestions for further improvements.

-> Analyses and interpretation of research aim (2): To estimate the overall effect of the national remedial education program on children's test performance in math and reading, the manuscript compares the performance gains of students who participated in the program and those who did not, between June 2020 and June 2021. Given that students who participated in the program had a lower baseline performance and were from more disadvantaged backgrounds, estimating overall effects in this difference-in-difference design requires that the learning rate of the two comparison groups is similar prior to the program. The manuscript acknowledges the importance of this parallel trends assumption and refers to section 7 of the Supplementary Information where this assumption is tested.
While I agree that figures S7.1-4 suggest that learning progress is similar across the two comparison groups for most time periods, there is a notable divergence in the trends in reading progress between January/February 2020 and June 2020 for cohorts 1 and 2 (see F7.1-2 on p42-43 of the SI). Arguably, this time period is most relevant, as it is right before the remedial education program was implemented. This divergence in learning progress is not surprising, as it reflects early learning losses during the COVID-19 pandemic in the Netherlands (see Haelermans et al. 2022 and Engzell et al. 2021). These learning losses are known to be concentrated amongst the groups of students from more disadvantaged backgrounds, who also had a higher likelihood of participating in remedial education programs, as shown by the manuscript in the analyses for research aim (1). It would be important to take this divergence into account when interpreting the results for research aim (2). The divergence in learning progress may suggest that the estimates of the overall effects of the remedial education program are lower-bound estimates, as it is likely that children with a lower baseline performance and from lower socioeconomic backgrounds would have continued to have a(n even) lower learning rate in the absence of the remedial education program.

-> Analyses and interpretation of research aim (3): As its third main aim, the manuscript seeks to examine how program effects co-vary with the characteristics of specific remedial education measures implemented in different schools. To this end, the authors run a series of 13 models in which they successively replace the binary treatment/control dummy variable used for the analyses of research aim (2) with multi-category measures of different characteristics across which different remedial education measures vary and which are thought to be relevant for children's learning progress.
The results from these models are shown in Tables S6.1-13 in the SI. They then select three of these characteristics (remedial teaching, group size, and staff) and combine them in the coefficient plot shown in Figure 2. It is not clear to me that the chosen analytical approach is best suited to achieve the third research aim of the study and I suggest an alternative approach further below. A key problem is that the characteristics of the remedial education measures implemented in different schools are likely to co-vary and be confounded by characteristics that are not included in each of the separate models shown in Tables S6.1-13. This may explain counterintuitive results, such as the finding that remedial education measures with a focus on language have no positive effect on children's reading progress but do have an effect on their math progress (see Table S6.8 on p.35 of the SI). In short, not including different program characteristics in the same model considerably limits the extent to which the associations shown between effect sizes and program characteristics can be interpreted as causal relationships. It would be important to highlight this in the text. A related problem of the current analytical approach is that one cannot gauge the statistical significance of the differences in the effect sizes between different remedial education measures. This is because students who participated in a given remedial education measure are compared to students who did not participate in any remedial education program, rather than students who participated in a remedial education measure that differed on the relevant characteristic in question. Relatedly, the current interpretation of some of the results of Figure 2 reported on p.15-16 is not technically correct. While the interpretation concerning ‘remedial teaching’ is correct, the interpretation of the results concerning group size should be adjusted.
Currently the manuscript states that “… positive outcomes were associated with small-group programs as opposed to larger settings, with a 0.26 SD higher increase in the test scores for participants with small-group remediation programs.” This is somewhat misleading, as it suggests that there is a substantively large and statistically significant difference in the effects of remedial education measures characterised by small groups versus larger groups. Yet, as Figure 2 shows, the difference between ‘entire class’ and ‘small groups’ is substantively relatively small and (judging from the confidence intervals) not statistically significant. It would be important and relatively straightforward to adjust the interpretation to reflect the fact that the reference group here is students who did not participate in remedial education and there seems to be no statistically significant difference in the effectiveness of remedial education measures with different group sizes.

An alternative analytical approach to address research aim (3) would be to extract effect sizes of the remedial education program for each school in the sample and use these as the dependent variable with program characteristics as the focal independent variables, controlling for school characteristics and characteristics of the student body participating in the remedial education program. Effect sizes could be weighted according to the precision of each estimate to account for cross-school variation herein. An important advantage of this alternative analytical approach would be that one could include all relevant program characteristics in the same model and thereby avoid the confounding between program characteristics that likely limits the interpretability of the results in the current version of the manuscript. The proposed alternative approach would also show the substantive size and statistical significance of the association between different program characteristics and children’s learning progress.
It would also facilitate jointly showing and discussing the coefficients of all program characteristics analysed rather than focussing on only three characteristics in the main text, as is currently done. It would be helpful to explain more clearly what is meant by the concepts used to describe the different program characteristics (e.g. what is meant by ‘remedial teaching’), as well as how they are operationalised and measured. Relatedly, it would be helpful to add more theoretical discussion of why the characteristics of different remedial education measures that are examined are thought to be relevant.

Minor points:
* The confidence intervals shown in Figure 1 and Figure 2 seem to reflect 0.10 thresholds for statistical significance, rather than the usual 0.05 thresholds (see, e.g., Table S6.1). Given that this is relatively unconventional, it would be important to highlight this in the main text and in the description of the figures.
* It would be helpful to show results in Figure 2 for math and reading separately, given that effects vary by subject domain (as shown in Tables S6.1-13).
* It is not clear why the model with school-level fixed effects shown in Table S5.5 is not used as the preferred model in the main text for research aim (2). This would seem to be the most robust specification.
* The notes under Figure 1 state that “Students’ achievement is measured by the combined score of reading and mathematics on the difference between the midterm test and the end-of-year test in the school year 2019/2020.” This is not very clear and it would be helpful if it could be made clearer in the figure legend and main text what is meant here.
* It would be interesting and aid the interpretability of results to show descriptive statistics on the prominence of different characteristics of the remedial education measures chosen by different schools in the Netherlands.
For instance, how many schools employed small group measures as opposed to measures with larger groups of students?
* It would be helpful to note that the analyses in Table S5.6 suggest that there are no differences in the effectiveness of the remedial education program across age groups. This could also be tested directly, using the alternative analytical approach outlined above.
* It would be helpful to add relevant references to the discussion of the existing evidence on remedial education programs on p.6-8. For instance, what evidence does the reported effect size of 0.4 SD for one-to-one tutoring on p.7 (top) refer to?
* It would be helpful to state more clearly the three main aims/research questions in the introduction of the manuscript.
* Using predicted probabilities rather than odds ratios may aid the interpretability of Figure 1.

References:
Engzell P, Frey A, Verhagen MD. Learning loss due to school closures during the COVID-19 pandemic. Proceedings of the National Academy of Sciences. 2021;118(17).
Haelermans C, Jacobs M, van Vugt L, Aarts B, Abbink H, Smeets C, et al. A full year COVID-19 crisis with interrupted learning and two school closures: The effects on learning growth and inequality in primary education. ROA Research Memoranda No. 009.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Shashidhar Belbase
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site.
Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.
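As a minimal, self-contained illustration of the difference-in-differences logic at the centre of Reviewer #2's comments (all numbers, group means, and variable names below are simulated and hypothetical, not the study's actual data or specification):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                   # hypothetical students per group
true_effect = 0.065                       # the headline estimate under debate

# Simulated test scores (in SD units). Treated students start 0.40 SD lower;
# both groups share a common learning gain of 0.30 SD between the two waves.
ctrl_pre   = rng.normal(0.00, 1, n)
ctrl_post  = rng.normal(0.30, 1, n)
treat_pre  = rng.normal(-0.40, 1, n)
treat_post = rng.normal(-0.40 + 0.30 + true_effect, 1, n)

# Difference-in-differences: gain of the treated minus gain of the controls.
# This recovers the true effect only if, absent treatment, both groups would
# have gained equally -- the parallel-trends assumption questioned above.
did = (treat_post.mean() - treat_pre.mean()) - (ctrl_post.mean() - ctrl_pre.mean())
print(f"DiD estimate: {did:.3f}")
```

If the treated group's counterfactual gain were lower than the controls' (continued pandemic losses), this estimator would understate the effect; if it were higher (natural recovery after reopening), it would overstate it. That is exactly the two-sided bias Reviewer #2 raises in the second round.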
Revision 1
PONE-D-24-17313R1
Effective Remediation Programs for Vulnerable Students to Overcome Learning Loss
PLOS ONE

Dear Dr. Jacobs,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Thank you for submitting a revision to your article, which engaged thoroughly with the points that reviewers raised. I asked both reviewers to look at your response and revised manuscript. This does lead me to ask for some further revisions, but I would like to explain the situation as I see it further. Dealing with the two reviewers in turn: Reviewer 1 has changed their assessment to Minor Revisions. Their remaining comments are fairly general and, while you are welcome to take on their suggestions, I think you have essentially satisfied their suggestions and do not propose to send the revision that I am requesting to them. Reviewer 2 welcomes your strong engagement with their suggestions but, I think partly because of the improved clarity of the manuscript (which is, of course, welcome), has identified some additional concerns about aspects of the analyses for aims 2 and 3 and, so, retains a Major Revisions recommendation. I do share these concerns and, because our overriding concern should be to ensure the conclusions of the argument are well-supported by the analysis, also frame my request in this way.

In respect of aim 2, the reviewer flags up alternative possibilities about the potential bias of the difference-in-differences estimator used. My read is that such potential bias may cause incorrect conclusions to be drawn from your analysis. As such, I am keen that this is addressed if possible.
Specifically, I would encourage you to explore whether the matched difference-in-differences analysis suggested is possible with the data that you have to hand (checking the plausibility of this strategy by whether you are then able to observe more encouraging parallel trends within the matched sample). However, mindful that it may not be possible with the data you have, you may alternatively submit a revision that much more clearly documents the potential biases that could arise such that the reader can very clearly judge for themselves the strength of evidence provided by this aspect of the study.

In respect of aim 3, it seems to me (although, as always, you have the option to rebut this in your response) that it should be possible to address and revise this aspect of the manuscript based on the further explanations and suggestions provided by reviewer 2. I would, of course, also encourage you to take on board sundry suggestions provided by the reviewer (my summary here is not exhaustive).

I would also like to thank you for engaging on the editorial points regarding financial disclosure and data availability. It is my assessment that these now meet the relevant policies (although the data are not publicly available you clearly document access procedures to be followed in gaining access, even if these may require moving to the Netherlands!).

Thank you again for engaging wholeheartedly in this process. I hope you will agree that it is helping to strengthen the robustness of the findings reported in this paper and, hence, their credibility. Although, of course, I cannot guarantee acceptance of the manuscript at this stage, I do think it will be possible with the revisions requested.

Please submit your revised manuscript by Jan 24 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org .
When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,
Jake Anders
Academic Editor
PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: No

**********

3.
Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear Authors, Thank you for addressing all the comments in the previous review. I have a few minor comments/suggestions this time. You have three objectives in the introduction.
I would recommend aligning these three objectives in:
(a) Literature review and theoretical framework (that needs your attention),
(b) Method (that you have done),
(c) Results (that you have done),
(d) Discussion (that needs your attention),
(e) Implication (that needs your attention).
I hope this alignment will create a clear flow of objective, process, and outcomes.

Best Regards,
Reviewer

Reviewer #2: Thank you for this revised version of the manuscript. Some aspects of the manuscript have certainly improved compared to the initial version. However, in my view, some serious concerns remain concerning the second and third research aims of the paper, which I explain below.

(1) Research Aim 2

(1.1) As noted in my initial review, the divergence in learning trends between the treatment and the control group before the intervention (seen in figures S7.2-4 in the SI and discussed in the existing literature on learning losses during COVID-19 in the Netherlands), means that the current diff-in-diff research design is not suited to provide a point estimate of the true size of the effect of the remedial education programs implemented in the school year 2020-21 (Research Aim 2). But while I was initially under the impression that the DiD estimator may (only) be downwardly biased (hence my comment on the lower-bound estimate), upon reading the new version of the manuscript, it became clear that it may indeed be biased in either direction.
This becomes clear when we consider that any of the following could be true, given the divergent pre-trends and the unique context of the COVID-19 pandemic right before the intervention: (1) The DiD estimator may *underestimate* the true effect of the intervention, since the treatment group (which consists of lower achieving and socio-economically disadvantaged students) may have continued to have a lower learning rate than the control group (which consists of higher achieving and socio-economically advantaged students) in the absence of the intervention, perhaps because they continued to struggle with the consequences of the COVID-19 pandemic. This is what initially led me to propose that interpreting this as a 'lower-bound' estimate might suffice. But, (2) a second, (similarly) plausible scenario could be that the DiD estimator may *overestimate* the true effect of the intervention, since the treatment group may have recovered from the initial COVID-19-induced learning loss even in the absence of the intervention, for instance due to schools reopening in the Netherlands. In short, due to the divergent pre-trends, the current DiD estimator could be biased in either direction, and the DiD research design does not allow for a meaningful inference on the true effect of the remedial learning interventions. This leads me to the conclusion that the problem lies with the choice of comparison groups. The solution for approximating a more meaningful effect of the intervention may thus be to use a (propensity score) matching approach in order to compare students who are alike in terms of their previous attainment and socio-economic background, but who differ in whether they participated in the remedial education intervention. Given that the data at hand contain information on previous performance and socio-economic background, this may be feasible.
The question, of course, is how many low-performing students from low socio-economic backgrounds *did not* participate in any remedial learning intervention during the 2020-21 school year in the Netherlands. The feasibility of this alternative approach may be gauged from whether it allows for generating a control group that matches the pre-intervention learning trend during the 2019-20 school year of the treatment group. In other words, to correctly identify the effect of the remedial education intervention, one would need to work with a control group that incurred similar learning losses as the students in the treatment group, but did not receive remedial education. I would like to emphasise that the issue of interpreting the DiD-estimator as a point estimate despite diverging pre-trends is rather serious. Many readers are likely to take the point estimate reported in the abstract (0.065 SD) at face value and conclude that the remediation programmes implemented in the Netherlands after the COVID-19 pandemic were effective, even though the estimate may be strongly biased in either direction, as explained above. I hope that with the alternative approach suggested, it may be possible to address this issue. That said, if this issue cannot be addressed, it may not be possible to address the second research aim with the data at hand. (1.2) It would be helpful to combine the first panel from figures S7.2-4 in one figure and include it in the main text (with the revised comparison groups, as proposed above). This would also help the reader gauge the substantive size of the effects. It is not clear why Figure S7.1 is shown in the SI, given that this cohort does not form part of the analysis. For the more detailed figures in the appendix, it would be important to harmonise the scale of the y-axis across the different panels. 
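For concreteness, the matching-then-DiD idea proposed under (1.1) could be sketched roughly as follows. This is a purely illustrative sketch on synthetic data, not the authors' analysis; all variable names (`prior_score`, `ses`, `treated`, `pre`, `post`) are hypothetical.

```python
# Illustrative sketch: match treated students to untreated students on
# prior attainment and socio-economic background, then estimate a DiD
# (gain difference) on the matched sample. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n).astype(bool)
prior_score = rng.normal(0, 1, n) - 0.5 * treated   # treated score lower pre-intervention
ses = rng.normal(0, 1, n) - 0.5 * treated           # and are more disadvantaged
pre = prior_score + rng.normal(0, 0.3, n)
post = pre + 0.10 + 0.065 * treated + rng.normal(0, 0.3, n)  # true effect: 0.065

# 1. Nearest-neighbour match (with replacement) on standardised covariates.
X = np.column_stack([prior_score, ses])
X = (X - X.mean(0)) / X.std(0)
t_idx = np.flatnonzero(treated)
c_idx = np.flatnonzero(~treated)
d2 = ((X[t_idx, None, :] - X[None, c_idx, :]) ** 2).sum(-1)
matches = c_idx[d2.argmin(axis=1)]

# 2. DiD on the matched sample: difference in mean pre-to-post gains.
did = (post[t_idx] - pre[t_idx]).mean() - (post[matches] - pre[matches]).mean()
print(round(did, 3))
```

Whether such a matched control group actually reproduces the treatment group's 2019-20 pre-trend is exactly the feasibility check described above.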
(2) Research Aim 3 (2.1) There seems to be a misunderstanding concerning the alternative, school-level approach that I proposed for addressing the third research aim. In their reply to the reviewer comments, the author(s) note that they "extract[ed] average student scores at the school level for the 2020/2021 academic year when the remediation programs were implemented [...] and analys[ed] the content of the remediation programs with the extracted test scores for composite, reading and mathematics" (see reply to reviewer comments p.13). But my proposal was to extract the *effect sizes* of the remedial education program for each school in the sample (rather than the average student scores) and use these as the dependent variable with program characteristics as the focal independent variables, controlling for school characteristics and characteristics of the student body participating in the remedial education program. Effect sizes could be weighted according to the precision of each estimate to account for cross-school variation herein. If certain programme characteristics led to remediation programmes being more effective, one should see an association between these programme characteristics and the size of the effect of the remediation programme (even if the sample size is relatively small). Conducting the analysis both at the school and at the individual level may be a possible way to test the robustness of the findings in response to the third research aim. In this respect, it is somewhat concerning that the results from the school-level analysis suggest that programmes run by 'internal staff' are more effective (see table R1 in the response letter to the reviewer report on p.15), while the results from the individual-level analysis (shown in Figure 2b) suggest that programmes run by 'external staff' are more effective. Such a contradiction may be resolved by adjusting the school-level analysis as suggested. 
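The proposed school-level analysis amounts to a precision-weighted regression of per-school effect sizes on programme characteristics. A minimal sketch with synthetic data (names such as `effect_size`, `se`, and `external_staff` are hypothetical, not taken from the manuscript):

```python
# Illustrative sketch: regress per-school effect sizes of the remediation
# programme on a focal programme characteristic, weighting each school by
# the precision (inverse variance) of its effect-size estimate.
import numpy as np

rng = np.random.default_rng(1)
n_schools = 60
external_staff = rng.integers(0, 2, n_schools)   # 1 = programme run by external staff
se = rng.uniform(0.02, 0.10, n_schools)          # precision of estimates varies by school
effect_size = 0.05 + 0.03 * external_staff + rng.normal(0, se)

# Precision-weighted least squares: beta = (X'WX)^{-1} X'Wy with W = diag(1/se^2).
X = np.column_stack([np.ones(n_schools), external_staff])
w = 1.0 / se ** 2
XtW = X.T * w
beta = np.linalg.solve(XtW @ X, XtW @ effect_size)
print(beta)   # [intercept, association of the characteristic with effect size]
```

In a real application, further school and student-body characteristics would enter `X` as controls, as described above.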
(2.2) As I noted in my initial report, there are two related problems concerning the individual-level analysis presented in Tables S6.1-14. The research question/aim focusses on examining "which types of remediation programs are most effective in enhancing student achievements" (p.4). In other words, the aim is to compare the *relative* effectiveness of different types of remediation programmes. But the models shown in Tables S6.1-14 do not allow for inferences on the statistical significance of differences in the effectiveness between different programme types, since the reference category is students who did not participate in any remediation programme. The second problem is that the characteristics of the remedial education measures implemented in different schools are likely to co-vary and be confounded by characteristics that are not included in each of the separate models shown in Tables S6.1-13. For their revised individual analysis, and in order to address the issue of confounding between programme characteristics, the authors propose to "include these remediation program characteristics in our student-level difference-in-differences analysis". They also note that "our revised approach involves including all remediation program characteristics as control variables" (see reply to reviewer comments p.13). Since the relevant tables S6.1-14 in the SI only show 'yes' for the controls, it is not entirely clear what was done here, but it seems that only main effects were controlled for. This does not seem to solve the issue of confounding of the interaction between school-year and remediation type and also leaves the first problem noted above unsolved. 
Aside from the school-level analysis proposed above, another way to address the second problem (to some extent) using an individual-level analysis might be to code the different types of remediation programmes into multiple, *mutually-exclusive* categories and run a two-way interaction between school year and this comprehensive, categorical measure of intervention type (with an additional category for no participation in any remedial program) and select one intervention type (perhaps the most common one) as the base category. (2.3) It seems problematic that the manuscript only selects the statistically significant results from Tables S6.1-14 for inclusion in the main text (in Figures 2a and 2b). Readers may not appreciate the possible multiple-testing bias when only seeing two out of the fourteen models that were tested in the main text. Using either the school-level or the individual-level approach indicated above may be one way to present the results for the different program characteristics in a simple yet encompassing way. It would be helpful to show all results from this analysis in one coefficient plot. (2.4) It would be helpful to explain more clearly what is meant by the concepts used to describe the different program characteristics (e.g. what is meant by ‘remedial teaching’ or 'focus on language'), as well as how they are operationalised and measured. Relatedly, it would be helpful to add more theoretical discussion of why the characteristics of different remedial education measures that are examined are thought to be relevant. (3) Other comments: (3.1) I agree that the current way to portray predicted probabilities in Figure 1a is as hard (or even harder) to interpret as the original figure using odds ratios. Using average marginal effects may allow for portraying the association between programme participation and different factors in a more parsimonious way while also allowing for the interpretation of the substantive size of the effect. 
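To illustrate what average marginal effects deliver here: for a fitted logit of programme participation, the AME of a continuous covariate is the average derivative of the predicted probability with respect to that covariate. A minimal sketch (the coefficients in `beta` are made up for illustration, not fitted from the study's data):

```python
# Illustrative sketch: average marginal effects (AMEs) for a logit model
# of programme participation, using hypothetical coefficients.
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Design matrix: intercept plus two standardised covariates
# (e.g. prior attainment, socio-economic status -- hypothetical).
X = np.column_stack([np.ones(n), rng.normal(0, 1, n), rng.normal(0, 1, n)])
beta = np.array([-0.5, 0.8, -0.3])      # hypothetical fitted logit coefficients

p = 1 / (1 + np.exp(-X @ beta))         # predicted participation probabilities

# AME of each continuous covariate: mean of beta_j * p * (1 - p), i.e. the
# average change in participation probability per unit change in the covariate.
ame = beta[1:] * (p * (1 - p)).mean()
print(np.round(ame, 3))
```

Unlike odds ratios or raw predicted-probability panels, the resulting numbers read directly as percentage-point changes in participation probability, which supports the interpretation of substantive size asked for above.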
Relatedly, it would be helpful to interpret the substantive size of the effects of different factors shown in Figure 1a on programme participation. The text currently only seems to comment on the statistical significance and the sign of coefficients, but not on their substantive size. (3.2) Some important labels are missing in figures (e.g. y-axis labels in Figures 1a and 1b). ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean? ). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy . Reviewer #1: Yes: Shashidhar Belbase Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step. |
| Revision 2 |
|
PONE-D-24-17313R2Effective Remediation Programs for Vulnerable Students to Overcome Learning LossPLOS ONE Dear Dr. Jacobs, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. As discussed in the previous round, I have only sought a review from one of the previous reviewers so we are just ensuring that the important issues they raise have been dealt with as best as possible. Indeed, I would anticipate being able to accept the paper if you make the minor revisions noted without recourse to further review. Ultimately, I think the biggest residual issue comes down to a disagreement over whether the parallel trends assumption is supported by the evidence provided in Figure 2, with the reviewer continuing to express concern in this regard. Looking closely at the final two points of Figure 2 I must agree that there is visible evidence of widening in more than one of the cohort/subject combinations, albeit not sizeable. Divergence in trends just ahead of treatment is, of course, potentially a particular concern because it may suggest something about the selection process into treatment being associated with pupils' trajectories. I do agree, therefore, that it is important to caveat the findings with the risks of bias that may result from a lack of parallel trends. Ultimately, this is important regardless of the quality of pre-trends which are only suggestive since the (untestable) identifying assumption of difference in differences is that there would have been parallel trends between the treatment and comparison groups in the absence of treatment. 
I, therefore, ask that you do note these (small) differences in trends just ahead of treatment in your discussion of Figure 2 and the risks that these pose for potential biases. In support of this, I endorse the reviewer's suggestion to reinstate the 2020/21 data into Figure 2 to help readers judge the change in gradients involved both pre- and post-treatment. I suggest including a vertical line between the pre-test and post-test periods to avoid any ambiguity. I would also say that while I like having this information in a single reference figure, following so many different patterns is a bit challenging — I would suggest more use of colours and just two patterns for the lines, perhaps variants of the same colour across Reading and Maths. With these additions, readers will then be able to judge for themselves the quality of the evidence presented when you lay it out clearly for them with the strengths and risks. The point made regarding the programme characteristics analyses being appropriately caveated is also important in a similar spirit. I am keen to accept this paper and share the evidence that it provides, but must ensure that PLOS ONE's publication criterion that the data presented in the manuscript support the conclusions drawn is met, which I interpret to include appropriate caveating of potential risks to their validity. Please submit your revised manuscript by May 02 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
We look forward to receiving your revised manuscript. Kind regards, Jake Anders Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. 
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Partly ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. 
Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: Thank you for this revised version of the manuscript. I appreciate all of the work that the authors have done to expand on the analyses of the initial manuscript. I also appreciate that the authors engage so carefully with the feedback provided. Below I focus on an assessment of the key elements of the analysis in the current version of the manuscript: (1) the DID analysis of the overall treatment effect of the remedial learning program, and (2) the analysis of how program effects co-vary with the characteristics of specific remedial education measures chosen by different schools. The DID analyses with both the matched sample and the subsample of low-performing students are certainly informative. However, it is clear from Figures 1-3 shown in the response to the reviewer comments that the comparison groups are (a) not balanced on performance and, more importantly, (b) there continues to be a divergence in the pre-trend between the mid-year test and the end-of-year test. I recognize that the authors may feel like they have reached the end of what they can do with the data at hand and would not want to run further analyses to identify a comparison group that follows a similar pre-trend to the treatment group. If that is the case, I believe that it is important for the authors to be clearer about the limitations of their DID. Somewhat surprisingly, the current version of the manuscript still states that "the crucial assumption of the DiD analysis, the parallel time trend assumption, is checked and met" (p.18), when it is evidently not met. 
Similarly, the relevant section of the SI still asserts that "the parallel time trend shows that both participants and non-participants move parallel over time in their developing test scores. This crucial identifying restriction is checked and shows that this assumption is not violated" (p.46). It would be important to correct this and to write more clearly when interpreting their results (p.18 onwards) that (a) the parallel trends assumption does *not* hold, (b) that results may be biased in either direction, and (c) that the effects shown should therefore be interpreted with utmost caution and further research is needed to properly identify the causal effect of the remedial intervention. In my view it would be important to extend the timeline shown in Figures 1-3 in the response to the reviewer report (i.e. Figure 2 in the main text, and Figures S7.1-3 in the SI) to include the 2020/2021 mid-year test and the 2020/2021 end-of-year test. This would allow the reader to visually assess the substantive significance of both the pre-trend divergence and the treatment effect. Of course, this would mean that these figures should be renamed from showing only 'pre-trends' to showing 'over-time change in the performance of the comparison groups'. Relatedly, while the authors state that Figure 3 (in the response letter) shows "almost no diverging trends", this is of course a matter of interpretation, and showing how the performance of the comparison group continues to develop in the 2020/2021 mid-year test and the 2020/2021 end-of-year test would allow the reader to gauge the substantive size of both the divergence in the pre-trend and the post-intervention trend. The analyses of different program characteristics have improved as a result of the new approach of categorising and analysing these program characteristics. 
That said, here too, it would be important for the manuscript to highlight more clearly in the interpretation of these results (p.20-23) that the results shown constitute descriptive (as opposed to causal) evidence, that the shown associations may be confounded by unobserved heterogeneity, and that results should therefore be interpreted with caution. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean? ). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy . Reviewer #2: No ********** |
| Revision 3 |
|
Effective Remediation Programs for Vulnerable Students to Overcome Learning Loss PONE-D-24-17313R3 Dear Dr. Jacobs, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Jake Anders Academic Editor PLOS ONE Additional Editor Comments (optional): Thank you for engaging carefully with some very detailed reviewer comments. I hope you agree that the outcome has been a really valuable contribution to the literature on this topic. Well done on a great paper. Reviewers' comments: |
| Formally Accepted |
|
PONE-D-24-17313R3 PLOS ONE Dear Dr. Jacobs, I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team. At this stage, our production department will prepare your paper for publication. This includes ensuring the following: * All references, tables, and figures are properly cited * All relevant supporting information is included in the manuscript submission, * There are no issues that prevent the paper from being properly typeset You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps. Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Prof. Jake Anders Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.