Peer Review History
| Original Submission: August 15, 2023 |
|---|
|
PONE-D-23-25711
ChatGPT-generated help produces learning gains equivalent to human tutors on mathematics skills
PLOS ONE

Dear Dr. Pardos,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I have secured three expert reviews of the manuscript. I also carefully reviewed the manuscript myself. As you will see from their reviews below, all reviewers see the work as timely and the contribution theoretically and practically relevant. I agree. However, as you will also see in the reviews below, all reviewers expressed substantial concerns about the current version of the manuscript and requested extensive changes.

In summary, Reviewers 1 and 2 have questions about the methodology and the terms used and suggest extending the descriptions; for example, it is unclear what exactly the learning task was. Reviewer 1 points out that, as presented, it would be impossible to replicate the study. Along those lines, Reviewers 2 and 3 add that the exact prompts used and the steps to create them should be included in the manuscript. Additionally, Reviewers 2 and 3 suggest more clearly articulating how the current work is situated in the current literature and what its contribution is (note that PLOS ONE does not evaluate manuscripts for novelty, and being novel is neither a requirement nor a contribution). Finally, Reviewer 3 also raises concerns about the attrition levels in your study (having run many MTurk studies myself, I share this concern; the rates seem high even for MTurk) and suggests using a different statistical approach (e.g., regression, which is often robust to non-normality).
I would also add that reporting a justification for the sample size used and the power expected/achieved, as well as effect size measures, is standard practice in the field.

Given this, I am asking that you revise the manuscript. Please note that I cannot guarantee acceptance of a revised version of the manuscript, and I will seek expert reviewer opinions on your resubmission if you choose to submit one. A successful resubmission should address the points above as well as all reviewer comments.

Please submit your revised manuscript by Dec 01 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Paulo F. Carvalho
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thanks for the authors' timely efforts in exploring the topic of ChatGPT-assisted education applications. This paper highlights how hints generated by ChatGPT can enhance learning gains, which is interesting. Below are my comments for your consideration:

Abstract:
• Consider adding a sentence or two at the start of the abstract to briefly introduce the concept of "authoring."
• For the "3x4 study design", it would be beneficial to clarify the conditions or factors. Are the three conditions ChatGPT, human tutor, and no-help control, and do the four conditions refer to distinct mathematics subjects? If so, kindly specify this in the paper.

Introduction:
• The motivation of this research is unclear. What is the motivation to answer the proposed two research questions? The first and second paragraphs in the Introduction basically describe LLMs, and then the third paragraph describes the experimental process. Why is it worthwhile to answer the two research questions? Additionally, what is the connection between RQ1 and RQ2?
• The term "tutor authoring tools" lacks a clear definition or description in the introduction. Given that PLOS ONE is a multidisciplinary journal, it's essential to define such terms for broader readers.
• Please elaborate on the "hallucination mitigation technique" in the introduction.

Related Work:
• I feel the Related Work section reads more like a background on automatic hint generation and on ChatGPT's development and early applications. It would be good if the authors could discuss the recent progress of ChatGPT in education from the EDM, AIED, LAK, L@S, etc. communities, and the research gap related to the submitted work.
• "Nascent works have conducted offline evaluation of GPT-3 [16]": why did the authors discuss the evaluation of GPT-3? There is a body of research evaluating the capability of ChatGPT (GPT-3.5) [1,2,3] in the educational field.
Also, the main focus of this paper is ChatGPT.

[1] Wang, R. E., & Demszky, D. (2023). Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction. arXiv preprint arXiv:2306.03090.
[2] Dai, W., Lin, J., Jin, F., Li, T., Tsai, Y. S., Gasevic, D., & Chen, G. (2023). Can large language models provide feedback to students? A case study on ChatGPT.
[3] Pankiewicz, M., & Baker, R. S. (2023). Large Language Models (GPT) for automating feedback on programming assignments. arXiv preprint arXiv:2307.00150.

Method: The experimental design and data analysis appear thorough. However, can the authors justify the choice of their prompting strategies? Please also put the details of the prompting strategies in the paper, which would greatly help researchers from the community to repeat this study.

Results: How did the authors determine the quality (low and high) of hints? By human expert? Using any evaluation rubric? If the quality of hints is determined by correctness (incorrect and correct), why not just say incorrect and correct? What is the implication/practicality of the findings from RQ1?

Typos: There are several instances of "P<<0.001" throughout the paper. Do the authors mean "p<0.001"? On page 9/14, the correct phrasing should be "necessary conditions have been met."

Limitation: With the release of GPT-4 in early March 2023 and the experiment date of the submitted study in February, would GPT-4 provide different or enhanced results?

Reviewer #2: The study addressed the hot topic of ChatGPT, which is timely and can be added to the literature. However, the following concerns have to be resolved.

1. Line 2: Remove the term "first" from "first efficacy evaluation" since it is difficult to ascertain if this study is truly the first in this rapidly evolving research field.

2. The term "human tutor" is unclear.
In Lines 145 to 154, the authors discuss "Human Tutor Hint Generation" in the OATutor system, but it is unclear if "human tutor" refers to this system or an actual person. Clarification is needed, and if the reference is to a real person, their contribution and the quality of their hints should be outlined.

3. In the Experimental design section (Lines 223 to 240), details about student activities in each research group should be included to provide a clear understanding of what happened in each group.

4. Specify the version of GPT used in the experimental design to enhance clarity and completeness.

5. Address the missing analysis regarding RQ1 (Lines 259 to 269). How was the quality of ChatGPT hints rated, and what was the interrater reliability? Explain the absence of a similar evaluation for human tutor hints.

6. Clarify the activities or interventions implemented in the control group and explain the unusual gains and losses observed in terms of elementary math, college math, and statistics for the control group students as presented in Table 2.

7. I suggest including p-values in the table for informative purposes to enhance the clarity of the presented results.

Reviewer #3: The authors conducted a between-subjects study to evaluate whether ChatGPT can generate hints that lead to learning gains compared to existing human-generated hints and a no-hint condition. Participants included Mechanical Turk workers.

Motivation and Related Work: I think the authors are addressing a timely and important problem with widespread implications for researchers and practitioners. Generative AI is incredibly popular, and we as a community need a better understanding of its capabilities and limitations. I believe the authors did a reasonable job of highlighting this importance, but it would be useful if the authors could better articulate why their specific study is necessary. Specifically, how does this work relate to other recent similar studies by Arto Hellas and others?
Relatedly, the authors would benefit from restructuring their related work. The authors have an entire section dedicated to LLMs but omit a lot of work related to LLMs being applied in educational settings. The authors do include some of the most directly relevant related work about generating explanations, but the paper would be strengthened by including additional work such as:

- Leinonen, Juho, et al. "Using large language models to enhance programming error messages." Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1. 2023.
- Hellas, Arto, et al. "Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests." arXiv preprint arXiv:2306.05715 (2023).

Methods: A between-subjects study is appropriate in this context given the potential for learning effects. The inclusion of a no-help condition provides a useful contrast to better contextualize the main comparisons between hint types. The authors do a good job of justifying why they chose the domain and specific problems that they used in the study. However, there are multiple shortcomings in the current methodology. I enumerate my concerns below:

1. Prompting: It was unclear what prompt was used in this study. The authors write extensively about how LLMs work in the "Model" section, but it would be more useful to focus on how the models were prompted. Given the significant impact that prompting has on model performance, it would be useful to see how performance varied across multiple prompts, or at least there should be a solid justification for why the prompt was chosen.

2. Sampling: Excluding 30.4% of participants is a large amount. The authors claim that these rates are consistent with prior work, but that prior work (e.g., Simon and Walker) conducted a multi-day study where attrition is expected to be higher. Given that this was a short 20-minute study done in one session, this connection to prior work is a false equivalence.
The authors should provide better justification or revise recruitment.

3. Sampling Bias: Another concern around sampling is the variability in pre-test performance. The authors had a good idea to exclude participants who didn't have a high school background to partially account for differences, but the fact that some participants might not have taken algebra (or its prerequisites) in decades introduces some confounds. This is one of many confounds related to working with crowd workers in this learning context. The authors need to do a better job of justifying why crowd workers were used instead of students in an algebra class.

4. Questions: Providing more details about the pre- and post-test questions, as well as the learning task, would enhance the method section. How did the authors ensure that participants engaged with the questions and hints?

5. Hints: Describing whether incorrect ChatGPT hints were shown, along with details on hint differences (e.g., length, content, structure) between conditions, is necessary for interpreting learning gains.

6. Hypothesis Testing: A more robust statistical model considering multiple comparisons and covariates related to hints and participants would strengthen the analysis. Time-on-task should not be evaluated with a separate model but should be included as a variable. The model should ideally also include math subject rather than computing means across conditions directly.

7. Learning Gains: The disparity in pretest scores among conditions should be addressed, considering the potential influence of the ceiling effect. Given the 6% difference in learning gain, could that be partially explained by ChatGPT participants having performed 10% worse on average in the pre-test?

While not influencing my review, a qualitative analysis of hints could provide additional context. What other factors besides correctness (e.g., length, content, structure) could differentiate between hint types and have affected learning gains?
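[Editorial illustration: the single-model analysis suggested in points 6 and 7 above could take a form like the sketch below. This is not the authors' actual analysis; the variable names, factor levels, and simulated data are all hypothetical, chosen only to show condition and subject entering one regression with their interaction and a pre-test covariate.]

```python
# Illustrative sketch only: simulated data, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "condition": rng.choice(["chatgpt", "human_tutor", "control"], size=n),
    "subject": rng.choice(
        ["elementary", "intermediate", "college", "statistics"], size=n),
    "pretest": rng.uniform(0.0, 1.0, size=n),
})
# Simulated gains: a small boost for either hint condition, regression
# toward the mean via a negative pre-test slope, plus noise.
df["gain"] = (
    0.10
    + 0.05 * (df["condition"] != "control")
    - 0.20 * df["pretest"]
    + rng.normal(0.0, 0.10, size=n)
)

# A single model yields condition and subject main effects, their
# interaction, and a pre-test covariate, rather than separate
# per-condition comparisons of means.
model = smf.ols("gain ~ C(condition) * C(subject) + pretest", data=df).fit()
print(model.summary())
```

Fitting one model of this shape also addresses the pre-test imbalance raised in point 7, since the pre-test covariate absorbs baseline differences between conditions.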
Finally, I would like to commend the authors for compensating their participants fairly for their participation.

Overall: The authors address an important topic, but methodological issues impact validity. Challenges with sampling and ecological validity, missing study design details, and statistical analysis issues raise uncertainties about the results. Addressing these concerns could significantly enhance the paper's impact, making it more valuable for researchers and practitioners, but I think these changes are outside the scope of a minor or major review cycle.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 1 |
|
PONE-D-23-25711R1
ChatGPT-generated help produces learning gains equivalent to human tutors on mathematics skills
PLOS ONE

Dear Dr. Pardos,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I returned the manuscript to the same three expert reviewers who evaluated the original submission. As evident from their comprehensive reviews, Reviewer 1 and Reviewer 2 are satisfied with the current version of the manuscript. However, Reviewer 3 has lingering concerns. Please note that Reviewer 3's review might be an attachment to this email. You can also retrieve it by logging in to the Editorial Manager system.

After a thorough evaluation of the manuscript, I find myself aligning with Reviewer 3's perspective. While I understand the frustration that comes with lengthy review processes, I believe that the issues raised by Reviewer 3 are rectifiable and warrant your attention. If you choose to revise the manuscript, please ensure that all reviewer comments are addressed. Specifically, the methodology section should be detailed enough and include all necessary information to allow for easy replication of the study. Inter-rater reliability (IRR) must be performed; ease of rating does not equate to consistency, and we must avoid results being influenced by the individual who conducted the rating.

Furthermore, the statistical approach needs improvement. My primary concern is the mismatch between your statistical approach and your experimental design. The experiment involved random assignment to different subjects and conditions, implying main effects and interactions that were not analyzed. If these were not of interest, then the rationale behind the study design becomes unclear.
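[Editorial illustration: the inter-rater reliability requested above is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal self-contained sketch follows; the two raters and the ten hypothetical hint labels are invented for illustration and are not data from the study.]

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical correctness labels assigned by two raters to the same ten hints.
rater_1 = ["correct"] * 7 + ["incorrect"] * 3
rater_2 = ["correct"] * 6 + ["incorrect"] * 4
print(round(cohen_kappa(rater_1, rater_2), 3))  # → 0.783
```

Reporting such a statistic alongside the hint-quality ratings would address the concern that results depend on the individual who conducted the rating.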
Using a statistical analysis that is not suited to the experimental design could increase the risk of Type I error, which is a more significant concern than p-hacking in this case, especially since the study was not preregistered. Relatedly, a complete characterization of the power achieved and of the impact of the high attrition rate on your conclusions is necessary.

On a minor note, I do not understand what the p-values in Table 2 refer to. Since there are 3 levels in the variable condition, what pair is that p-value referring to, and what measure are you comparing?

If you decide to resubmit your manuscript, which I strongly encourage, I will not return it to the reviewers. I believe Reviewer 3's points are clear and have been reiterated multiple times. I aim to expedite the decision-making process upon your resubmission.

Please submit your revised manuscript by Apr 07 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Paulo F. Carvalho
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper is of high quality and, in my opinion, ready for publication. It provides valuable insights and should be a meaningful addition to the literature.

Reviewer #2: My concerns appear to be addressed.
I do not have additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.

Reviewer #3: I was getting error messages related to "Minimum Character Count Not Met" when I submitted my full 2,900-character review. It is attached as review.txt.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 2 |
|
ChatGPT-generated help produces learning gains equivalent to human tutor-authored help on mathematics skills
PONE-D-23-25711R2

Dear Dr. Pardos,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Paulo F. Carvalho
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments: |
| Formally Accepted |
|
PONE-D-23-25711R2
PLOS ONE

Dear Dr. Pardos,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Paulo F. Carvalho
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.