Peer Review History
Original Submission: April 29, 2024
PONE-D-24-17175
Exploring YouTube content creators' perspectives on generative AI in language learning: Insights through opinion mining and sentiment analysis
PLOS ONE

Dear Dr. Kara Aydemir,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 14 2024, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions, see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Dilrukshi Gamage, Ph.D.
Academic Editor
PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. We note that Figure 4 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

   1. You may seek permission from the original copyright holder of Figure(s) [#] to publish the content specifically under the CC BY 4.0 license. We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: "I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form." Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: "Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year]."

   2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license, or if the copyright holder's requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license.
Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Additional Editor Comments:

Thank you for submitting this insightful paper. While the manuscript, "Exploring YouTube content creators' perspectives on generative AI in language learning: Insights through opinion mining and sentiment analysis," provides a valuable analysis of YouTube videos on generative AI in language learning, it requires significant revisions before it can be considered for publication, even though the reviewers recommended Minor Revisions. The reviewers may have missed major details, and the manuscript needs a more rigorous review.

Firstly, the introduction, while comprehensive, needs a broader discussion of generative AI models beyond ChatGPT and a clear justification for the exclusive focus on ChatGPT. The research questions, especially RQ1, lack clarity and require rephrasing to make the expectations more explicit.

The methods section, though detailed, leaves several critical points ambiguous. The role of video comments in the analysis is not clear, and there is confusion about whether these comments were used to reflect the perspectives of content creators or of general users. Additionally, the classification and sentiment analysis processes need more transparency. There is no mention of accuracy checks for the classification models used, and the absence of code in the supplementary materials undermines the reproducibility of the study.

The results section does not adequately relate the research questions to the outcomes. Figures 6 and 7, for example, are confusing due to the unclear units of analysis and the lack of detail about what is being measured. Moreover, the sentiment analysis lacks transparency regarding the models and libraries used, such as whether VADER or NLTK was employed. The manuscript should provide examples and include the code in the appendix to enhance transparency.

The discussion section is currently disorganized and fails to effectively highlight the impact of the study. It needs to be restructured to address each research question separately and clearly. The discussion should also include recommendations and limitations, which are currently missing.

To improve the paper, the authors should:
- Broaden the introduction to include various generative AI models and justify the focus on ChatGPT.
- Clarify and rephrase RQ1 to make the expectations explicit.
- Clearly explain the role of video comments and whether they reflect content creators' or general users' perspectives.
- Provide transparency in the classification and sentiment analysis processes, including accuracy checks and the inclusion of code.
- Relate the results directly to the research questions and clarify the units of analysis in the figures.
- Organize the discussion to address each research question separately, include recommendations and limitations, and end with a strong conclusion.

Given the extent of these necessary revisions, I recommend that this paper undergo a major revision cycle.
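To illustrate the kind of accuracy check and inter-rater agreement reporting requested above, the short Python sketch below compares classifier output against a small manually coded sample using scikit-learn. It is a hedged illustration only: the labels, category names, and sample size are hypothetical placeholders, not data or code from the study under review.

```python
# Illustrative sketch only: the labels below are hypothetical and do not come
# from the manuscript's data. It shows one way to report how well an automatic
# classifier agrees with a human coder on the same items.
from sklearn.metrics import accuracy_score, cohen_kappa_score, classification_report

# Skill categories assigned by the model vs. by a human coder for ten transcripts.
model_labels = ["speaking", "writing", "writing", "listening", "reading",
                "speaking", "listening", "writing", "reading", "speaking"]
human_labels = ["speaking", "writing", "reading", "listening", "reading",
                "speaking", "listening", "writing", "reading", "listening"]

print("Accuracy:", accuracy_score(human_labels, model_labels))
print("Cohen's kappa:", cohen_kappa_score(human_labels, model_labels))
print(classification_report(human_labels, model_labels, zero_division=0))
```

Reporting such figures for a held-out, human-coded subset would make the classification step auditable and speak directly to the reproducibility concern.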
Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper delves into the perspectives of YouTube content creators regarding the utilization of generative AI in language learning. Its primary merit is the thorough exploration of these creators' views on the topic, achieved through innovative analysis that emphasizes practical approaches to language learning. Nevertheless, the discussion of how generative AI has enhanced language learning could have been more detailed and explicit.

Reviewer #2: Strengths:
- The introduction provides a thorough background on the importance of technology in language learning, highlighting the role of generative AI.
- It effectively contextualizes the study within the existing literature, mentioning various technologies previously used in language learning.
- The rationale for focusing on YouTube content creators is well-explained, emphasizing the platform's reach and influence.

Some suggestions to improve the manuscript:
- The introduction could benefit from a more explicit statement of the research gap. While the context is well-established, the specific gap this study addresses could be clearer.
- The research questions could be presented more prominently to outline the study's objectives clearly.
- The manual revision of model results by researchers introduces potential bias (lines 197-198). Discussing measures taken to minimize this bias, or reporting an inter-rater reliability metric, would strengthen the methodology.

Reviewer #3: Thank you for the insightful paper. I really enjoyed reading it. Overall, the paper is well-written, easy to follow, and the key objectives, methods, and discussions are clearly presented and easy to understand. Perspectives on GenAI are trending in the research domain, and the authors of this paper examine YouTubers' perspectives on language learning by analyzing their video content. In doing so, the authors specifically used the ChatGPT model while incorporating NLP methods. Although there is a growing body of research and interest, and the authors have highlighted valuable insights, let me point out some areas that need to be addressed.

The introduction is clear and effectively sets the stage for the research. However, some parts of the introduction are dense and difficult to follow. It would have been beneficial to specify a broader range of generative AI models beyond ChatGPT and to justify why only ChatGPT was chosen for this study. Although the research questions are clear, it would be more effective if you could elaborate on the expectations of RQ1. Initially, it was difficult to grasp, and it seems to need some rephrasing: "RQ1: How are YouTube video contents about the use of generative AI in language learning related to language skills?" When you ask about the relationship with language skills, what were the expectations? This is not clear in the introduction.

The methods are clear, and it seems a rigorous process was followed to filter the specific videos. However, I am confused about whether you used the comments of the videos for analysis. The statement "After accessing and filtering videos, video comments were collected using a relevant API" suggests that comments were used. Since comments are from general users and not from creators, they may not provide the perspectives outlined in the paper. On the other hand, I assume you are using classification to answer RQ1, but the language skills of reading, writing, listening, and speaking were never introduced earlier to clarify whether that was the expected relationship. Additionally, when you used the model, did you check the accuracy levels? There was no indication of your code in the OSF repository, although the Excel sheet already categorizes the data. To increase confidence in your findings, please provide the code you used for the task.

You performed topic modeling using LDA. Was the purpose of this to understand the topical areas of the YouTube transcripts, or were comments included as well? What is the transparency of the sentiment model? Which library was used: VADER or NLTK? Check the code in the prompt and be open about everything. Specifically, if you can analyze the sentiment into categories such as Optimistic, Distrustful, Mixed, Analytical, Ethical, Biased, Futuristic, and Neutral, provide evidence of this robustness. Show examples and include the code in the appendix. The filtration process could have been explained a bit more to give a clearer idea. Which API was used to collect video comments? What was the reason for choosing the defined prompts? Were any previous evaluations done on them?

In the results section, it would have been helpful to relate the research questions to the outcomes. There are a few issues to address. When you inserted Figures 6 and 7, the unit of analysis for the sentiment is confusing. Did you analyze sentiment at the YouTube video level? What is the y-axis representing? At some points, you mentioned comment sentiment, but it was not clear in the methods how you would be using the content: at the transcript sentence level, the paragraph level, or the per-video transcript level. If you used comments, what was the unit of analysis? Per video? For Figure 12, which shows average sentiment scores, what specifically is being averaged? Did you take all the transcripts related to listening and average them, and if so, over what denominator? Please be more specific in your descriptions. Additionally, none of these details were explicitly mentioned in the methods.

The discussion section needs to be organized to highlight each research question in subparagraphs. Currently, everything is on one plane, making it difficult to distinguish the impact of the study. Add the recommendations and limitations under the Discussion section and finish with the conclusion. I believe that if you can address the mentioned issues, the paper has the potential for the next round of review.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: Yes: Naveen Periyasamy Rajan

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
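Reviewer #3's questions about the sentiment library (VADER or NLTK) and the unit of analysis could be answered with a short, explicit description of the scoring step. The sketch below is an illustration only, not the authors' actual pipeline: it assumes NLTK's VADER analyzer, scores each transcript sentence, and averages the compound scores per video; the video IDs and transcripts are hypothetical placeholders.

```python
# Illustrative sketch only: per-video sentiment with NLTK's VADER.
# The video IDs and transcripts are hypothetical; this is not the study's code.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from nltk.tokenize import sent_tokenize

# "punkt_tab" is the sentence-tokenizer resource used by newer NLTK releases.
for pkg in ("vader_lexicon", "punkt", "punkt_tab"):
    nltk.download(pkg, quiet=True)

analyzer = SentimentIntensityAnalyzer()

# Unit of analysis: one video; each transcript is split into sentences.
transcripts = {
    "video_001": "ChatGPT corrected my essays quickly. The feedback felt generic at times.",
    "video_002": "Speaking practice with the AI felt natural. It kept me motivated to continue.",
}

for video_id, transcript in transcripts.items():
    sentences = sent_tokenize(transcript)
    # VADER's compound score ranges from -1 (most negative) to +1 (most positive).
    sentence_scores = [analyzer.polarity_scores(s)["compound"] for s in sentences]
    video_score = sum(sentence_scores) / len(sentence_scores)  # sentence scores averaged per video
    print(f"{video_id}: mean compound sentiment = {video_score:+.3f}")
```

Stating the library, the unit of analysis (sentence, transcript, or comment), and the averaging denominator in this explicit form, and depositing the script alongside the data, would address the transparency points raised above.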
Revision 1
Exploring YouTube content creators' perspectives on generative AI in language learning: Insights through opinion mining and sentiment analysis
PONE-D-24-17175R1

Dear Dr. Kara Aydemir,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Dilrukshi Gamage, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Authors have adequately addressed the comments and improved the paper significantly. Thus, I believe it is ready to be accepted.
Formally Accepted
PONE-D-24-17175R1
PLOS ONE

Dear Dr. Kara Aydemir,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited.
* All relevant supporting information is included in the manuscript submission.
* There are no issues that prevent the paper from being properly typeset.

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Dilrukshi Gamage
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.