Peer Review History
Original Submission: October 25, 2021
PONE-D-21-34049
An empirical evaluation of Lex/Yacc and ANTLR parser generation tools
PLOS ONE

Dear Dr. Ortin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The material in the paper looks interesting, although the manuscript should be revised carefully to meet PLOS ONE publication criteria. The authors should put an effort into improving the paper by taking into account the comments of all the referees, in particular those by Reviewer 5, who raised important remarks related to the methodology used to design the experiments that evaluate the two tools, including the choice of the population sample used for such experiments. Please note that this reviewer suggested rejecting the paper; however, I want to give the authors a chance to improve the manuscript by addressing the comments raised by this particular reviewer, since I believe the work done is important and the comments raised are feasible to address. Please take all the comments carefully into account to bring the manuscript up to PLOS ONE standards before resubmitting it to the journal.

Please submit your revised manuscript by Jan 08 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Sergio Consoli
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please provide additional details regarding participant consent to collect personal data, including email addresses, names, or phone numbers. In the Methods section, please ensure that you have specified how consent was obtained and how the study met relevant personal data and privacy laws. If data were collected anonymously, please include this information.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript: "This work has been partially funded by the Spanish Department of Science, Innovation, and Universities: project RTI2018-099235-B-I00. The authors have also received funds from the University of Oviedo through its support to official research groups (GR-2011-0040)." We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: "This work has been partially funded by the Spanish Department of Science, Innovation, and Universities: project RTI2018-099235-B-I00. The authors have also received funds from the University of Oviedo through its support to official research groups (GR-2011-0040)." Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Partly
Reviewer #6: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: I Don't Know
Reviewer #5: Yes
Reviewer #6: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: No
Reviewer #5: No
Reviewer #6: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a very useful paper. Empirical studies comparing the usability of different software tools are much needed and relatively uncommon. When software tools are compared, the criteria are mostly concerned with performance metrics rather than usability. This paper would be especially useful for faculty who are designing courses that make use of parser generators, but it would also be useful for software engineers in general. The paper is easy to read. There were only a few minor typos. For example, on line 48 "related related" should probably be "related work". I did have one minor issue with the paper. The purpose of any course is for students to learn the concepts and to acquire the skill to make use of the concepts. Tools are only a means to the end, not the end itself. Unfortunately, the student performance evaluations (i.e., exam grades) combine evaluation on theory and on lab skill. It would have been useful if these two performance evaluations had been analyzed separately.

Reviewer #2: This research is of interest to researchers in language design and to instructors in software engineering and computer science. The specific question examined is fundamental to many aspects of practical programming. Generally the work meets best practice in empirical software engineering; however, there could be improvements to the way it is written up. I recommend use of the guidelines in: http://people.ucalgary.ca/~far/Lectures/SENG421/PDF/Guidelines.pdf

Reviewer #3: I'm glad to see research being done to help justify the move from established software to newer (and arguably better) alternatives.
Overall, I don't find any major issues with this paper; however, there are a few recommendations that I'd like to see addressed prior to its publishing:

1) I would suggest moving the related work section up towards the front of the paper and using it to help motivate why ANTLR is being considered in this study.

2) In the results section, I would like to see plots of the actual distribution of the data, especially if it can be overlapped to clearly show how different each group is, and the percentage of data that lies outside the distribution of the opposing group. Furthermore, I think that there should be a better explanation of what exactly "1% work completed" means. How does that translate into non-percentage units? Are we talking 1 additional function in the assignment? 10 lines of code? 1 step of a 20-step process? Including the actual numbers alongside the percentages would help to make the results a little more useful. If ANTLR allows students to get 10 more lines of code written during a lab, I might question whether or not it's worth converting my entire course over to using it - but if 10 more students complete the assignment using ANTLR over Lex/Yacc, then it's absolutely worth the switch.

Reviewer #4: This paper shows an empirical study on the performance of two groups of undergraduate students of software engineering implementing a compiler using Lex/Yacc and ANTLR, respectively. The authors measure students' completion rates and time spent for labs, attendance and pass/fail rates for exams, and opinions regarding the use of the tools. All data are shown in tables and figures, indicating the superiority of ANTLR over Lex/Yacc in teaching and helping students learn compiler construction more efficiently. Overall, I think this paper is very interesting and is focused on an important issue for educators. The authors have conducted extensive data collection and quantitative analysis to support the conclusion. One interesting finding in the paper is that the choice of tools for programming assignments affects students' performance in paper exams. A shallow cause could be that students using ANTLR had more hands-on experience, which allowed them to understand the theoretical parts better. Nevertheless, a deeper one probably involves some psychological factors, which I would appreciate the authors discussing. Another complaint is that the figures are not in place but at the end of the manuscript, which makes the reading experience a little unpleasant.

Reviewer #5: In this manuscript, the authors conduct an empirical study to compare two widely used parser generation tools: ANTLR and Lex/Yacc. Specifically, they design experiments to evaluate the two tools from the following aspects: language implementor productivity, simplicity, intuitiveness, and maintainability. The experiment results demonstrate that ANTLR yields better performance based on the measured features. From my point of view, the contribution of this work is twofold:

1. The authors design several metrics to measure the two parser generation tools. In particular, the metrics are defined based on the completion level of students' work and the additional time that the students needed to finish the labs. Also, students' opinions on the tools (collected through a questionnaire) are used for the analysis of the tools. Those metrics (features) can help us better understand the tools.

2. The experiments are conducted on a "large" scale, since there are more than 90 students in each group.
Also, the authors provide evidence of statistical significance when comparing the differences between the two parser generation tools. At the same time, I have two concerns about this work:

1. All the experiments are based on students from the "Programming Language Design and Implementation" course, and those students are all beginners with the tools (this is my assumption). In my opinion, we cannot say one tool is better than the other only based on the beginners' experience.

2. When measuring the simplicity, intuitiveness, and maintainability of the tools, only one question is used to measure each aspect of the tool. For example, "Q1: I have found it easy to use the tool(s) to generate the lexical and syntactic analyzers" is used to measure the simplicity of the tool. First, I feel the question design is too general/broad, so I am not sure the answers from students accurately measure the specific aspect of the tool. Second, I am not sure the students are measuring the tool on the same scale (especially when the students are not trained on how to rate the simplicity), since the concept of simplicity (as with the other features) differs between students. I am not convinced by the questionnaire design.

In summary, the authors design several features/metrics to compare two parser generation tools (ANTLR and Lex/Yacc) and find that ANTLR is better than Lex/Yacc based on the feedback of students from two independent groups. Even though the differences between the two tools are "significant" based on the experiment results, it is questionable whether the experiment/questionnaire is well designed (as the authors only collect the experience of beginners and the questions are too general to measure the specific aspect of a tool). In addition, one typo is found in line 48 on page 3: "the related related"?

Reviewer #6: Parsers are used in various software development scenarios, and two widely used parser generation tools are Lex/Yacc and ANTLR. Surprisingly, though ANTLR provides more features, Lex/Yacc is used more in university course settings. Different approaches exist for qualitative comparisons of the features provided by ANTLR and Lex/Yacc. However, no empirical study evaluates features such as language implementor productivity and tool simplicity, intuitiveness, and maintainability. This paper fills this gap with an empirical experiment with undergraduate students of a Software Engineering degree in a compiler design and implementation course. The students are divided into two random groups, implementing the same language with a different parser generator. The performance of both groups is statistically compared. The results show that the use of ANTLR increased pass rates and exam attendance percentages. It is amusing that ANTLR is used less than Lex/Yacc in university course settings. This study shows that ANTLR is only used in one of fourteen top universities for the compiler design and implementation course. This suggests the current status quo needs to change, as ANTLR has more features and is more accepted by the students in the empirical study, e.g., the students' satisfaction with ANTLR, the increased pass rates, and the better exam attendance. The authors comprehensively compared Lex/Yacc and ANTLR features such as parsing strategy. As with the results, the authors carefully designed a questionnaire with multiple questions for the students to evaluate the difference between the tools.
This paper also contains a variety of results, such as the percentage of students who completed different activities related to lexer and parser implementations. A minor point and a suggestion: in Table 1, the authors could also add top universities from the USNEWS ranking. Overall, this paper represents an original research study in which the data supports its conclusions. It is also well written in English. And to the best of our knowledge, it has not been published elsewhere before.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Kenneth Baclawski
Reviewer #2: Yes: Mike Holcombe
Reviewer #3: No
Reviewer #4: No
Reviewer #5: No
Reviewer #6: Yes: Sanchuan Chen

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
An empirical evaluation of Lex/Yacc and ANTLR parser generation tools
PONE-D-21-34049R1

Dear Dr. Ortin,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Sergio Consoli
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #5: All comments have been addressed
Reviewer #6: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #5: Yes
Reviewer #6: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #5: Yes
Reviewer #6: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #5: Yes
Reviewer #6: Yes

**********
5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #5: Yes
Reviewer #6: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: The main issues have been dealt with. The reference to the recommended standards for empirical software engineering research is addressed.

Reviewer #5: In the updated manuscript, the authors compare two widely used parser generation tools (ANTLR and Lex/Yacc) in an academic context, and demonstrate that one tool is better than the other in the use case of software engineering students. Compared to the previous version, the authors improved the manuscript in the following aspects:

1. The conclusion that ANTLR shows significant benefits compared to Lex/Yacc is supported by the empirical comparison results in the academic context. The experiments are conducted on a relatively large scale, and evidence of statistical significance is also provided. This addresses one of my concerns.

2. The authors give more information about the questionnaire design, and also utilize Krippendorff’s alpha coefficient to measure the reliability of the evaluation. As for the use of simple questions in the questionnaire, this might be the only option to collect more valid data points.

In summary, the authors did a great job of improving the manuscript. Those changes address my concerns from the previous review round.

Reviewer #6: I am delighted that the authors have resolved my minor concern from the last submission of this paper. Parsers are used in various software development scenarios, and two widely used parser generation tools are Lex/Yacc and ANTLR. Surprisingly, though ANTLR provides more features, Lex/Yacc is used more in university course settings. Different approaches exist for qualitative comparisons of the features provided by ANTLR and Lex/Yacc. However, no empirical study evaluates features such as language implementor productivity and tool simplicity, intuitiveness, and maintainability. This paper fills this gap with an empirical experiment with undergraduate students of a Software Engineering degree in a compiler design and implementation course. The students are divided into two random groups, implementing the same language with a different parser generator. The performance of both groups is statistically compared. The results show that the use of ANTLR increased pass rates and exam attendance percentages. It is amusing that ANTLR is used less than Lex/Yacc in university course settings. This study shows that ANTLR is only used in one of fourteen top universities for the compiler design and implementation course. This suggests the current status quo needs to change, as ANTLR has more features and is more accepted by the students in the empirical study, e.g., the students' satisfaction with ANTLR, the increased pass rates, and the better exam attendance. The authors comprehensively compared Lex/Yacc and ANTLR features such as parsing strategy.
As with the results, the authors carefully designed a questionnaire with multiple questions for the students to evaluate the difference between the tools. This paper also contains a variety of results, such as the percentage of students who completed different activities related to lexer and parser implementations. Overall, this paper represents an original research study in which the data supports its conclusions. It is also well written in English. And to the best of our knowledge, it has not been published elsewhere before.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Mike Holcombe
Reviewer #5: No
Reviewer #6: Yes: Sanchuan Chen
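Note on the reliability coefficient mentioned above: Reviewer #5 credits the revision with using Krippendorff's alpha to assess how consistently the students rated the tools in the questionnaire. For readers unfamiliar with the coefficient, the sketch below is an illustration only; it assumes nominal rating categories and no missing answers, and it is not the computation used in the manuscript.

    # Illustrative sketch of Krippendorff's alpha for nominal ratings with no
    # missing data (not taken from the manuscript). Rows are raters, columns
    # are the rated units (e.g., questionnaire items).
    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(ratings):
        n_units = len(ratings[0])
        coincidences = Counter()              # (c, k) -> coincidence weight o_ck
        for u in range(n_units):
            values = [row[u] for row in ratings]
            m = len(values)                   # ratings available for this unit
            for c, k in permutations(values, 2):
                coincidences[(c, k)] += 1.0 / (m - 1)
        totals = Counter()                    # n_c: marginal total per category
        for (c, _k), weight in coincidences.items():
            totals[c] += weight
        n = sum(totals.values())
        # Nominal metric: any pair of unequal categories counts as a disagreement.
        observed = sum(w for (c, k), w in coincidences.items() if c != k)
        expected = sum(totals[c] * totals[k]
                       for c in totals for k in totals if c != k) / (n - 1)
        return 1.0 - observed / expected

    # Example: three hypothetical raters answering the same five items.
    ratings = [
        [4, 4, 5, 3, 4],
        [4, 5, 5, 3, 4],
        [4, 4, 5, 2, 4],
    ]
    print(round(krippendorff_alpha_nominal(ratings), 3))

Values near 1 indicate strong agreement between raters; alpha of roughly 0.8 or above is commonly treated as acceptable reliability.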
Formally Accepted
PONE-D-21-34049R1
An empirical evaluation of Lex/Yacc and ANTLR parser generation tools

Dear Dr. Ortin:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Sergio Consoli
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.