Peer Review History
Original Submission: February 2, 2022
PONE-D-22-03288
Mind the gap: Distributed practice enhances performance in a MOBA game
PLOS ONE

Dear Dr. Vardal,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 15 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Rabiu Muazu Musa, PhD
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. In your Methods section, please include additional information about your dataset and ensure that you have included a statement specifying whether the collection and analysis method complied with the terms and conditions for the source of the data.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1:

## Summary

The manuscript under review aims to leverage large sample data from the videogame League of Legends to assess the influence of the distribution of practice through time on performance. The authors use two performance measures, KDA and GPM, which are common metrics across many similar games. They find that distributed practice leads to better performance after 100 games. Further, using various clustering tools to identify similar distributions of games across learning, they find that differences in the kinds of game distributions, while covering the same overall length of time, led to statistically significant but clinically insignificant differences in performance.

## Review

There are many things about the study that are laudable. First, the authors are at the forefront of leveraging large samples of game data to investigate cognition. This work is important because it encourages breadth in cognitive theory by extending relevant data from simple lab tasks to complex, real-world domains and performance spanning much larger learning trajectories. Second, the work is a model for careful scientific study of complex datasets such as theirs. As task and data complexity go up, the opportunity for mistakes also increases. Throughout, I found the analysis thoughtful and thorough: it was especially careful about testing possible confounds and artifacts and left the reader with a sense of confidence that the findings were robust (if small at times). Third, I found the paper, at all points, admirably well written and clear. Finally, I found the conclusions suitably cautious: despite having a sample 1,000 times larger than most studies, the authors resisted overselling their findings.

I have only one real worry. It is not likely to be the case, but as it impacts the main findings, I think it is worth ruling out. GPM and KDA will vary by position, with support positions getting lower numbers overall. It is also true, anecdotally, that players are encouraged to play support positions when they are new, so that their poor farming speed does not unduly impact their team's chance of winning. What if the extra performance of the distributed group is due to the fact that more of those players are transitioning to core roles than in the groups that mass their play into just a short period? The results would look like what you got, but be caused by differential position change rather than distributed practice. It should be easy to check the frequency of position in each group (a sketch of such a check is given below). If they are similar, then I think you are fine and the work is publishable. (Although, now that I think about it, a null effect should also be publishable, especially with a large n and a real-world complex task.)

I have some notes below on little things that stuck out to me during my reading, but they are minimal, and I think the work is publishable with very minor revisions, assuming my worry above is disproven. Well done, and thank you for a blessedly easy manuscript review.
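As an illustration of the kind of role-composition check suggested above, a minimal sketch follows. It assumes a hypothetical per-match table with `group` and `position` columns; the file name and column names are placeholders, not the authors' actual data format.

```python
# Sketch: compare role composition across practice groups (hypothetical data layout).
# 'group' marks the practice-distribution group (e.g. distributed vs. massed);
# 'position' is the role played in a match (TOP, JUNGLE, MID, ADC, SUPPORT).
import pandas as pd
from scipy.stats import chi2_contingency

matches = pd.read_csv("matches.csv")  # placeholder file name

# Counts of each position within each group, shown as row-wise proportions for inspection
position_counts = pd.crosstab(matches["group"], matches["position"])
print(position_counts.div(position_counts.sum(axis=1), axis=0))

# Chi-square test of independence: do the groups differ in role composition?
chi2, p, dof, _ = chi2_contingency(position_counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```

If the proportions are close and the test shows no meaningful difference, differential position change is unlikely to explain the distributed-practice effect.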
## Notes

As League of Legends developer Riot Games 309 keeps the MMR algorithm confidential, we normalised all values of MMR across the 310 data and analyses reported here.

> I'm not sure I understand why normalization is a solution to confidential algorithms.

332 of differences arising prior experience

> Typo.

Specifically, we expected new players to 346 suffer more losses against the relatively more experienced majority (unobserved in the 347 sample) towards the start of the season, where the matchmaking algorithm has begun 348 to calibrate for fair matches. This intuition is supported by the trajectory of loss 349 percentage, which descends to 50% as the average rating of the sample stabilises 350 (plotted together with MMR).

> This is a nice point, and shows the authors are thinking carefully about the nature of the game and of the nature of competitive online matches with matchmaking. Displaying some thorough data-analysis awareness here.

4.1 Optimising training for MOBAs

> The final advice for players in 4.1 (line 616) is not really useful. In paragraph 1 the authors themselves give both practical reasons and reasons of preference to ignore it. The second paragraph has literally nothing to do with the actual findings of the study, and so seems an awkward bit of advice and a poor fit for the concluding paragraph of the manuscript. While I do really like giving the game community some payback, this might be better cut. If you do cut it, make sure to remove the related sentence in the abstract as well.

Reviewer #2:

In the present work, the authors aim at studying the effect of practice distribution on performance in League of Legends, a well-known MOBA game, by extending previous work via exploratory data analysis and statistical analysis. They also claim to investigate, through the lens of machine learning, whether the timing of breaks influences performance in the game. Overall, the paper is well written (with few exceptions) and easy to follow. The authors do a good job of introducing the problem and the relevant literature, and of discussing some of the gaps in studying how the distribution of practice affects performance. However, there are some major shortcomings that make the present work not ready for publication.

1. In the contributions, the authors state that they aim at "generalising laboratory work to an ecologically valid motor skill environment". While this point only relates to the fact that League of Legends is used to analyse the distribution of practice and its relation to performance, it is not clear how this work generalises previous laboratory work in a different way than previous studies on games have done. In particular, the authors highlight how comparisons between laboratory studies and newer studies are particularly difficult. One specific reason mentioned is the very definition used for practice distribution, which in the first case is the time elapsed between trials, and in the second case is the time gap between the first and last game session. The authors argue that the conflation between practice distribution and frequency is a concern. However, they do use frequency in their very own analysis, thus limiting their contributions and making their application an extension of previous research on a different type of game. It would have been interesting to actually fill the gap between the two lines of research by comparing different definitions of practice distribution and discussing their generalisability to different contexts.

2. League of Legends is a particular kind of game, which entails team collaboration and role-based playing. Champions are a fundamental part of the game, having their own sets of skills and powers.
As the authors also note, champions have different roles, which influence the final performance metrics (used in the paper) that players achieve: a support role, for instance, entails fewer kills and less gold. However, this aspect is not considered in the analysis, where players are evaluated independently of the role and champion they were playing. While looking at each and every champion might have included too much noise in the analysis and led to inconclusive results, analysing performance separately by main team role would, in my opinion, have made the results stronger.

3. In contrast to the data that is available through the API, the authors have access to the MMR, which is another interesting metric for performance. However, the authors do not take this aspect into account in the main part of the manuscript. As they notice that the MMR has an opposite trend to the other performance metrics used, they quickly leave this aspect aside. However, this aspect also takes up a large part of the discussion at the end of the manuscript, where additional results (in the SI) are introduced. I would suggest, at the very least, introducing this discussion beforehand, as having it only at the very end is confusing for the reader.

4. One main concern in the current analysis is related to the use of only ranked games to compute the practice distribution. The authors highlight the fact that this can have an effect on the starting performance that is observed across players; however, it is also not clear how the results could be affected if players played other types of matches between ranked sessions.

5. Another weakness in the analysis is related to the samples used and their sizes. Not only do the authors end up not considering almost two thirds of the players (as they state in the discussion), but they are also not clear about the samples they use at each step of the analysis and their sizes. In particular, the authors subsample on the basis of the initial GPM and KDA and report a sample size of more than 52K in the first case but none in the second. Moreover, when discussing the results, the samples are reduced to a couple of thousand players and in one case to fewer than 400. However, when reporting values in Table 3, sample sizes seem to be higher. Also, when discussing the time gap (Section 3.1), they use a sample of more than 162K players. The selection process is obscure, and it should be clearly explained.

6. Connected to the previous point, the authors also select three ranges of days for practice, i.e. 1-15, 76-90, and 136-150. These ranges are not motivated anywhere. The authors do not provide any background information on the underlying distribution of days, and do not describe how they defined these ranges and why. Moreover, they are not consistent, as they always talk about days but display hours in Figure 3, for instance. I suggest the authors revise these points to make clear the process behind their data selection and sampling.

7. The discussion around the results in Figures 5 and 6 is not clear. The authors talk about clusters 1 and 4 being the extremes of a spacing spectrum; however, it seems that clusters 1 and 3 should be mentioned instead. Moreover, they describe Figure 6 as the temporal distribution of matches, and how clusters are characterised by different timing and intensity. However, this discussion seems related to the second subplot of Figure 5, as Figure 6 displays the performance metrics over time split across clusters.

8. Finally, the figures could be improved by using different line styles. At the moment, colour is the only element used to distinguish among lines that are close and overlapping. Moreover, even though Figure 3 displays the distributions along the two axes, it is still hard to understand the density of the points in the scatter plot. A better visualisation could make use of a heat-map, for instance, to clearly show the two-dimensional distribution of the data (a sketch of such a plot is given below).
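As a sketch only, one way to produce such a heat-map view is a 2D histogram; the variables below are synthetic placeholders standing in for whatever Figure 3 plots on its axes, not the authors' actual figure code or data.

```python
# Sketch: heat-map (2D histogram) as an alternative to a dense scatter plot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=20.0, size=50_000)  # placeholder x variable
y = rng.gamma(shape=2.0, scale=5.0, size=50_000)   # placeholder y variable

fig, ax = plt.subplots(figsize=(6, 4))
counts, xedges, yedges, image = ax.hist2d(x, y, bins=60, cmap="viridis")
fig.colorbar(image, ax=ax, label="Number of players")
ax.set_xlabel("x variable (placeholder)")
ax.set_ylabel("y variable (placeholder)")
plt.show()
```

Variants such as `ax.hexbin` or a logarithmic colour norm (`matplotlib.colors.LogNorm`) are common choices when the point density spans several orders of magnitude.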
Minor comment: I recommend language editing, as there are a few typos.

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-22-03288R1
Mind the gap: Distributed practice enhances performance in a MOBA game
PLOS ONE

Dear Dr. Vardal,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 28 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Rabiu Muazu Musa, PhD
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I have no further concerns, and believe the research is now suitable for publication. I thank the authors once again for an interesting read.

Reviewer #2: I thank the authors for addressing my concerns and providing, in particular, additional evidence of the effects of role playing on the group performance results. Overall, I believe that their revisions make the present work almost ready to be published. I have only two final (and minor) edits:

- When replying to my comment about the MMR, the authors also note that "Although [MMR] is somewhat dependent on match outcome, match outcome itself is dependent on many factors including the behaviour of teammates and opponents." I think the authors should actually add this observation to the discussion about MMR in the paper.
- In the edited text about ranked/unranked matches, the authors write "our results 2". Should this be Figure 2?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
Mind the gap: Distributed practice enhances performance in a MOBA game
PONE-D-22-03288R2

Dear Dr. Vardal,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Rabiu Muazu Musa, PhD
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
Formally Accepted
PONE-D-22-03288R2
Mind the gap: Distributed practice enhances performance in a MOBA game

Dear Dr. Vardal:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Rabiu Muazu Musa
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.