Peer Review History
Original Submission (March 8, 2021)
PONE-D-21-07619
The Hot Hand in the Wild
PLOS ONE

Dear Dr. Pelechrinis,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

You will find below (also as an attachment) the reports of two reviewers who are very familiar with the topic. As you will see, they see promise in the paper and make a number of comments. Both of them ask for more data; as they note, if you feel that you cannot analyze more data, you need to explain the reason. Please note that I will send the revised paper back to the same reviewers.

Please submit your revised manuscript by Jul 01 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Pablo Brañas-Garza, PhD Economics
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please modify the title to ensure that it is meeting PLOS' guidelines (https://journals.plos.org/plosone/s/submission-guidelines#loc-title). In particular, the title should be "specific, descriptive, concise, and comprehensible to readers outside the field" and in this case we feel that it is not informative and specific about your study's scope and methodology.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper tests the "hot hand" effect using a data set of NBA shots but, unlike previous studies, relaxes the assumption that a player's shots are identically distributed. Instead, the paper uses information about each shot, such as distance to the basket, distance to the closest defender, touch time prior to shooting, or shot type, to construct a probability distribution for the chances that the current shot will be successful (a "make"). The authors then compare this probability against the observed probability, showing that the hot hand effect exists.

Main comments

1. The paper considers only two NBA seasons. Are the results robust to data from more seasons, or are they stronger in some seasons and weaker in others? Given the applied nature of this paper, it would help if the authors showed that the results hold across multiple seasons.

2. From my interpretation, the authors calculate the probability distribution of a given shot being successful using the list of information in the bullet points of page 3 for each shot. What is a bit unclear is whether the probability of each shot being successful is drawn from this probability distribution (breaking the identically-distributed assumption from other papers), whether they just use this probability to compare against the observed probability that the shot was successful, or whether the authors do both. Please highlight which of these points the paper makes, as that will help differentiate it from the literature and, importantly, it will also help you convey the results of the paper to researchers in other fields.

3. It would be interesting if the authors could test their model on a free throw data set. For a given player, his shots may not be identically distributed, with the probability of a "make" on each shot being drawn from a different distribution; alternatively, the conditional probability of a "make" on shot k may be a function of the number of previous "makes" in the free throw contest.

4. In the last paragraphs of page 5, you mention that you also condition the probability of a make on whether the player made the last two shots, showing that your results are qualitatively unchanged (although the intensity of the hot hand effect is diminished). It would be interesting, as a robustness check, to condition this probability on a longer history of previous makes as well (see the sketch below). Besides being a standard robustness check, it could help the reader understand how persistent the hot hand effect is once a player starts experiencing it.
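For concreteness, the longer-history conditioning requested above can be computed directly from a 0/1 shot sequence. The sketch below is purely illustrative (synthetic i.i.d. data and a hypothetical function name, not the authors' code or data):

```python
import numpy as np

def hit_rate_after_streak(shots, k):
    """Empirical P(make | the k immediately preceding shots were all makes).

    shots: 1-D array of 0/1 outcomes in chronological order.
    Conditions on *at least* k consecutive makes before the current shot.
    Returns (conditional hit rate, number of qualifying shots).
    """
    shots = np.asarray(shots)
    on_streak = np.array([shots[i - k:i].all() for i in range(k, len(shots))])
    following = shots[k:][on_streak]
    if following.size == 0:
        return float("nan"), 0
    return following.mean(), following.size

# Illustrative only: a synthetic i.i.d. 46% shooter over 1000 shots.
rng = np.random.default_rng(0)
seq = (rng.random(1000) < 0.46).astype(int)
for k in (1, 2, 3, 4):
    rate, n = hit_rate_after_streak(seq, k)
    print(f"k={k}: P(M | {k} consecutive makes) = {rate:.3f} (n={n})")
```

Conditioning on at least k consecutive makes, rather than exactly k, keeps more qualifying shots and therefore more powered tests, a distinction Reviewer #2 raises in the next round.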
5. In the Discussion section, you mention several questions in the second paragraph without offering answers, or educated guesses, based on your results. From your findings, one could interpret that altering some of the independent variables listed in the bullet points of page 3 will affect the cumulative distribution function from which a player draws his probability of success on his next shot, ultimately affecting the emergence of the hot hand effect. If this interpretation is correct, then the authors should provide a clearer description of which variables listed in the bullet points of page 3 have the largest effect on the cumulative distribution function they construct. A clear understanding of these effects would provide more concrete policy implications, namely, which variables a player (or a coach) should affect to increase the chances of makes in each minute of a game.

Minor comments

1. Please define "relu" on page 3.

2. Please rewrite the paragraph at the bottom of page 3, as it's rather unclear.

3. Vector P is described on page 4 as "the vector with the shot make probabilities for each trial in the sequence". There are other, quite long, descriptions of vector P in other sections of the paper. To simplify the notation, and clarify the intuition behind this vector, the authors could use notation from repeated games, calling P the "shot history."

4. The sentences starting at "Furthermore, from a technical standpoint, an ordinary least squares regression…" until "…estimation of the corresponding p-values" on page 7 are quite unclear. Please rewrite them to improve clarity. Are you referring to other papers in the literature using ordinary least squares in their regressions (that's what I think you mean), or did you use ordinary least squares at some point in your analysis?

Reviewer #2: The authors use two full seasons of shot outcome and shot situation game data from the SportVU optical tracking system. They use the first year to train a model of shot probability for different types of shots taken throughout the course of a game, for each particular player, accounting for things like distance to the basket and to the closest defender. Then they use these estimated shot probabilities to simulate shot sequences for each player in the second year, based on the types of shots they take, out of sample, and use these simulated datasets to construct null distributions for their statistical tests of shooting performance in the second year. The paper has promise, but needs a bit more work, as I outline below.

Major Comments:

1. The authors' analysis is limited to two seasons of NBA data, which results in shot sample sizes for individual players that the authors state are too small to test for shooting performance on streak lengths longer than two. They offer no explanation for why they did not consider more years of data. Using more data would be nice for a number of reasons. For one, the tests would be more powered on the individual level. Second, they could consider performance on streaks of length three, which is fairly standard in the literature. Third, they could consider more individual shooters than the 21 that they currently consider (they should explain how they selected the 1000-shot inclusion criterion, and the sensitivity of their results to it). This would make their tests more robust, and the selection of shooters would be more representative of the typical shooter in the league. I hope that the authors can obtain more data and extend their analysis to the larger set of data.

2. As mentioned above, the authors train their model of hit probability for each shot type for each player using one season (the first), then use this to build null distributions for testing the shot performance of the players in the second season. I understand that this may be standard procedure for "hold-out" out-of-sample testing, but I don't understand why the authors do not perform robustness checks. For example, cross-validation makes sense to me here, starting with instead using the second year to train the model, then testing on the first year of shooting performance. Also, is it overly problematic to train the model on both years of the data and then test it on performance from both periods (at least as a robustness check)? In principle, this would seem to allow for a more apples-to-apples analysis, given that the model would be better calibrated to the "true shot probabilities." Also, this would allow more shot data to be used in tests on performance. Though as mentioned in comment 1, I hope the authors can obtain and analyze more data.
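The null-distribution construction summarized at the start of this review can be sketched in a few lines. This is a generic illustration of a heterogeneous Bernoulli null for P(M|M), with placeholder inputs, not the authors' actual pipeline:

```python
import numpy as np

def null_p_m_given_m(shot_probs, n_sims=10_000, seed=0):
    """Null distribution of P(M|M) when each shot i is an independent
    Bernoulli(p_i) trial with its own model-estimated make probability."""
    rng = np.random.default_rng(seed)
    p = np.asarray(shot_probs)
    sims = []
    for _ in range(n_sims):
        seq = rng.random(p.size) < p        # one simulated shot sequence
        after_make = seq[1:][seq[:-1]]      # outcomes right after a make
        if after_make.size:
            sims.append(after_make.mean())
    return np.array(sims)

# Placeholder inputs: per-shot probabilities would come from the fitted
# shot model, and `observed` from the player's real outcome sequence.
# A full analysis would also restart streaks at game boundaries.
probs = np.random.default_rng(1).uniform(0.35, 0.60, size=1200)
null = null_p_m_given_m(probs)
observed = 0.49
print(f"one-sided p-value = {(null >= observed).mean():.3f}")
```

Here each shot keeps its own model-estimated make probability, so the null preserves shot-difficulty variation while removing any dependence between consecutive shots.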
Other Comments:

1. The authors quickly mention that if they had instead run a permutation test on shooting performance when on streaks of hits, permuting at the game x individual level, then they would have found much less evidence of streak shooting. This is a bit misleading in the sense that permuting at the game level eliminates the possibility of detecting any hot hand that initiates between (rather than within) games, whereas the authors' primary analysis does not. Thus, this is a bit of a stacked comparison. In Miller and Sanjurjo's working papers on controlled shooting experiments (R&R at Review of Economics and Statistics) and the NBA Three Point Contest (forthcoming at European Economic Review), they perform robustness checks in which they consider more granular permutation strata, e.g. for contest year, shooting round, ball on rack within round, and so on. Stratifying at the "contest x round" level (which would be the most similar to permuting at the game level in the authors' work) tends to reduce statistical significance, for the reasons they discuss: it desensitizes tests both to a hot hand that activates between rounds and to other systematic changes in shooting behavior between rounds not due to the hot hand. In this sense it is conservative to permute at the more granular levels. The authors should qualify their discussion accordingly.
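A game-stratified permutation test of the kind described in this comment might look as follows; this is an illustrative sketch with assumed inputs, not any paper's implementation:

```python
import numpy as np

def game_stratified_permutation_test(outcomes, game_ids, n_perm=10_000, seed=0):
    """Permutation test for P(M|M), shuffling outcomes within each game.

    Holding per-game make totals fixed desensitizes the test to any hot
    hand (or other systematic change) that operates between games, which
    is the conservative direction discussed above.
    """
    rng = np.random.default_rng(seed)
    outcomes = np.asarray(outcomes)
    game_ids = np.asarray(game_ids)

    def p_m_given_m(x):
        # Count only consecutive shot pairs taken within the same game.
        prev_make = x[:-1].astype(bool) & (game_ids[1:] == game_ids[:-1])
        return x[1:][prev_make].mean()

    observed = p_m_given_m(outcomes)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = outcomes.copy()
        for g in np.unique(game_ids):
            idx = np.flatnonzero(game_ids == g)
            shuffled[idx] = rng.permutation(shuffled[idx])
        null[i] = p_m_given_m(shuffled)
    return observed, (null >= observed).mean()  # one-sided p-value
```

Because makes are reshuffled only within games, any between-game variation, hot hand or otherwise, is held fixed under the null.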
2. It seems it would be worth adding a bit of discussion on the variables the authors use vs. those used by Bocskocsky et al., and Rao before them, and explaining why there are differences, if there are.

3. There are a few things the authors can clean up: (i) the discussion of base rate vs. p(M|M)_perm vs. p(M|M)_data can be written better; as is, it is a bit confusing, (ii) the discussion on the stability of effect size for streaks of length one vs. two is potentially a bit misleading; depending on the model of hot hand shooting, the extent of hot hand expected in each of these two shot situations could easily be different, and there is an attenuation bias in true effect size due to measurement error (which varies with streak length) that is pointed out in the literature in work by Arkes, Stone and Arkes, and in each of the papers of Miller and Sanjurjo, (iii) the second paragraph of the intro is written as if GVT were unaware of potential confounds in game data; this is misleading in the sense that they acknowledge this, and so also consider free throw shooting and conduct a controlled shooting experiment, (iv) similarly, the authors say "free throw attempts or three-point contests are typically used when studying the phenomenon in basketball"; here, controlled shooting studies are excluded; the authors should consider citing the papers by Koehler and Conley and by Miller and Sanjurjo on the 3pt shooting contest, the controlled shooting study of Gilovich et al., and the analysis of several controlled shooting studies in another paper by Miller and Sanjurjo. Similarly, this may be the place to quickly cite other work on NBA game shooting, (v) (last paragraph of the intro) the authors state that permutation tests are common in hot hand studies with basketball data (and in the discussion say "permutation tests have been used by the majority of the hot hand literature.."). They are used in Miller and Sanjurjo's work, but were not used in GVT and those that followed, until the recent work; in particular, who has used permutation tests on game data, as the authors suggest?, (vi) in the same paragraph the authors suggest that permutation tests are vulnerable to the small sample bias observed in Miller and Sanjurjo (2018), but those authors make clear that permutation tests under the i.i.d. assumption are not vulnerable to the bias. The authors should make clear what biases they are referring to. As written, "b" does not seem correct to me: as written it seems possible that GVT conducted permutation tests that were vulnerable to the small sample bias. This is not the case, (vii) the small sample bias does not just appear in small samples; it appears in all samples but is more pronounced in small samples.

4. The writing can be polished a bit. For example, "extent" rather than "extend", "shot" not "show", "sampling this process...several times" should be more explicit, e.g. 10,000 times. I'm not sure "robust" is the right word when talking about whether the hot hand is common across players. Other examples: "in the different.", "is less than 1&", "decisions making".

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-21-07619R1
The Hot Hand in the Wild
PLOS ONE

Dear Dr. Pelechrinis,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

You will find the reports attached. As you will see, Reviewer 1 is entirely satisfied while Reviewer 2 is not. His report is extensive and specific, and personally I feel that the paper will be clearer following his advice. On top of that, he notes that the paper needs to tone down the contribution and put it into the context of the existing literature.

Please submit your revised manuscript by Sep 24 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Pablo Brañas-Garza, PhD Economics
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: N/A

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The author addressed all my comments, both major and minor. The specific contribution of the paper is now clearer and the explanations easier to follow for a general audience.

Reviewer #2: The authors effectively double their dataset by performing the same analysis from first year to second, and now from second to first also. In addition, they perform an adjustment for measurement error in their model. Together, with these results the pattern of results changes qualitatively. I think the paper has promise, but still needs quite a bit of work to be publishable, as I explain below.

Main Points:

1. The authors need to do a much better job of relating their work to the previous literature. For example, their work is quite closely related to that of Rao (2009) and Lantis and Nesson (2019). Neither of these papers is cited. Lantis and Nesson (2019) study 12 years of NBA shooting data, including the 2 years studied by the current authors. In addition, they study both field goal shooting and free throws, as opposed to just field goal shooting. Further, they use a considerably longer list of controls than the current authors. Their pooled test results are similar, and they don't have tests on the individual level like the current authors now do. The authors cite Bocskocsky et al. (2014) as if it is perhaps the only regression analysis that has been performed in the hot hand fallacy literature. This is very misleading. Arkes, Lantis and Nesson, Green and Zwiebel, Miller and Sanjurjo, and others use regression analysis as well. The authors should make this clear. It is a valid, and typical, approach used in the literature. Multiple approaches have been taken to correct for the bias pointed out by Miller and Sanjurjo. The particular weaknesses of Bocskocsky et al. are not general to these studies. An obvious question is what result these other typical regression approaches would produce with the same data. Lantis and Nesson provides an answer to this question, with more data.

2. If I understood the authors' empirical approach correctly, they are comparing the computed P(M|M...M)_sim in (heterogeneous Bernoulli) simulated datasets to the observed P(M|M...M)_data in the real data, rather than comparing P(M|M...M)_data to performance on the directly matched shots taken on the same streaks the real shooter was actually on. The latter approach seems more direct, and it will be less susceptible to measurement error. The former approach may even be susceptible to systematic bias if the types of situations in which real shooters go on streaks create systematic differences in the sampling of their streak shots and those in the simulated datasets. I suggest that the authors run the analysis I mention and compare results with those of their current analysis, or, if they agree that it is better, replace the previous analysis with it. (Later, I see new added text "We then simulate his shot sequence after k consecutive makes 100 times..."; this suggests the latter approach; this should all be explained much more clearly, so there is no ambiguity.)
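The "directly matched shots" comparison suggested here can be sketched under the assumption that a model-estimated make probability is available for every shot; the function name and the normal approximation are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy import stats

def matched_streak_test(outcomes, probs, k=1):
    """Compare streak-shot performance to the model on the SAME shots.

    outcomes: 0/1 outcomes in order; probs: model make probability per shot.
    Selects the shots taken after at least k consecutive makes and tests
    observed makes against the model's expectation on those exact shots.
    """
    outcomes = np.asarray(outcomes)
    probs = np.asarray(probs)
    on_streak = np.array(
        [outcomes[i - k:i].all() for i in range(k, len(outcomes))]
    )
    y = outcomes[k:][on_streak]          # actual results on streak shots
    p = probs[k:][on_streak]             # model expectation on the same shots
    if y.size == 0:
        return float("nan"), float("nan")
    excess = y.mean() - p.mean()         # hot-hand effect size estimate
    # Normal approximation for a sum of independent Bernoulli trials.
    se = np.sqrt((p * (1 - p)).sum()) / p.size
    z = excess / se
    return excess, 2 * stats.norm.sf(abs(z))  # two-sided p-value
```

Because the observed and expected rates are computed on identical shots, shot-selection differences between real and simulated streaks drop out of the comparison.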
3. Relatedly, in the revision the authors apply an ad hoc model error adjustment that samples from all shots (not just streak shot situations), and then apply it to adjust hot hand estimates. First, this is a potentially serious issue with the modeling approach that the authors should address in a more complete way. Why is this happening? Is it happening systematically across shooters? If so, why? Second, if this adjustment samples at random from all shots (including shots taken on streaks), then this in turn creates another mechanical bias against hot hand effect size. An alternative would be to sample without including these shots, which is not ideal for other reasons, but perhaps better. In either case, because this adjustment is non-trivial in size, if the authors are going to use it, they should make sure that it is not biasing hot hand effect sizes in some other more subtle way. For example, they can run simulations using different models of hot hand shooting (that is, in which they have assumed there are hot hand shooters), and see if their model systematically produces errors in the same direction. If so, this would suggest that the measurement error they are picking up is actually a mechanical artifact of hot hand shooting, so making the correction they are making could bias results against finding the hot hand. Are the errors similar in both directions when the authors train the model in one direction, then the other?

4. In the KW example, how is it possible that he has only 333 shots following at least one make, if he has taken more than 1000 shots? The only explanation that seems to make sense is that the authors are considering streaks of exactly one hit, rather than of at least one hit. But conditioning on at least one hit makes more sense because it provides more powered tests. The same goes for longer streak lengths.

5. The authors should discuss how using one year to train a model to make probability estimates for another year can affect their results if there are systematic differences in shooting performance from one year to the next, e.g. the shooter improves, or changes technique over the offseason.

6. The sampling-without-replacement example used to explain the bias discovered by Miller and Sanjurjo is not correct. In the Econometrica paper they show that the bias is generally larger than the analogous sampling-without-replacement type effect.
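The bias at issue is easy to reproduce by simulation. The sketch below (illustrative parameters) estimates P(hit | previous hit) within each finite i.i.d. sequence and averages across sequences; the average falls below the true probability at every sequence length, shrinking but not vanishing as sequences grow, consistent with point (vii) of the first-round review:

```python
import numpy as np

# Streak selection bias demo (illustrative): for an i.i.d. Bernoulli(p)
# shooter, the per-sequence estimate of P(hit | previous hit), averaged
# across sequences, falls below the true p. The bias shrinks with the
# sequence length n but is present at every n.
rng = np.random.default_rng(0)
p, reps = 0.5, 50_000
for n in (10, 50, 100, 500):
    estimates = []
    for _ in range(reps):
        seq = rng.random(n) < p
        after_hit = seq[1:][seq[:-1]]     # outcomes immediately after a hit
        if after_hit.size:                # needs at least one hit-then-shot pair
            estimates.append(after_hit.mean())
    print(f"n={n:4d}: mean per-sequence P(H|H) = {np.mean(estimates):.4f} "
          f"(true p = {p})")
```

Conditioning on a hit within a finite sequence selects against subsequent hits, which is why the per-sequence estimator is biased downward even with no dependence between shots.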
7. In response to point 3 of the other reviewer, the authors argue that an analysis of free throw shooting is not interesting for them to include because, unlike field goals, free throws are essentially identically distributed. This is not correct. Players shoot systematically better on the second free throw than the first, for example.

8. The authors say: "In particular, we group the shots in our test set based on their predicted probability, and for each set, we estimate the fraction of them that were actually made." I am confused. What grouping? This should be explained clearly from the outset.

Other Comments:

1. It is not that the streak selection bias "could lead to underestimation of the hot hand phenomenon", rather that it does lead to the underestimation.

2. The authors say: "Inset (B) at Figure 1 presents the reliability curve for our shot probability model as obtained through an (out-of-sample) test set." Which one? Why just one?

3. Need to explicitly state how many simulations of the heterogeneous Bernoulli process were performed. "...repeatedly simulate" is too vague. Also, the statistical testing procedure should be explained clearly.

4. The authors say: "We use players with at least 1000 shots during the two seasons. Given that there are 82 regular season games in each season, this means that we filter out from our analysis players that took approximately less than 6 shots per game. This threshold was chosen in order to provide, on average, sequences from individual games that can be used to examine the hot-hand hypothesis for values of k > 1. As we will see in our results, we are able to examine up to k = 4." What is this claim based on? It needs to be backed up by some evidence. Presumably the authors performed a power analysis, and they are summarizing the results here. They need to explain the results of the power analysis in a clearer way. They also suggest elsewhere that they are sufficiently powered. Based on what analysis?

5. One cannot "calculate the probability" of Kemba Walker making a shot. The language should be fixed.

6. The authors say: "Furthermore, in the studies by Miller and Sanjurjo on controlled shooting (both actual three-point contests [19] and a shooting field experiment [20]) they also examine whether there is hot hand activated between the different contest rounds. They achieve this by considering sequences of consecutive makes that might span different competition rounds." This is not quite representative of Miller and Sanjurjo's approach. They first permute at the player level, to facilitate comparison with the previous literature's results, then at the player x session (or round) level, which does two things: first, it eliminates vulnerability in the estimates to any systematic variation between sessions (not due to the hot hand), but if any hot hand effect at least partially activates between sessions or rounds, it eliminates that too. By contrast, they are not actively trying to "examine whether there is a hot hand activated between the different contest rounds."

7. The authors say: "Our results from two seasons of shooting data indicate that overall the league is subject to shooting regression, i.e., players shoot below expectation after consecutive makes, thus, regressing towards their shooting average. However, there are players that exhibit strong statistical evidence for the presence of the hot hand individually." This writing is not clear. First, the first statement is about the average or representative shot, across shooters. I don't understand the use of "regression towards their shooting average" here. Which shooting average?

8. The authors say: "An important context that we have to add in the hot hand analysis in actual game situations is that the presence of hot hand does not necessarily have to do with what fans might have in mind when they talk about a "player getting hot". It can be simply the ability of specific players to hunt and exploit good matchups for them within a game, leading to a streak of successful shots" and later "exploiting missmatches...". I don't understand what the authors are trying to express here. What this example brings to mind is that the short list of controls the authors use makes their results vulnerable to possible confounds, such as the type of strategic variables they are alluding to, that are not controlled for in their analysis. I'm not sure why they assume this lack of control would inflate rather than deflate estimated hot hand effect sizes. They should clarify this discussion. Bocskocsky et al., Lantis and Nesson, Miller and Sanjurjo, and perhaps Rao have pointed out this limitation when working with field goal data.
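For reference, the regression-style test used in the literature cited here (Rao, Bocskocsky et al., Lantis and Nesson) amounts to regressing the shot outcome on a streak indicator plus shot-difficulty controls. A minimal sketch with hypothetical column names, not any paper's exact specification:

```python
import pandas as pd
import statsmodels.api as sm

def hot_hand_regression(df: pd.DataFrame):
    """Logistic regression of shot outcome on a streak indicator plus
    shot-difficulty controls.

    Expects one row per shot in chronological order with columns:
    'made' (0/1), 'streak' (1 if the previous shot was a make), and the
    illustrative controls below. All names are hypothetical.
    """
    controls = ["shot_distance", "defender_distance", "touch_time"]
    X = sm.add_constant(df[["streak"] + controls])
    fit = sm.Logit(df["made"], X).fit(disp=0)
    # A positive, significant 'streak' coefficient after controlling for
    # shot difficulty is the regression analogue of a hot hand effect.
    return fit.params["streak"], fit.pvalues["streak"]
```

The richer the set of controls, the less room is left for the confounds (e.g. matchup hunting) discussed in the comment above.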
9. The point that the authors attribute to Bocskocsky et al. (2014), about players taking incrementally difficult shots on a streak of makes, was made previously by Rao (2009). As mentioned above, the authors should cite Rao's important work.

10. I do not understand how controlled shooting settings, or Stephen Curry shooting at practice, or the NBA three point shootout, are not evidence of shooting performance in the "real world," as the authors claim. I believe what they mean to say is that it is in the real world but not in games. Then again, game basketball settings are still basketball settings, so I think the authors should be careful not to overstate the applicability of their results, or unnecessarily make implicit critiques of controlled shooting studies. After all, controlled experiments play a pretty important role in science in general, and the trade-off between studying game and controlled shooting data is pretty obvious for people who understand a bit of statistics. Here is another example: "While there is literature that has examined streaks in real environments such as career trajectories and professional success [11]".

11. Should say a "large number of permutations", as in 10,000 or 25,000, rather than "a number of permutations", which is a bit too vague. Should explain clearly, somewhere, exactly how many.

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
The Hot Hand in the Wild
PONE-D-21-07619R2

Dear Dr. Pelechrinis,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Pablo Brañas-Garza, PhD
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-21-07619R2
The Hot Hand in the Wild

Dear Dr. Pelechrinis:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr Pablo Brañas-Garza
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.