Peer Review History
Original Submission: September 3, 2020
PONE-D-20-27790
Responsible Product Design to Mitigate Excessive Gambling: A Scoping Review
PLOS ONE

Dear Dr. McAuliffe,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Apologies for the delay in your review; we made multiple attempts to secure appropriate reviewers. As you will notice, the reviews are somewhat mixed. The reviewers have raised concerns regarding the appropriateness of the study design in relation to the aim of the study. They also believe that the limitations of the article have not been sufficiently discussed, and that the presentation of the results may be difficult for readers to interpret. I am not sure that the authors will be able to address these concerns within the scope of a major revision; however, given the mixed reviews, I would like to invite a resubmission. I would be grateful if you could take the aforementioned concerns into consideration when deciding whether to submit a revision.

Please submit your revised manuscript by Feb 06 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Simone Rodda
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. Thank you for stating the following in the Financial Disclosure section: "This research will be supported primarily by a research contract between the Division on Addiction and GVC Holdings PLC (hereafter, GVC; https://gvc-plc.com/). GVC is a large international gambling and online gambling operator. GVC had no involvement with the development of our research questions or protocol. They will not see any associated materials (i.e., retrieved studies, charted data, and manuscripts in preparation) while the study is in progress or have any editorial rights to any resulting manuscripts. GVC communication about this work will require approval of the Division on Addiction."

a. Please add this information to the Competing Interests section within the online submission form. Within this Competing Interests Statement, please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. Please provide a table with the list of included studies within the manuscript text.
4. We note that this manuscript is a systematic review or meta-analysis; our author guidelines therefore require that you use PRISMA guidance to help improve reporting quality of this type of study. Please upload copies of the completed PRISMA checklist as Supporting Information with the file name "PRISMA checklist".

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly
Reviewer #4: Yes
Reviewer #5: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: The manuscript reports a scoping review of consumer protection technologies directed towards the gaming sector. Systematic reviews fall within PLOS ONE's editorial guidelines, but I would have preferred a meta-analysis. The focus of the study is the power and statistical significance of quantitative research on attempts to reduce gambling-related harms, and it concludes, somewhat speciously, that there are some shortfalls in transparency. Speciously, since the same claims could be directed towards most sectors, as these guidelines have only recently been introduced and implemented. An editorial could achieve a similar outcome. The study is somewhat understated. Given that governments use the gaming industry as a means of raising revenue (i.e., a "stupidity tax"), there is an issue of accountability, and there can be an obligation to minimise harm.
Indeed, this is a sector where serious crimes such as embezzlement occur, and the perpetrators will often claim "diminished responsibility". For instance, there have been claims in court that gamblers were dissociated and lost track of time. As litigation is likely, responsible harm minimisation measures that are evidence based, peer reviewed, and generally accepted (cf. the Daubert standard) may be required. The manuscript is clearly quite systematic, but I struggled to identify "findings of interest". But then I am not a fan of scoping reviews. A meta-analysis would better identify which interventions were efficacious. Even so, the tendency to clump a range of interventions together may mean that a powerful intervention can be overlooked if it is categorised with studies with poorer methodologies.

Specific points:

Page 20 – reports both a chi-square test and a Fisher exact test. Something requires clarification here: the Fisher exact test is a one-df test, whereas the chi-square has 4 df (see the sketch after these points).

Page 41 – there are definitely studies looking at irrational cognitions in Asian populations, but perhaps none that specifically address harm reduction methodologies targeting these cognitions, presumably because gambling would be illegal in many of these jurisdictions.

Page 34 – unstandardized effect sizes are not unstandardized beta weights. For interpretability, an unstandardized beta requires information about intercepts, which are rarely reported.
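To make the Page 20 degrees-of-freedom point concrete: Fisher's exact test applies to a 2x2 contingency table, whose chi-square counterpart has (2-1)(2-1) = 1 df, whereas a chi-square with 4 df implies a larger table such as 3x3 or 5x2, so the two tests as reported cannot refer to the same table. A minimal sketch, with invented counts (none of the numbers below come from the manuscript under review):

```python
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency

# Fisher's exact test (in SciPy) is defined for a 2x2 table; the matching
# chi-square test would have (2-1)*(2-1) = 1 degree of freedom.
table_2x2 = np.array([[12, 5],
                      [7, 15]])
odds_ratio, p_fisher = fisher_exact(table_2x2)

# A chi-square with 4 df implies a larger table, e.g. 3x3: (3-1)*(3-1) = 4.
table_3x3 = np.array([[10, 4, 6],
                      [3, 12, 5],
                      [8, 7, 9]])
chi2, p_chi2, dof, expected = chi2_contingency(table_3x3)
print(dof)  # -> 4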
Reviewer #2: This is a review of responsible gambling interventions, with a unique focus on open science and replicability. Therefore, the review does have some attributes that warrant its publication. My comments are as follows.

1. The paper itself should include a summary table of all of the reviewed literature and some high-level features of each study. Although the online materials look good overall, the authors should not rely on the reader to use the materials to yield the review's main conclusions. For example, starting on page 21 there is a categorization of the studies into intervention types, e.g., pop-up messages and breaks in play. However, this narrative review only covers some papers, and so many papers included in the review are not cited anywhere in the manuscript! This is unfortunate given the manuscript's emphasis on openness and transparency. At present, I cannot reject the hypothesis that the papers actually cited in the review are restricted to the present authors' "mates"; based on my knowledge of this literature, I certainly do not believe that the most replicable studies were selectively cited. Not citing these papers harms the average reader, and even harms the authors, as their work will be harder to discover and will receive less attention.

2. The introduction to z-curve should give more background to these techniques and justify why z-curve was used instead of p-curve, which I believe is more commonly used (a short sketch of the shared first step of these methods follows these comments).

3. The authors cite questionable research practices as a potential reason for low replicability. I think the authors should also explore other potential explanations. Biases in the published literature could also be due to rational responses to incentives. Null-results papers are harder to publish, especially in high-impact journals. And not all authors have the financial resources to publish in PLOS ONE. Preprints are, in my view, the main way to counteract these biases. Uptake of preprints in gambling studies is likely low, but preprints at least in principle allow authors a low-cost way to communicate their findings. If I am correct, then an expanded analysis should see a significant moderation of the z-curve results once preprints are included. This should be considered as a suggestion for future work in the Discussion.
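As background for the p-curve/z-curve point above: both methods restrict attention to the statistically significant results in a literature; z-curve converts each reported two-sided p-value to an absolute z-score and then fits a finite mixture model to the significant z-scores, from which it estimates quantities such as the expected replication and discovery rates. A minimal sketch of the conversion and selection steps only, using invented p-values (the mixture estimation itself is implemented in the zcurve R package and is not reproduced here):

```python
from scipy.stats import norm

# Invented p-values standing in for test results harvested from a literature.
p_values = [0.030, 0.004, 0.049, 0.120, 0.001]

# Convert each two-sided p-value to an absolute z-score: |z| = Phi^{-1}(1 - p/2).
z_scores = [norm.isf(p / 2) for p in p_values]

# z-curve fits its mixture model only to the significant z-scores, i.e. those
# above the two-sided alpha = .05 critical value (about 1.96).
z_crit = norm.isf(0.025)
significant_z = [round(z, 2) for z in z_scores if z > z_crit]
print(significant_z)  # -> [2.17, 2.88, 1.97, 3.29]; p = .120 falls below the cutoff
```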
Reviewer #3: Comments to authors

This paper is a scoping review of features that have been introduced to help minimise excessive gambling (so-called responsible gambling features). The authors have put a lot of work into this paper, and made their materials available via OSF, which in most cases is useful. In some cases, though, I think certain bits of information should be included in the actual paper instead (e.g., better descriptions of things coming from the z-curve analysis, because the descriptions of things like the file-drawer ratio aren't great, since most people won't know this method, given how new it is). I hope that the comments below will be useful to the authors. And before the authors read the comments below, I hope they understand that I appreciate the amount of work that has gone into this paper, especially in what seems to be a strong understanding of a novel technique.

The manuscript seems to be trying to do two things at once. First, it provides an overview of 86 studies that met inclusion criteria for this meta-analysis, in that those studies examined game-based features, tools or initiatives to help reduce excessive gambling. This part of the paper is generally done well, although the discussion is lacking because it focuses more on the replicability components of the study. However, I have some concerns around the use of tests like correlations that are implicitly between-subjects, when the units of analysis (the studies) are clearly not independent of each other. For example, there are many papers from the same author(s), which of course would be similar to each other in terms of any relationship between variables, and would be likely to drive these results. This does not seem to be acknowledged.

I have some strong reservations about the z-curve analysis. Strong claims are made based on this novel technique. For example, I am uncomfortable with the claim that comparing the Observed Discovery Rate to the upper confidence limit of the Estimated Discovery Rate provides unambiguous evidence for publication bias. The way this is written on p13 (lines 222-223) suggests that a point estimate (the ODR) is compared to the confidence interval of the EDR, but the ODR has its own confidence interval, which does not seem to be taken into account. Indeed, comparing the confidence intervals throughout the iterations, the CI for the ODR and the EDR often overlap, which tells us nothing about differences (see https://towardsdatascience.com/why-overlapping-confidence-intervals-mean-nothing-about-statistical-significance-48360559900a or https://www.sciencedirect.com/science/article/pii/S0741521402000307; a short sketch illustrating this point follows at the end of this review). And this is my main problem with this section of the paper. This is a novel technique that is not well known, and is not necessarily described well enough in the paper (although the authors made a valiant attempt!). The reader is instead required to read the supplementary materials to gain an understanding of the technique and the terms within. Having all of this in the supplementary material meant that this reader had to do a lot of work to understand this section of the manuscript, and the same would be true of other readers. I believe that this merits its own paper, with enough space spent in the methods to help a reader develop an understanding of this novel technique, rather than burying crucial information in online supplements.

My other concern about the z-curve analysis was initially that the citation was a preprint, which indicated that it hadn't gone through peer review. It seems now that it has been published in Meta-Psychology, which lends the technique more credence, but I had to do the work to find that myself – the citation is still to a preprint (it may be that this paper has been under review for a while). I would have been very wary if the whole replicability part of the paper had been based on a preprint that had not at least gone through a peer review process. I'm still wary about early adoption of novel techniques that are not well understood, when very strong claims are made, or would have been made if the results were more conclusive, without appropriate acknowledgement of this as a limitation. In fact, the limitations section is lacking several important limitations, which I have outlined in this review.

I also think there's a large discrepancy between the results section (which is mostly the first part of the study, referring to efficacy, and a little bit on replicability) and the discussion (where most of the discussion is about replicability, and there's little about efficacy). And this leads to my recommendation below.

There is a subtle but somewhat pervasive issue with the way certain things are reported. For example, studies were classified based on certain things, e.g., whether there was a conflict of interest statement. There is an implication that this is an omission on the part of the authors, and this becomes very clear at the end of the discussion. I think it is important to recognise that many journals historically haven't required this, and some don't even have space at all for it, which is why it's not there. Other journals require these statements now, which is why they may be there now. I think things like this should be acknowledged to provide a more balanced presentation of these results, especially since the authors appear to blame the authors for this later in the paper. To highlight this point further, preregistration is indicated as being quite new (e.g., the finding that all of the preregistered studies were in 2019 or 2020), but this doesn't seem to have been looked at for conflict of interest statements. In my experience, journals are moving more towards these types of requirements. So I'm not convinced that authors should be instructed to become more routine about disclosing their potential conflicts of interest and funding sources, without acknowledging that the journal may not have had space for that at the time of publication, or that such statements may not have been required before but are now, and that their exclusion is not necessarily the author's fault. Further, certain classifications like "abstract deemed vague" are a little… condescending, really. I'm aware that this may feel like tone policing, and I apologise if it comes across like that, but I hope the authors can see where I'm coming from.

Overall, I think this paper would be far improved by splitting it into two papers, rather than trying to do too many things in one paper. (And of course I am aware of salami-slicing concerns, but I think these are two very distinct messages.) I think the first part of the results, which describes the meta-analysis and efficacy, is strong enough to stand alone, but needs more space in the discussion in particular. I think the second part is not described in enough detail in the methods to be interpretable for most readers, and therefore requires far more explanation than is in the actual paper itself. As the paper currently stands, it feels like the authors set out to look at efficacy, but actually they really want to talk about open science, and this is why I think these are two quite separate papers, with each deserving its own space to be fully explored and discussed. But to be clear, it seems like the authors have done a good job with this work, and I commend them for making all of their material available online, including data and analysis scripts. It's just that the way it's written doesn't work very well for the reader, in my opinion, and the paper overall needs a more singular focus. I do hope to see these data published in the near future, and I hope these comments help the authors to make this happen.

Specific comments:

Line 279 – Some studies are described as experimental, but then some of those don't involve random assignment of participants to condition. For most readers, experiment = random assignment, so what is it about the 5 studies that makes them experimental without involving random assignment?

Lines 341-342 – Did any studies report BOTH unstandardised and standardised effect sizes? This kind of issue potentially occurs with a lot of the classifications, where there may be overlap, so the authors might consider this throughout.

Line 371 – This may be a personal preference, but terms like "marginally significant" are, in my opinion, pretty ordinary. Similarly later on, where the authors discuss results being suggestive but not conclusive.

Line 395 – 2020 papers apparently go to May, but the database search was in February. I understand the authors included some other papers that were in reference lists, but this needs to be actually explained at this point to account for this apparent discrepancy.
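The overlapping-CI point referenced above is easy to demonstrate; a minimal sketch with invented data (none of it from the manuscript), in which the two groups' 95% confidence intervals overlap and yet a Welch t-test is significant at the .05 level:

```python
import numpy as np
from scipy import stats

a = np.array([2, 3, 4, 5, 6, 3, 4, 5, 4, 4])
b = a + 1.2  # same spread as group a, mean shifted by 1.2

# t-based 95% confidence interval for each group mean.
for name, g in (("Group A", a), ("Group B", b)):
    lo, hi = stats.t.interval(0.95, df=len(g) - 1, loc=g.mean(), scale=stats.sem(g))
    print(f"{name}: mean = {g.mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# -> Group A: mean = 4.00, 95% CI [3.17, 4.83]
# -> Group B: mean = 5.20, 95% CI [4.37, 6.03]   (the intervals overlap)

# Welch's t-test on the same data is nonetheless significant.
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # -> t = -2.32, p = 0.032 (< .05)
```

Overlapping intervals therefore do not imply a non-significant difference, which is the point of the two articles the reviewer links.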
Reviewer #4: Overall, the study is well executed in terms of scope, method and language. Several reviews have been carried out in recent years. However, this current scoping review addresses an important topic that the previous reviews point towards. Investigating how studies have been carried out, and the possibility of replicating them, is important! As the authors have noted, there is a lack of studies, and many of them lack scientific rigour. I think the current study adds a lot to the discussion about responsible gambling on a general level. However, some minor changes need to be made.

There are some minor mistakes in the manuscript, e.g. double parentheses. The manuscript should be checked for those types of mistakes. However, this is a minor problem.

Introduction

The first paragraph of the introduction can be deleted. The paragraph is not necessary. A focus on addiction is perhaps better. Apart from that, the introduction is well written and presents the subject and the need for the study in a good way. A recent meta-analysis came to similar conclusions to those reached in the scoping review; however, its examination of the studies was done in a less rigorous way than in the scoping review. The meta-analysis can be used both in the intro, to better show the importance of the aim of the scoping review, and in the discussion, when it comes to transparency and the effectiveness of the interventions.

Methods

The way the study was carried out and described is good.
The use of PRISMA makes it easy to understand the selection of the studies. The pre-registration and the reliability analysis of the raters are very good and show the scientific rigour of the researchers responsible for the study. One thing that could be added is how this type of analysis has been carried out in the field of addiction, and also what types of benchmarks they have used. Also, the description of the analytic strategy covers most of the analyses done. However, the authors have not included a reference for, or a description of, the tetrachoric correlations and why they were used in the Methods section. Information about Fisher's exact test is also missing.

Results

The results are presented in a clear way. The section is long, and perhaps it is possible to make it a bit shorter.

Discussion

Overall, the discussion is well written and covers most of the presented results. The discussion about the origin of most of the included studies is important and rarely discussed in the field of gambling research. The discussion about industry funding should be elaborated using Ladouceur, Shaffer, Blaszczynski, & Shaffer (2019). Responsible gambling research and industry funding biases. Journal of Gambling Studies, 35(2), 725-730. It can also be discussed in relation to transparency a bit more. What are the potential consequences of this lack of transparency?

Reviewer #5: An interesting paper that provides insight into the evaluation of responsible gambling features to date and sets out some sound principles for best practice going forward. I am not sure that the term "gambler" is appropriate for describing all people who gamble, as it has negative connotations and may not reflect the self-perception of many who gamble from time to time. The term "player" is less value laden. Also, I believe the EGM ban in Norway was in July 2007, not 2006.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
Reviewer #4: No
Reviewer #5: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-20-27790R1
Responsible Product Design to Mitigate Excessive Gambling: A Scoping Review
PLOS ONE

Dear Dr. McAuliffe,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Thank you for taking the time to respond carefully to the reviewers' remarks. My reading of the reviews and your response is that we are close to meeting expectations, with only one set of comments left to address in this round. If you would respond to these comments, I would appreciate that a lot.

Please submit your revised manuscript by Apr 23 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Simone N. Rodda
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)
Reviewer #3: (No Response)
Reviewer #4: All comments have been addressed
Reviewer #5: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes
Reviewer #3: Partly
Reviewer #4: Yes
Reviewer #5: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.

Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #2: I appreciate the table, but not all papers in the table are, as requested, in the reference list at the end of the paper.

Reviewer #3: Thanks to the authors for addressing my comments, and those of the other reviewers, in their revision. I'm guessing that they may have been expecting some pushback on some of their comments, so my apologies in advance, but hopefully this is useful feedback and we're all still friends at the end of it.

In my initial review, I suggested that the authors might consider splitting the paper into two. In their reply, they have indicated that they feel there is not enough on efficacy to be a full paper, and that their focus is instead on the replicability and transparency side of things. Yet there seem to be plenty of pages of results about efficacy and maybe three pages about replicability, including a table. In fact, in their reply they say that they are not actually that interested in efficacy per se. The title of the paper has nothing about replicability or transparency, so this isn't clear. The abstract (which has been rewritten) does seem to cover both sides of things, but I feel that if I went into this study to look for evidence of efficacy, then I'd walk out of it feeling like I'd been deceived into reading about replications instead. Don't get me wrong, I feel strongly about replication too – it's crucial to this whole science thing that we're all trying our best to do. But I'm not sure that I agree with the authors that replication cannot be separated from efficacy for the purposes of a paper, because then you can say that replication cannot be separated from anything, and every single paper on anything from now on is going to have to do these analyses. I agree with Reviewer 1, who noted that a lot of this around replication could be an editorial. But I suspect the authors will again rebut this point, and we could go back and forth on this for a while. I think this will be a call for the editor, and I am of course happy to defer to her views.

The authors have otherwise mostly addressed my concerns, although there are a couple of things that I've suggested could be improved throughout, but haven't been. For example, I pointed out that some studies might have reported BOTH unstandardised and standardised effect sizes, but that other similar things might have been incompletely reported. The authors only addressed the effect sizes example that I highlighted. For example, game-based tool or interaction type seems to have some overlap, as percentages add up to more than 100%. There may be other such cases in the paper. As the authors are more familiar with the data, they will be able to identify more of these. I think the distinction between quasi-experimental and true experimental is actually pretty important, and should be reflected in the new table.
I am not sure that I conveyed my concern around the comparison of the Observed and Estimated Discovery Rates. Both of these things have a point estimate, but also a confidence interval around that point estimate. Just because the ODR point estimate is outside of the EDR confidence interval doesn't mean there is a statistically significant difference, because the ODR has its own confidence interval and uncertainty, which is not taken into account when only the point estimate of the ODR is used. The authors, in their revision, have stated that the Observed Discovery Rate is only a point estimate, and that the confidence interval has been omitted because the sample represents the entire population of interest. I have strong concerns about this, because despite their rigour, there may have been some studies that were missed for a variety of reasons, or because of errors during search and/or classification, and so on. So I have a concern with this approach.

To emphasise my point, consider these (obviously made up) data for two groups:

Group 1: 3, 4, 4, 4, 4, 5, 4, 6, 4, 3
Group 2: 6, 3.5, 5, 6, 4, 4, 4, 5, 5, 6

The mean for group 1 is 4.1 (95% CI: 3.47 to 4.73) and the mean for group 2 is 4.85 (95% CI: 4.17 to 5.53), so the estimate for each is not within the CI of the other. If I forget that group 2 has a CI, and just compare 4.85 to the CI for group 1, I would conclude that there is a statistically significant difference. But there is a CI for group 2, and if I run an independent-samples t-test (Welch or standard), the p-value is .082, so we would conclude no significant difference. Admittedly, it's close, but if we're to use an alpha criterion of .05 for p-values (as absolutely no one ever intended them to be used, and that's a discussion for another day), it's not the same as just comparing a single point estimate against a confidence interval, which is what is done in the paper.
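The reviewer's figures are straightforward to reproduce; a minimal sketch, assuming standard t-based 95% confidence intervals (which match the intervals quoted above) and SciPy's Welch test:

```python
import numpy as np
from scipy import stats

g1 = np.array([3, 4, 4, 4, 4, 5, 4, 6, 4, 3])
g2 = np.array([6, 3.5, 5, 6, 4, 4, 4, 5, 5, 6])

# t-based 95% confidence interval for each group mean.
for name, g in (("Group 1", g1), ("Group 2", g2)):
    lo, hi = stats.t.interval(0.95, df=len(g) - 1, loc=g.mean(), scale=stats.sem(g))
    print(f"{name}: mean = {g.mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# -> Group 1: mean = 4.10, 95% CI [3.47, 4.73]
# -> Group 2: mean = 4.85, 95% CI [4.17, 5.53]

# Welch's t-test (equal_var=False): each point estimate lies outside the other
# group's CI, yet the test does not reach the .05 criterion.
t_stat, p_value = stats.ttest_ind(g1, g2, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # -> t = -1.84, p = 0.082
```

Both intervals and the Welch p-value agree with the reviewer's numbers, illustrating that comparing a point estimate against a single confidence interval overstates the evidence relative to a test that accounts for both groups' uncertainty.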
Given that this is used as evidence for a talking point by the authors, this is pretty important. If this is how the author of the z-curve suggests that the tool is used, then I have hesitations about the procedure, and this harks back to my concerns from my initial review about using new techniques.

Once again, I hope that this discussion is useful. I feel that the authors will probably continue to push back on these points, as is their prerogative, but I do hope that they will find the comments at least vaguely useful.

Reviewer #4: (No Response)

Reviewer #5: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No
Reviewer #3: Yes: Alex M T Russell
Reviewer #4: No
Reviewer #5: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
PONE-D-20-27790R2
Responsible Product Design to Mitigate Excessive Gambling: A Scoping Review and Z-Curve Analysis of Replicability

Dear Dr. McAuliffe,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. On a personal note, thank you for responding to the reviewer comments so clearly across the various rounds.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Simone N. Rodda
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-20-27790R2
Responsible Product Design to Mitigate Excessive Gambling: A Scoping Review and Z-Curve Analysis of Replicability

Dear Dr. McAuliffe:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Simone N. Rodda
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.