Peer Review History
Original Submission: March 17, 2021
PONE-D-21-08854
Are you confident enough to act? Individual differences in Action Control are associated with post-decisional metacognitive bias
PLOS ONE

Dear Dr. Zajkowski,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised by the reviewers.

Please submit your revised manuscript by Jun 20 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Eugene V Aidman, PhD (Psychology)
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.
Please ensure that your manuscript meets PLOS ONE’s style requirements. The PLOS ONE formatting sample for the main body of the manuscript can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.
[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This area of research is of much interest, and the paper is aimed at gaining a deeper understanding of how individual differences in action control relate to metacognitive bias (here, over- and under-confidence), based on confidence judgements embedded in the cognitive act. While this manuscript has merit, in its current state it suffers from a number of problems which I hope the authors will be able to address. Overall, I support this line of research and think that it adds important novel information to, and clarifies, the existing body of knowledge.

One of my main concerns is the authors' limited engagement with the research that already exists in this area. Below, I list just a few papers, but there is a larger body of research available in the literature. This has compromised the authors' use of terminology, which is inconsistent with the main body of the literature, as well as some of their claims. My other concerns are with the method. Some of the issues can be worked around by declaring them in the limitations section, but some issues are more serious. Please see my comments below.

Introduction and literature review:

1. The authors want to examine the “cognitive mechanisms responsible for generating choices and post-decisional processing” (p. 4). They used confidence judgements embedded into performance on various cognitive tasks, which they labelled as “post-decisional confidence”.
I must admit that I struggled with the use of this label, as much of the metacognitive literature I'm familiar with uses different terms. Importantly, this more widely accepted terminology is also described in Fleming SM, Lau HC. How to measure metacognition. Front Hum Neurosci. 2014 Jul 15;8:443, cited by the authors. It includes 'confidence judgements' instead of 'post-decisional confidence', and 'discrimination' instead of the long-handed explanations provided by the authors about the need for higher confidence for correct answers and lower confidence for incorrect answers, which is sometimes referred to as confidence magnitude and accuracy (sensitivity), sometimes as "accuracy" [p. 24, line 580], and sometimes as metacognitive accuracy [see p. 24, line 581]. I strongly encourage the authors to use terminology consistent with previous research conducted via decision-making and metacognition/learning paradigms. I also encourage the authors to present relevant calibration curves for each test.

2. I doubt that Dibbelt and Kuhl would use confidence judgements to capture the "post-decisional maintenance processes" described in their theory. In general, I cannot help but question the authors' choice to operationalise Dibbelt and Kuhl's model using the metrics selected. Currently, the provided justifications are very relaxed (see p. 7). Please provide more detailed justifications.

3. I also encourage the authors to create a table stating definitions and relevant formulae for ALL metrics used in this study. The latter was a point of much confusion to me, as Table 1 was not sufficient. E.g., I was not sure whether the authors used the absolute difference between confidence and accuracy (referred to on p. 4) or a raw difference, which would also capture under-confidence.

Some other questions. The authors state: "The association between bias and accuracy is complex. Both extremely high and low bias would lead to poor accuracy (very low discriminatory power between correct and incorrect choices). However, apart from extreme cases, the dimensions become more independent, as documented by dissociating effects found in the aforementioned studies." (p. 5). I find this statement to be misleading. The dimensions do not become more independent, as there is a straightforward mathematical dependency between bias and accuracy (see the sketch below).

Papers based on the individual differences approach of direct relevance to this manuscript (more are available):

Jackson, S. A., Kleitman, S., Stankov, L., & Howie, P. (2017). Individual differences in decision making depend on cognitive abilities, monitoring and control. Journal of Behavioral Decision Making, 30(2), 209-223. THIS PAPER TALKS ABOUT THE CRUCIAL LINK BETWEEN CONFIDENCE JUDGEMENTS AND ACTION CONTROL.

Kleitman, S., & Stankov, L. (2007). Self-confidence and metacognitive processes. Learning and Individual Differences, 17(2), 161-173.

Stankov, L., & Crawford, J. D. (1997). Self-confidence and performance on tests of cognitive abilities. Intelligence, 25(2), 93-109. BOTH PAPERS ADDRESS MEASUREMENT ISSUES OF CONFIDENCE AND ITS CALIBRATION.

Method/Results:

Why were these sample sizes selected in each study and for each experimental condition (n = 30)? And why were more participants not recruited when some did not show up for the second experimental session?

In the results, the authors should report descriptive statistics for ALL of the metrics used, including estimates of their internal consistency whenever appropriate. Also, was the accuracy of performance analysed in a separate analysis? Were these results presented?
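A minimal sketch of this dependency, assuming the bias score is operationalized as mean confidence minus proportion correct (a common convention in the calibration literature); the symbols N, c_i, and a_i are introduced here purely for illustration and are not taken from the manuscript under review:

\[
\mathrm{bias} \;=\; \bar{c} - \bar{a},
\qquad
\bar{c} = \frac{1}{N}\sum_{i=1}^{N} c_i,
\qquad
\bar{a} = \frac{1}{N}\sum_{i=1}^{N} a_i
\]

where \(c_i \in [0, 1]\) is the confidence rating on trial \(i\) (rescaled to the 0-1 range) and \(a_i \in \{0, 1\}\) codes whether trial \(i\) was answered correctly. Under this convention, for any fixed mean confidence \(\bar{c}\), bias is a strictly decreasing linear function of accuracy \(\bar{a}\), so the two measures are related by construction rather than becoming independent away from the extremes.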
An optional suggestion: I would also like to see correlations between the different metrics, possibly in the appendices/supplementary materials. I'm painfully aware of the small sample sizes used in both studies; but still, these estimates might provide further clarification of what is going on. Also, was there an effect of gender?

Discussion in Experiment 1:

The authors state: "Confirmation bias assumes biased post-decisional sampling, where evidence confirming choice is associated with a greater gain [78]. This account predicts that the longer information is processed, the stronger the bias should be" (p. 25, lines 595-97). I do not agree with this stipulation, as longer information processing can also indicate that the individual is considering other options, activating System 2, thus resulting in a smaller confirmation bias.

The authors also state that "Additionally, some of the results might have been confounded by the ceiling effect and an ambiguous confidence scale" (p. 25, lines 601-2). These two points are unclear. Providing descriptive statistics can help to clarify the existence of a ceiling effect. I am not clear, however, on why the authors claim that the confidence scale used in Experiment 1 was ambiguous. And how was it different in Experiment 2?

Experiment 2:

The authors state: "Perceptual choices are associated with an objective, external criterion that defines the quality of the choice. Value-based choices are driven by an internally defined criterion supported by subjective evidence. Previous studies comparing the domains suggest similar cognitive mechanisms associated with the decision process [82] and some similarities between their neural signatures [83]. However, we are unaware of any studies comparing metacognitive processing between the two." (p. 26, lines 618-623). This statement is incorrect, as there is extensive previous research available, e.g. (and there are more):

Kleitman, S., & Stankov, L. (2001). Ecological and person-driven aspects of metacognitive processes in test-taking. Applied Cognitive Psychology, 15, 321-341. doi:10.1002/acp.705

Stankov, L. (1998). Calibration curves, scatterplots and the distinction between general knowledge and perceptual tasks. Learning and Individual Differences, 10(1), 29-50.

Method/Results:

Please justify the sample sizes and provide the clarifications specified for Experiment 1 (e.g., a table with descriptive statistics, calibration curves). Were there gender differences?

Discussion/general discussion:

"Our results expand on the literature providing a universal mechanism shaping basic decision-making even in the absence of affective manipulation." (p. 34). I find the use of the word "universal" to be a bit inflated, especially given the limitations described above. Also, what seems to be missing for me is a discussion of how, despite using a completely different method and analytics, some of these results extend and replicate the results obtained via the individual differences paradigm.

Below is some additional feedback from one of my senior PhD students, whose research focuses on confidence, its calibration, and control aspects. I endorse their comments and think they should help the authors with their revisions.

Reviewer #1a: "This is an interesting paper which compares action control to cognitive and metacognitive constructs within decision-making. The paper investigates an important research area, combining individual differences in cognitive and metacognitive processes with decision-making processes. Some comments:
1. There seem to be considerable differences between the theories surrounding High and Low Action Control individuals and the results of the current experiment. Much of the Introduction is about how action-oriented individuals have beneficial attributes, particularly those surrounding flexible decision-making in more challenging situations, whilst state-oriented individuals cannot adjust plans and will fail in their decision-making. From the results, it appears that the only difference between the two groups is that one is more confident than the other. The authors mention that without a time pressure manipulation, it was hard to induce differences. However, I don't see how this is a limitation considering participants had generally responded to items within 1 second.

2. The action control variable appears to be similar in definition to adaptability and resilience, specifically within the decision-making domain. It would be good to mention this to some extent.

3. Although I understand the benefits of confidence, particularly those stated in the conclusion, it is important to point out the differences in metacognitive accuracy, or lack thereof. The current study indicates that metacognitive accuracy is either greater in, or similar between, the action-oriented and state-oriented groups. In this sense, it is important to note that action-oriented individuals appear to have greater confidence without the negative aspect of overconfidence.

Minor Comments:

- In the Introduction, the authors mention that Action Control is related to differences in cognitive control and ability, citing moderate positive correlations with conscientiousness and self-monitoring. Although I understand the relationship between self-monitoring and cognitive control, I do not see any mention of the relationships with cognitive ability (i.e., intelligence).

- In the 2nd experiment, are participants allowed to rate preferences similarly across items (e.g., a preference of 80 for both)? If this is the case, how would accuracy be computed if the two are paired with each other?

- Please label the y axes of the Experiment 2 figures."

Reviewer #2: The paper presents a well-designed, novel, and interesting study with results that could potentially make an important contribution to the literature. The drawbacks of the study include the small sample size, which may have limited the possibility of detecting between-subject effects and their interactions, as well as the complexity of the paper, which makes it difficult to understand and appreciate the findings. As a general recommendation to improve the paper's readability, I would suggest making more connections in the discussion to the original hypotheses, and more comparison and connection between the findings of the two experiments. I would also like to point out some additional minor points in the text where more detail could benefit the presentation.

Line 104: "The association between bias and accuracy is complex. Both extremely high and low bias would lead to poor accuracy": perhaps saying "strong positive" and "strong negative" instead of "high" and "low" would be clearer.
Line 110: Adding a few words to clarify the notion of "alternative spreading" for readers familiar with the concept but not the term would be good (e.g., "alternative spreading (opposite changes in the attractiveness of the preferred and the competing alternatives)").

Lines 258-260: It might be good to quantify the magnitude of the difference between the contrast groups (as Cohen's d) here and in Experiment 2, to make it clear how large the effect size is, thanks to the large initial sample.

Line 280: It would be better to report the correlations between the ACS subscales in the present sample (N = 724), either here or below (Line 396).

Line 310: From the description of the Procedure it is not clear whether the study was carried out in a lab setting or online.

Line 355: "Conventionally, differences larger than 10 in DIC scores are considered significant [61]": this appears to simplify things a bit too much, given that Spiegelhalter et al. (2002, p. 613) write: "Burnham and Anderson (1998) suggested models receiving AIC within 1–2 of the 'best' deserve consideration, and 3–7 have considerably less support: these rules of thumb appear to work reasonably well for DIC." Based on this, the authors' approach to interpreting DIC appears to be an oversimplification, with ramifications for model choice: "All the models displayed very similar DIC scores (between 8441 and 8448), indicating that adding action control as a between-person factor did not significantly improve model fit." (Line 445): it looks like models with a DIC difference of 7 can hardly be deemed equivalent (although several models with DIC 8441-8443 perhaps can be). Because this rule of thumb might have substantive consequences, a more careful treatment of DIC seems necessary. I would suggest not interpreting the differences in DIC between models as "significant", but rather in terms of relatively better/worse model fit.

Line 423: "Condition-dependent" probably needs a dash.

Lines 420-430: A description of the DDM parameters might be better placed in the Methods section.

Lines 542-543: The legend for Figure 2 needs to explain all the abbreviations (ACC, SP, CON, NEU, INC).

Line 546: It would be good if the results and discussion of Experiment 1 made reference to the numbered hypotheses presented at the beginning.

Line 588: "These results align with action-driven theories of post-decisional processing." An explanatory sentence describing the predictions of these theories would be good here.

Line 654: It would be great to present the study goals in a more structured way, as numbered hypotheses, as in the description of the first experiment.

Line 672: How exactly was the task formulated for participants? The text mentions "size" and "surface area", which might not be completely interchangeable.

Lines 685-686: The preference ratings are based on each object being rated by a participant on a 100-point scale, and just once. The subsequent procedure, which uses these individual ratings to generate easy/difficult pairs, treats the preference ratings as perfectly measured. This approach is fine for size, which is an objective property, but for value one can expect some measurement error (which might be fairly large, given the single-item nature of the measure). For "difficult" pairs, the distance between the two items being compared might actually be smaller than the uncertainty associated with the error of measurement (which, unfortunately, is unknown). As a result, an item rated 50% might really be less preferred than one rated 48%.
The untenable assumption of perfect measurement makes it hardly possible to interpret accuracy ratings for values (at least for difficult choices), and this needs to be reflected in the presentation of the results and in the discussion. (As an additional possibility, it appears that individuals did not have a time limit and could change their answers, which may have introduced some metacognitive bias, especially for state-oriented individuals. Could it be that the size and the distribution of the measurement error in deriving the initial value ratings were different in action- and state-oriented individuals?)

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-21-08854R1
Are you confident enough to act? Individual differences in Action Control are associated with post-decisional metacognitive bias
PLOS ONE

Dear Dr. Zajkowski,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 04 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Dragan Pamucar
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear Authors, I believe this manuscript version is improved, but I still have several questions and concerns.

1) I was pleased to see calibration curves and correlations added in the appendices. However, no interpretations have been offered. Please provide them, especially with respect to your findings, e.g., how the correlations between accuracy and confidence, confidence and meta-d', and accuracy and meta-d' may have affected your results and their interpretation (both when these correlations are high and when they are low).

2) Is it possible to adjust the formatting of your correlation tables? I am aware that R produced them, but I wonder whether you can fix the font and appearance.

3) I question your main conclusion that "We propose that a high confidence bias might be crucial for the successful realization of intentions in many real-life situations". A high confidence bias is linked to arrogance and not such great life outcomes; see:

Bruine de Bruin, W., Del Missier, F., & Levin, I. P. (2012). Individual differences in decision-making competence. Journal of Behavioral Decision Making, 25, 329-330.

Bruine de Bruin, W., Parker, A. M., & Fischhoff, B. (2007). Individual differences in adult decision-making competence. Journal of Personality and Social Psychology, 92(5), 938.

Kleitman, S., Hui, J. S. W., & Jiang, Y. (2019). Confidence to spare: individual differences in cognitive and metacognitive arrogance and competence. Metacognition and Learning, 14(3), 479-508. https://doi.org/10.1007/s11409-019-09210-x

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83-87.

Del Missier, F., Mäntylä, T., & Bruine de Bruin, W. (2012). Decision-making competence, executive functioning, and general cognitive abilities. Journal of Behavioral Decision Making, 25(4), 331-351.

Please compare and contrast your research results with these studies. The significant limitations of your studies are the small sample sizes, a limited selection of research tasks, some statistical dependency among your metrics, and a narrow range. Given all these limitations, I believe you should 'soften' your conclusions and consider possible alternative explanations.

4) The paper is very dense. I think it would benefit from good editing/proofreading.

Reviewer #2: Thank you for addressing the feedback and adding the missing details, as well as supplementary analyses to clarify things. I am completely satisfied with the revision and have no more substantial recommendations, except for one: please edit the newly incorporated text fragments for language and readability, e.g.:

"All tasks were performed offline [and] took place in a testing lab on the premises of the university." (p. 15)

"To account for this, we perform an additional simulation-based analysis (see: Supplementary Materials) and show that we have over 80% power..." (p. 40) -- it is better to explain the steps of the analyses in the past tense ("we performed... and found...") and only the conclusions in the present tense.
"We found no support for hypotheses 1 to 3, postulating differences in the decision-making process." (p. 25) -- the comma is probably not needed, as it is a reduced relative clause. There are also similar minor issues in other fragments. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
Revision 2
PONE-D-21-08854R2
Are you confident enough to act? Individual differences in Action Control are associated with post-decisional metacognitive bias

Dear Dr. Zajkowski,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Dragan Pamucar
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: Thank you for revising the manuscript, in particular, with respect to the language. I have no further comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No
Formally Accepted
PONE-D-21-08854R2
Are you confident enough to act? Individual differences in Action Control are associated with post-decisional metacognitive bias

Dear Dr. Zajkowski:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of
Dr. Dragan Pamucar
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.