Peer Review History
Original Submission: January 13, 2025
PCOMPBIOL-D-25-00064
Using artificial neural networks to reveal the human confidence computation
PLOS Computational Biology

Dear Dr. Shekhar,

Thank you for submitting your manuscript to PLOS Computational Biology. After careful consideration, we feel that it has merit but does not fully meet PLOS Computational Biology's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

You will see that both reviewers found the work interesting but pointed out some issues with the manuscript. In particular, I will draw your attention to the comments of reviewer 2 about the predictions of the models tested and their misalignment with human data: what does it mean for the conclusions drawn that all models misaligned with humans?

Please submit your revised manuscript within 60 days, by Jun 17 2025, 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at ploscompbiol@plos.org.

When you're ready to submit your revision, log on to https://www.editorialmanager.com/pcompbiol/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

* A rebuttal letter that responds to each point raised by the editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. This file does not need to include responses to formatting updates and technical items listed in the 'Journal Requirements' section below.
* A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
* An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, competing interests statement, or data availability statement, please make these updates within the submission form at the time of resubmission. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript.

Kind regards,

Alex Leonidas Doumas
Academic Editor
PLOS Computational Biology

Lyle Graham
Section Editor
PLOS Computational Biology

Journal Requirements:

1) Please provide an Author Summary. This should appear in your manuscript between the Abstract (if applicable) and the Introduction, and should be 150-200 words long. The aim should be to make your findings accessible to a wide audience that includes both scientists and non-scientists. Sample summaries can be found on our website under Submission Guidelines: https://journals.plos.org/ploscompbiol/s/submission-guidelines#loc-parts-of-a-submission

2) Please upload all main figures as separate Figure files in .tif or .eps format. For more information about how to convert and format your figure files please see our guidelines: https://journals.plos.org/ploscompbiol/s/figures

3) We have noticed that you have uploaded Supporting Information files, but you have not included a list of legends. Please add a full list of legends for your Supporting Information files after the references list.

4) Please amend your detailed Financial Disclosure statement. This is published with the article.
It must therefore be completed in full sentences and contain the exact wording you wish to be published. 1) State the initials, alongside each funding source, of each author to receive each grant. For example: "This work was supported by the National Institutes of Health (####### to AM; ###### to CJ) and the National Science Foundation (###### to AM)." 2) State what role the funders took in the study. If the funders had no role in your study, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Reviewers' comments:

Reviewer's Responses to Questions

Reviewer #1: This paper makes a clear case for a simple point: comparing the top 2 options gives the best prediction of confidence ratings. The authors use multiple different analyses to support this point, including individual and group comparisons and exploring other network architectures. The claim is also consistent with cited past work.

What was lacking in the paper was an analysis of the choices made by the models. Specifically, are these confidence ratings taken only from images where the subject and the model make the same choice? It is hard to think about how to compare the model's confidence in an incorrect choice to a subject's confidence in their correct choice or vice versa. Furthermore, do the different architectures tested have better or worse correspondence to human choice on an image-by-image basis? This could be relevant for understanding their confidence rating differences (this is hinted at in the results but I don't think it was reported on directly).

Do the authors think that performing step 2 training on individual data would impact the results in any way?

The authors find no confidence difference between speed-focused and accuracy-focused trials. Can they remark on whether this is consistent with prior studies?

"However, while this type of computation may not match well what humans do, it must be noted that it produces more informative confidence ratings." What does this mean and what is it based on?

Do the authors have anything to note about what aspects of the data the models still don't match, in order to inspire future research?

Clarification: I think it would be useful for readability to have some more description of the model architectures and training in the main Results. Please add clarification to the methods about what timepoint/layer the logits/raw activations are taken from for confidence calculations. (I'm assuming it is the timepoint where the model reached the threshold or the layer they did, depending on model architecture.)

typo: "Second, we evaluated the effect of the speed-accuracy trade-off manipulation on confidenceClick or tap here to enter text.."

Reviewer #2: The current report examines the best way to compute confidence in deep neural networks, in order to mimic human confidence judgments. From a technical perspective this is excellent work, and likewise the manuscript is very well written. From a conceptual standpoint, however, the novel insights are less clear to me. Most researchers interested in human confidence would not be surprised that softmax is not a good rule to capture human confidence (I’m not aware of any theory making such claims), and likewise most researchers working with DNNs would not be interested in whether they model human confidence (but instead would want the most informative measure).

1. For metacognition researchers, the conclusion that human confidence is not well explained by softmax or entropy is not very surprising. First, it is the same conclusion reached by Li & Ma, admittedly using a more complex model that works with actual images. Also, I am not aware of any theory of confidence arguing that humans compute confidence using softmax or entropy. Second, a recent paper by Comay et al. 2023 (Cognition) showed that confidence depends on a dud alternative and proposed an average-residual account for their findings. It would be important to also test this strategy, and perhaps see whether the top2 rule can account for the dud effect (given that this seems to directly contradict predictions from this rule).

2. The authors model confidence based on the (accumulated) activation in the output layer, and call this evidence. I have some concerns about this. First, the top2 rule seems to imply that humans do not consider the evidence for other options, but this conclusion is only valid if the nodes in the output layer are independent. If that is not the case, and especially for MNIST it is not hard to conceive that evidence for 8 and 9 will not be independent of each other, the top2 rule could be an indirect way to consider all evidence. Second, I wonder whether this is the most sensible way to think about evidence, for example compared to a metacognitive module having access to the full processing stream of evidence. Why is confidence only calculated based on the final output node?

3. It is a bit worrisome that one of the three key empirical findings in the human data is not captured by (any of) the model(s). Humans do not give higher confidence in accuracy vs speed conditions, yet all models predict this. This is a rather glaring misfit that the authors seem to dismiss rather easily. Does this not point to a clear issue with the model(s)?

4. One of the first papers examining human confidence in DNNs, by Webb and colleagues (2023), is only briefly mentioned in the discussion as a paper implementing softmax confidence. My reading of this paper, however, is that confidence is implemented as a separate head which learns to compute confidence via reinforcement learning. As such, we don’t really know whether or not it learned to use softmax, or whether perhaps it learned to use the top2 diff rule. Likewise, in the discussion Rafiei et al. (2024) is dismissed as using the softmax rule, but in that paper it reads “The confidence of the model was obtained by taking the difference in evidence scores between the chosen response and the second-best choice.” Is this not the top2 rule?

5. As a minor point: it would be interesting to get some insights about how all of this work relates to the related human confidence literature. For example, is the top 2 rule consistent with confidence as a distance-to-criterion signal?

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available.
If there are restrictions on publicly sharing data or code—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: None

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

Figure resubmission: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. If there are other versions of figure files still present in your submission file inventory at resubmission, please replace them with the PACE-processed versions.

Reproducibility: To enhance the reproducibility of your results, we recommend that authors of applicable studies deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols
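As a concrete illustration of the confidence readouts debated in the reviews above (softmax, negative entropy, and the top-2 difference rule), the sketch below shows how each could in principle be computed from a vector of output-layer activations. This is a minimal, hypothetical Python example: the function names and toy logits are assumptions made for illustration and are not taken from the authors' code.

```python
# Hypothetical illustration (not the authors' implementation): three candidate
# confidence readouts computed from a vector of output-layer activations.
import numpy as np

def softmax_confidence(logits):
    """Softmax probability of the chosen (argmax) option."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float(p.max())

def negative_entropy_confidence(logits):
    """Negative Shannon entropy of the softmax distribution
    (higher when the distribution is more peaked)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float((p * np.log(p + 1e-12)).sum())

def top2_difference_confidence(logits):
    """Evidence for the best option minus evidence for the runner-up
    (the 'top-2 difference' rule discussed by the reviewers)."""
    top2 = np.sort(logits)[-2:]
    return float(top2[1] - top2[0])

# Toy output activations for a 4-alternative choice (made-up numbers).
logits = np.array([0.2, 1.5, 3.1, 2.9])
print(softmax_confidence(logits))          # highest softmax probability, about 0.48 here
print(negative_entropy_confidence(logits)) # negative entropy of the softmax distribution
print(top2_difference_confidence(logits))  # 3.1 - 2.9 = 0.2
```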
Revision 1
Dear Dr. Shekhar,

We are pleased to inform you that your manuscript 'Using artificial neural networks to reveal the human confidence computation' has been provisionally accepted for publication in PLOS Computational Biology.

Let me also thank you on behalf of myself and the reviewers for your very careful revision. You did an exemplary job clearly and thoroughly addressing the raised points in the cover letter, and reflecting those changes in the manuscript.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Alex Leonidas Doumas
Academic Editor
PLOS Computational Biology

Lyle Graham
Section Editor
PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: I appreciate the substantial additional analyses the authors have performed. It has helped to address my questions.

Reviewer #2: This is an excellent revision. The inclusion of several alternative models makes the paper much more interesting and compelling. It remains a bit unfortunate that the SAT manipulation provides such discrepant results between model and data (while I agree with the reply that the data are somewhat unusual, that of course does not explain the discrepancy), but this has been somewhat addressed. Congrats on an excellent revision.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: None

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No
Formally Accepted
PCOMPBIOL-D-25-00064R1
Using artificial neural networks to reveal the human confidence computation

Dear Dr Shekhar,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

For Research, Software, and Methods articles, you will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Anita Estes
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.