Peer Review History

Original Submission: June 4, 2024
Decision Letter - Darrell A. Worthy, Editor

PONE-D-24-20376
Multifaceted confidence in exploratory choice
PLOS ONE

Dear Dr. Solopchuk,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I received one review from a relevant expert who had a positive overall impression of the paper. The reviewer lists a few queries that should be addressed in a revision. I'll also note that the manuscript states that the data are publicly available, but I did not see a link in the paper. Please include the link in your revision, along with a response to each comment made by the reviewer.

Please submit your revised manuscript by Sep 26 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Darrell A. Worthy, Ph.D.

Academic Editor

PLOS ONE

Journal requirements:   

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for uploading your study's underlying data set. Unfortunately, the repository you have noted in your Data Availability statement does not qualify as an acceptable data repository according to PLOS's standards.

At this time, please upload the minimal data set necessary to replicate your study's findings to a stable, public repository (such as figshare or Dryad) and provide us with the relevant URLs, DOIs, or accession numbers that may be used to access these data. For a list of recommended repositories and additional information on PLOS standards for data deposition, please see https://journals.plos.org/plosone/s/recommended-repositories.

Additional Editor Comments (if provided):


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this manuscript. In this study, the authors examine whether human participants will differ in their confidence for immediate versus long term gains depending on whether they opt to exploit or explore in a horizon task. Across two studies, the authors find no evidence for their prediction that exploration choices are associated with greater confidence for long-term rewards. This is a concise study with a clear hypothesis that is appropriately addressed by the data and analytical approach. I have some queries about the study that would improve the overall clarity and may help the authors consider other possible explanations of the data, which I outline below:

Introduction:

It would be helpful for a broader readership to briefly define key terms like decision horizon, as well as outlining the benefits of different explore/exploit strategies for short or long horizons.

Methods:

There is an absence of important demographic information about participants. For example, what was the age, gender distribution and ethnicity of participants? The age and gender would be particularly important, given age-related changes in exploration strategies, as well as evidence that males employ different explore/exploit strategies compared to females (Bach et al., 2021; Mata et al., 2013). Reporting the demographic information of participants would also be important to ensure the findings are reproducible.

Regarding the location of participants, were checks conducted to ensure IP addresses matched participants’ reported location?

Could the authors clarify how the performance-dependent bonus was calculated?

Results:

Did the authors consider how the trial number was associated with self-report confidence for exploration choices? It may be that as participants become more familiar with the task structure they become better able to accurately estimate their confidence at increasing overall rewards over the task. Relatedly, it would be helpful in the methods to note how many trials the participants completed in total.

Another possible reason for the null findings is that the sample size was not large enough to detect a 3-way interaction. Did the authors conduct a power analysis to establish the sample size for this study?

I note the degrees of freedom for the correlations was 137 and 92 for Experiments 1 and 2, respectively (lines 150-153). Does that mean some participants did not have confidence data?

Discussion:

The discussion is clear and concisely summarises explanations for these findings.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Dear Editor, dear Reviewer,

Many thanks for considering our manuscript. We updated the submission with a link to publicly available code and data: https://github.com/solopchuk/confidence_exploration. Here are the point-by-point replies to the comments:

>Reviewer #1: Thank you for the opportunity to review this manuscript. In this study, the authors examine whether human participants will differ in their confidence for immediate versus long term gains depending on whether they opt to exploit or explore in a horizon task. Across two studies, the authors find no evidence for their prediction that exploration choices are associated with greater confidence for long-term rewards. This is a concise study with a clear hypothesis that is appropriately addressed by the data and analytical approach. I have some queries about the study that would improve the overall clarity and may help the authors consider other possible explanations of the data, which I outline below:

>Introduction:

>It would be helpful for a broader readership to briefly define key terms like decision horizon, as well as outlining the benefits of different explore/exploit strategies for short or long horizons.

We restructured the relevant introduction paragraph accordingly:

‘Such conflict arises in the exploration-exploitation dilemma, in which optimal choices trade off exploiting existing knowledge to pursue immediate rewards against exploring to gather information for the sake of future gains (Gittins ‘79). Longer decision horizons (i.e. more future choices) tip the balance towards exploration, while shorter horizons favour choosing the highest-reward option known. Human subjects behave in accordance with this dependency, with longer horizons leading to more uncertainty-directed, as well as more stochastic, choices (Wilson et al ‘14).’

>Methods:

>There is an absence of important demographic information about participants. For example, what was the age, gender distribution and ethnicity of participants? The age and gender would be particularly important, given age-related changes in exploration strategies, as well as evidence that males employ different explore/exploit strategies compared to females (Bach et al., 2021; Mata et al., 2013). Reporting the demographic information of participants would also be important to ensure the findings are reproducible.

Thank you for pointing this out, we have updated the methods section with the demographics data:

‘We filtered the submissions for performance (>65% accuracy in the short horizon condition; immediate and total reward confidence SD > 5), which left us with 150 participants in experiment 1 (86M, 63F, 1 not indicated, mean age = 39.12) and 100 participants in experiment 2 (66M, 33F, 1 not indicated, mean age = 41.64).’

>Regarding the location of participants, were checks conducted to ensure IP addresses matched participants’ reported location?

We used the platform 'Prolific', which checks IP addresses; according to this excerpt from their website: "IP addresses are checked, then identities verified with Onfido’s bank-grade ID checks."

>Could the authors clarify how the performance-dependent bonus was calculated?

The participants were instructed that their payment would depend on both choices and confidence judgement as follows:

"Your payment for taking part in this experiment is composed of 2 equal halves: 1. A computer program will choose a game you played randomly, and the points you earned will be converted to money. You can increase your chances to get a high payoff for this half if you collect a lot of points in as many games as possible. 2. A computer program will choose a game randomly, and reward you based on the confidence judgement you made in that game, using a lottery. The payoff of the lottery is set up in such a way that you can increase your payoff chances by evaluating and reporting your real confidence as accurately as possible in as many games as possible."

Despite the instruction, we gave everyone a fixed bonus of 1 pound. We updated the methods to clarify this and included the instruction text above in the Appendix.

>Results:

>Did the authors consider how the trial number was associated with self-report confidence for exploration choices? It may be that as participants become more familiar with the task structure they become better able to accurately estimate their confidence at increasing overall rewards over the task. Relatedly, it would be helpful in the methods to note how many trials the participants completed in total.

Thank you for the suggestion. Since some participants made no or only a few purely exploratory choices (which we define as those with a lower mean and just one sample shown), we pooled such choices across all subjects. We found no correlation between the difference in confidence about total and immediate reward and the trial number (Experiment 1: r(547) = 0.05, p = 0.2, BF10 = 0.121; Experiment 2: r(385) = -0.06, p = 0.25, BF10 = 0.122). We now include this analysis in the manuscript and additionally provide the corresponding scatter plots below:

‘In addition, it could be that the difference in confidences about immediate and total reward gradually emerged over the course of the experiment as the subjects became more familiar with the task structure. Since some subjects made no or only a few purely exploratory choices (which we define as those with a lower mean and just one sample shown), we pooled such choices across all subjects. We found no correlation between the difference in confidence about total and immediate reward and the trial number (Experiment 1: r(547) = 0.05, p = 0.2, BF10 = 0.121; Experiment 2: r(385) = -0.06, p = 0.25, BF10 = 0.122).’
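The pooling-and-correlation analysis described above could be sketched as follows. This is not the authors' code: all field names and the data are assumptions (synthetic stand-ins for the per-trial task logs), and the Bayes factors reported in the manuscript would come from a separate Bayesian correlation test not shown here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-trial records pooled across subjects; every field name here is
# an assumption standing in for the real task logs.
n = 2000
trial_number = rng.integers(1, 113, size=n)     # 112 trials per subject
chose_lower_mean = rng.random(n) < 0.3          # picked the lower-mean option
samples_shown = rng.choice([1, 3], size=n)      # samples shown for that option
conf_total = rng.uniform(50, 100, size=n)       # confidence in total reward
conf_immediate = rng.uniform(50, 100, size=n)   # confidence in immediate reward

# "Purely exploratory" choices: lower-mean option chosen, only one sample shown.
explore = chose_lower_mean & (samples_shown == 1)

# Correlate the confidence difference with trial number across pooled choices.
conf_diff = conf_total[explore] - conf_immediate[explore]
r, p = stats.pearsonr(conf_diff, trial_number[explore])
print(f"r({int(explore.sum()) - 2}) = {r:.3f}, p = {p:.3f}")
```

Pooling across subjects is a pragmatic choice when per-subject exploratory trial counts are too low for within-subject correlations, at the cost of conflating within- and between-subject variability.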

[Figure in the response file]

We also updated the methods section with the number of trials (112 in both experiments): ‘In both experiments, each subject performed 112 trials in total.’

>Another possible reason for the null findings is that the sample size was not large enough to detect a 3-way interaction. Did the authors conduct a power analysis to establish the sample size for this study?

Following the completion of experiment 1, we estimated how many more subjects we would have needed in order to obtain a decisive Bayes factor for the difference in confidence about immediate and total reward following an exploratory choice. For each of 10,000 simulations, we appended N extra sampled subjects to the original dataset, computed a Bayesian t-test, and recorded the resulting Bayes factor. We then decided to collect 100 subjects, as this would result in a less than 10% chance of obtaining an inconclusive result (see the last panel of the figure below). We also followed a suggestion to restrict the confidence scale to 50 to 100 for better calibration. Together, these changes gave rise to experiment 2, with its conclusive negative result.
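This kind of simulation-based design analysis can be sketched in a few lines. The sketch below is not from the response file; it assumes a one-sample JZS Bayes factor (Rouder et al., 2009) with the default Cauchy prior scale r = 0.707, extra subjects resampled with replacement from the observed per-subject confidence differences, and 1/3 < BF10 < 3 treated as inconclusive. The `observed` array is synthetic and stands in for the real experiment-1 data.

```python
import numpy as np
from scipy import integrate, stats

def jzs_bf10(x, r_scale=0.707):
    """One-sample JZS Bayes factor (Rouder et al., 2009) for mean(x) != 0 vs. = 0."""
    n = len(x)
    nu = n - 1
    t = stats.ttest_1samp(x, 0.0).statistic
    # Marginal likelihood under H0 (up to a constant shared with H1).
    null = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)
    # Under H1, integrate over g; this induces a Cauchy(0, r_scale) prior on effect size.
    def integrand(g):
        a = 1 + n * g * r_scale ** 2
        return (a ** -0.5
                * (1 + t ** 2 / (a * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    alt, _ = integrate.quad(integrand, 0, np.inf)
    return alt / null

rng = np.random.default_rng(1)
# Synthetic stand-in for the experiment-1 per-subject confidence differences.
observed = rng.normal(0.05, 1.0, size=139)

def p_inconclusive(data, n_extra, n_sims=200):
    """Fraction of simulations whose BF10 stays in the inconclusive 1/3-3 band
    after appending n_extra subjects resampled (with replacement) from data."""
    hits = 0
    for _ in range(n_sims):
        extra = rng.choice(data, size=n_extra, replace=True)
        bf = jzs_bf10(np.concatenate([data, extra]))
        hits += int(1 / 3 < bf < 3)
    return hits / n_sims

print(p_inconclusive(observed, n_extra=100))
```

Sweeping `n_extra` over a grid and plotting `p_inconclusive` against it reproduces the kind of curve the authors describe, from which the smallest N keeping the inconclusive probability below 10% can be read off.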

[Figure in the response file]

>I note the degrees of freedom for the correlations was 137 and 92 for Experiments 1 and 2, respectively (lines 150-153). Does that mean some participants did not have confidence data?

The reason for the reduced degrees of freedom is that some participants did not make any purely exploratory choices (lower mean and one sample shown), for which we expected the confidence judgments to be dissociated.

>Discussion:

>The discussion is clear and concisely summarises explanations for these findings.

Attachments
Submitted filename: confidence_exploration_response_to_reviewers.docx
Decision Letter - Darrell A. Worthy, Editor

Multifaceted confidence in exploratory choice

PONE-D-24-20376R1

Dear Dr. Solopchuk,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

I think you have adequately addressed the concerns raised by the reviewer.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Darrell A. Worthy, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Formally Accepted
Acceptance Letter - Darrell A. Worthy, Editor

PONE-D-24-20376R1

PLOS ONE

Dear Dr. Solopchuk,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Darrell A. Worthy

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.