Peer Review History
| Original Submission: November 20, 2019 |
|---|
|
PONE-D-19-32012

Captivated by thought: "sticky" thinking leaves traces of perceptual decoupling in task-evoked pupil size

PLOS ONE

Dear Mr. Huijser,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you can see below, I have received two expert reviews. Each reviewer raises a number of important points that need to be addressed. In particular, both reviewers indicate that the interpretations and conclusions with regard to the processes at work here might not be appropriately based on the data presented (see Reviewer 1, main point 2, and Reviewer 2, main point 2). This is an important issue, as PLOS ONE specifically requires that “the data presented in the manuscript must support the conclusions drawn. Submissions will be rejected if the interpretation of results is unjustified or inappropriate, so authors should avoid overstating their conclusions. Authors may discuss possible implications for their results as long as these are clearly identified as hypotheses instead of conclusions.” It is therefore essential to address these issues in a revision.

Furthermore, both reviewers have questions about the sample size, the validity of the sticky thought measure, and analytical choices concerning this and other measures. PLOS ONE requires that sample sizes be large enough to produce robust results, so it is necessary to address this. It is also necessary to address the validity of the measure (see also Reviewer 1, main point 1, and below), the number of comparisons, and other analytical choices, and to report model data for all comparisons and descriptive statistics for all variables.
Both reviewers express concerns with regard to the theoretical validity of the concept of sticky thoughts. They point out potential overlap with other types of thoughts defined elsewhere in the literature, and give excellent suggestions for further improving the theoretical clarity of your manuscript. Finally, both reviewers provide an extensive list of minor issues (clarifications, typos, missing items/legends in figures, etc.) that need to be addressed. I would like to invite a revision that addresses these issues.

We would appreciate receiving your revised manuscript by Mar 08 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that, if applicable, you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,
Myrthe Faber
Academic Editor
PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file).
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript describes a study investigating the pupillometric and behavioral performance correlates of a category of thought dubbed "sticky thinking", a form of perseverative thought that is hard to disengage from. I think that the study is interesting and there is value in studying this particular type of thinking; I also liked the transparency with which the methods and analyses are described, and I think that GAMMs are very promising for describing this type of data. I also have some major and minor comments that, from my point of view, could help improve the paper.

My main comments are about the two central topics in the manuscript: sticky thought, and perceptual decoupling. Regarding sticky thought, I see the authors introducing this concept as a (somehow) new category of thought.
However, it is not very clear (especially in the Introduction) how it differs from other types of mind-wandering or self-generated thought, and how it relates to them. The whole mind-wandering literature is plagued with a problem of semantics: researchers in the field often use different words to refer to the same thing, or the same words for different things. I think that in this context, conceptual clarity is even more important. As it is described in the Introduction, sticky thought seems greatly conflated with general mind-wandering: e.g., the descriptions in lines 39 to 43 could easily be used for mind-wandering. Sticky thought also seems conflated with rumination, a negative form of perseverative thought common in psychopathologies like major depressive disorder or anxiety disorders.

I think that, if such a new concept is to be introduced in the literature, it should be clearly defined relative to existing concepts: for example, the authors say that it is a rigid, narrowly focused thought process that is hard to disengage from, but how is it different from rumination? I imagine that the authors implicitly assume that the stickiness of thought is a category of thought that is orthogonal to the valence of the thought (so that it can also be not negative), but in this case "neutral" and "positive" sticky thought should exist. Is this the case? Is there research on this, or could the authors provide examples? As the debate on the categories of thought is an ongoing one (see Seli et al., 2018, TiCS; Christoff et al., 2018, TiCS), I think it would be helpful to better define the concept of sticky thought relative to other types of thought. Moreover, the evidence provided in, for example, lines 46-54 or lines 65-67 is all related to rumination/worry, but the authors’ conclusions generalise to sticky thoughts: this does not seem warranted.
While the idea of a “sticky” factor of thought is interesting, as it stands I was not 100% convinced that it exists outside of rumination-like thoughts.

My other main comment is on the concept of perceptual decoupling. This refers to a hypothesized process by the brain, aimed at insulating the mental stream of thought from external (perceptual) distractions. It is an interesting hypothesis, but whether such a process exists in the brain is very much open to debate, and evidence is still limited. One previous study (Smallwood et al., 2011, PLOS ONE) somewhat linked this hypothesized process to a reduction in pupil sizes during on-task and off-task periods of a sustained attention task; however, this is far from proving that small pupils are an index of perceptual decoupling.

While reading the manuscript, I feel that the authors imply that finding smaller task-evoked pupil responses automatically signals a perceptual decoupling process in action (e.g. lines 145-146, or 627-632). This seems like a form of reverse inference to me. The fact that perceptual decoupling (if it exists) might be linked to smaller pupils does not necessarily mean that finding small pupils means that the brain is decoupling from the external environment. If this hypothesized process is really at work, we should observe other concurrent measures of reduced processing of the external environment: I don't think that the current paradigm in the study allows for that, and additionally, there appears to be evidence for no differences in behavioral indices of RT and RT variability between sticky and non-sticky thinking (e.g., line 623). While the stickiness factor did discriminate in no-go accuracy, this is a finding also common for mind-wandering thoughts in general, and is not intrinsically linked to perceptual decoupling. It is also not clear what the mechanism linking smaller task-evoked pupil responses to a perceptual decoupling process would be.
All in all, I do not think that the current study provides enough evidence to warrant strong affirmations such as those in lines 627-632, or lines 686-688. I don't know if the authors agree, but my suggestion would be to use much more cautious language throughout the manuscript.

Some other comments that I hope the authors will find useful:

Lines 58-84: This is true but obviously not limited to sticky thought; it is one of the main obstacles in the field of mind-wandering and of consciousness research in general. I feel like the whole paragraph could be shortened if the authors have an interest in doing so, given that thought probes are pretty much the standard way in the field to study these types of ongoing thoughts.

Lines 93-95: I have read the cited study and I couldn’t find some of the results here described (e.g. off-task thought, but not stickiness specifically, seemed to be related to task accuracy). Could the authors double-check? My apologies in advance if I missed or misread something.

Line 97: One key reference that could be added here is Seli, Cheyne & Smilek (2013), JEP:HPP.

Lines 141-142: This is slightly misleading, as sticky thought is a novel concept. I do believe that there is some pupillometric research on rumination (e.g. Siegle, Steinhauer, Carter, Ramel & Thase, 2003), which could be added here, if the authors think it would be interesting for their argument.

Lines 147-149: I was confused by this sentence, as it is possible to dissociate, and measure separately, baseline and task-evoked pupil responses. Could the authors clarify?

Lines 149-151: Another study that could be discussed here, and potentially in other parts of the manuscript, is Konishi, Brown, Battaglini & Smallwood (2017), in which the authors find smaller baseline pupils for off-task thought, and specifically for negatively valenced and intrusive thoughts. As an author of that study I have a conflict of interest in pointing to it, but it seems to have some obvious links to the present manuscript (e.g. the concepts of intrusive and negative thoughts seem very close to those of rumination/worry).

Lines 154-158: Indeed, both smaller and larger baseline pupil sizes have been found in the literature (e.g. Gilzenrat, Nieuwenhuis, Jepma & Cohen, 2010; Smallwood et al., 2011, 2012; Van den Brink, Murphy & Nieuwenhuis, 2016; Van Orden, Jung & Makeig, 2000; Konishi et al., 2017; Unsworth & Robison, 2016), and it is still not 100% clear what factors account for these differences.

Lines 163-165: This seems a little bit like a filler sentence. Maybe some concrete examples could be provided.

Line 166: A “which” seems to be missing in the phrase “in we embedded”.

Lines 170-174: To me it appears that these sentences construct a false dichotomy between sticky thoughts and general off-task thought/mind-wandering. For example, all the predictions described here can relate to off-task thought too.

Lines 182-183: How was the number of participants decided?

Lines 231-232: I think I missed how the experiment included 16 personal concern triplets, if the authors selected the 2 main concerns for each participant and then translated each into a triplet of words. Sorry in advance if I misunderstood.

Lines 249-251: Was this second question always presented, even if, for example, participants reported being on-task or externally distracted in the first question? Were the authors still analysing such cases?

Figure 2: There’s a small typo in the third question (maters instead of matters).

Line 263: A reference could be added for PsychoPy (the most recent study is Peirce, Gray, Simpson, et al., 2019, Behavior Research Methods).

Lines 304-305: It seems like the fixed order would cue participants to know in advance when a thought-probe would be presented. This could be seen as an issue; do the authors have any opinions on this?
Lines 322-324: How were these cut-offs decided?

Lines 364-365: This decision seems a bit strange in the context of pupil size and arousal, as external distraction is likely arousing, the opposite of inalertness.

Lines 367-368: What was the reason? What was the point of including a 6-point scale if it’s not used in the analyses?

Lines 481-483: Apologies in advance as I’m not sure if I’ve missed it, but was this result also taking into account the overall proportion of mind-wandering reports? If not, this result would be confounded by that, as they also increase over time.

Figures 5 and 6: The legend for the plot on the right could be a bit clearer (or is missing).

Lines 511-512: There is a serious and common problem in these designs, in which a thought-probe is always presented after a target stimulus. Following a mistake, participants might confabulate and report they were not on-task. This is a hard problem to overcome, but I think that this possibility should be discussed, given that the whole field relies on self-reports.

Lines 516-519: Again, I am not 100% sure if I missed this, but was this analysis also taking into account the underlying proportion of mind-wandering thoughts?

Lines 534-535: Did the authors analyse baseline pupil sizes across attentional states with simpler models such as LMMs? Given the amount of previous research on this, I’m just wondering if the null result depends somewhat on the GAMMs.

Lines 565-567: It seems a bit strange that the difference is smaller between sticky and non-sticky than between sticky and neutral. If these are opposite ends of a continuum, one would expect the difference to be bigger between sticky and non-sticky. Do the authors have any comments on this?

Figure 10: The legend here could be clearer. Maybe it’s better to use colored lines instead of different patterns. It could also be that it wasn’t very clear because the image included in the review was low-resolution.
Line 654: Minor typo: missing a “to” in “compared neutral/non-sticky”.

General: Would it be possible to have a table or a description of the best-fitting models for each analysis conducted?

General: Did the authors measure and check the response times to the thought-probes? It sometimes happens that some participants start to respond very quickly to those probes (because they want to finish the experiment faster). Such fast responses should be discarded, as it is debatable whether participants can introspect and report on their previous mental states so quickly.

Reviewer #2: Thank you for inviting me to review this paper. I thought it was generally well-written and easily understandable. The concept of sticky thought also seems important and relevant to the field of thought in general. For the most part, I think the methods, procedures, and results were described well. At the same time, I have some significant concerns that I think should be addressed before publication. Many of the central issues (detailed below), which deal with conceptual framing, theoretical importance, and analytical choices, can potentially be addressed with a substantial revision. However, at least one of the main issues pertains to sample size (here only 34 participants were used); I believe the authors may want to consider a replication or additional data collection.

One of the main concerns I have is the limited literature covered in the Introduction. Although I think the authors have written concisely, key literature is missing. For example, an expanded discussion on the relationship between sticky thought, possibly negative valence, and arousal might help; it is only briefly mentioned now. Part of this relates to the authors’ choice not to make a hypothesis about the direction of sticky thought with baseline activity. This choice is completely fine, but the review of relevant concepts is still missing to make this case. There are many studies regarding rumination, etc.
More directly related may be Christoff et al.’s (2016) paper, which explicitly discusses concepts related to sticky thought, and some of Smallwood’s multi-dimensional experience sampling studies may also prove useful.

I also found the discussion section to be somewhat speculative, without many clear theoretical implications. The authors attempt to explain some of their results with relatively shallow explanations – e.g., why they did not replicate past studies and why sticky thought was not influenced by personal concerns.

Sample size is a major concern given that there is no a priori power analysis mentioned. The authors cite a previous study on sticky thought which may have been used to estimate an effect size. This field typically sees small (to medium) effect sizes, and these are not necessarily addressed by using lmer. The authors do mention computing BF in the discussion for one of the results that did not replicate, but I do not think this necessarily justifies the sample size for all the various relationships tested, given the relatively low effect sizes seen for mind wandering (and other related dimensions) and performance in the literature.

One of the other main concerns is the validity of measuring sticky thought. Figure 2 and the in-text wording do not match. How were participants trained/instructed about this question? The authors also note in the discussion that people may not be able to discern different levels of sticky thought using their method. While I appreciate the honesty, it seems like it could be a problem with the study design/materials used, since the research question was not to test a measure of sticky thought but rather the findings/interpretation depend on a reliable measure.

I was surprised to see that the authors chose to treat the sticky thoughts as categorical from the outset, and also by their choice to bin into three categories. There also did not appear to be a consideration for the responses in other categories as part of their models.
Moreover, although the authors did not dichotomize, Seli, Beaty, et al. recently made a case to avoid binning thought dimensions because it can artificially inflate rates. Relatedly, binning also increases the number of comparisons made. In general, there were a large number of tests computed here. Was the number of tests considered when calculating significance?

During the statistical analyses section and at the outset of the results section, the authors mention time/timeseries as an important factor to consider. Based on how this shaped their entire analytical approach, I think it might be factored into the Introduction and theoretical motivation a bit earlier.

The authors also bring up their goal to assess whether they could induce sticky thought, as if this was one of their main questions. However, the paper was not framed to address this as a main question from the Introduction, so I suggest making this more explicit from the outset.

Please report descriptive statistics for all variables. Please also provide model data on all models constructed in a table and mention them in the text. For example, the model comparison for sticky vs non-sticky in terms of task-evoked pupil size is not mentioned.

Why were go and no-go trials analyzed separately?

The authors also do not mention other related metrics assessing dynamic measures of thought, such as freely-moving thought. The authors may also consider making a point about the fact that sticky thought appears to be different from task-unrelated thought, making it an important dimension to study.

Figure 1’s caption mentions performance but it is not in the figure.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Mahiko Konishi
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
|
| Revision 1 |
|
PONE-D-19-32012R1

Captivated by thought: "sticky" thinking leaves traces of perceptual decoupling in task-evoked pupil size

PLOS ONE

Dear Dr. Huijser,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you can see below, both reviewers positively evaluated your revised manuscript. Reviewer 2 still has a couple of important clarification questions about conceptual, methodological, and analytical details. I would like to invite a revision that addresses these points. If you address these points comprehensively in your revision, the manuscript should be acceptable for publication. Note, however, that the final decision of course depends on the quality and clarity of the invited revision.

Please submit your revised manuscript by Nov 22 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Myrthe Faber
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: All my previous comments have been addressed in a satisfactory manner by the authors. Congratulations on an interesting paper!

Reviewer #2: Thank you for the opportunity to review the revised manuscript. In general, I am supportive of this paper and think the authors have made substantial progress. At the same time, there are still unresolved issues that need more attention and clarification before recommending publication. More details are still needed on the choice to group sticky thought into three categories:
1) What were the distributions that made the authors choose to group this way?
2) Can the authors confirm the model treated these as categorical rather than continuous?
3) The way the models are currently reported leaves additional aspects unclear. For example, if they were categorical, a single chi-square/p-value may not capture the full set of comparisons, because R defaults to a reference group for categorical variables, based on my understanding. Was the reference group sticky? More detailed model descriptions and results could help resolve this confusion, as mentioned in my previous review.

The revision has improved the paper, but there are still some unresolved issues with terminology. I suggest the authors avoid using the term “constrained mind wandering”, which may cause further confusion with respect to terminology. The papers cited for this term also do not use it, and they may actually argue against its use given their proposed definition (i.e. Christoff et al., 2016). Rather, constrained “thinking” might be more appropriate given these recent, and currently unresolved, debates. Moreover, the authors actually do assess what they call mind wandering in other places in a separate question, leading to further confusion. The last sentence of the first paragraph is confusing – i.e. the one that references the differences in their methods and theory of constraints.

I still think it is relevant to mention in the methods how participants were trained/instructed to use the scale. This is critical information. Perhaps the answer is none, which would still be important to know.

I also do not think including important model comparisons on OSF is entirely sufficient. For example, p-values and effect sizes are missing throughout some of the analyses. Some additional models may also be critically important to the main results, such as sticky vs non-sticky thought for evoked pupil size. However, I do appreciate the inclusion of open materials.
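[Editor's illustration of the reference-group point above: under default treatment (dummy) coding, one level of a categorical predictor is dropped and becomes the baseline, so each reported coefficient is a contrast against that reference level rather than a full set of pairwise comparisons. The sketch below uses invented three-level stickiness labels purely for illustration; it is not the authors' data or analysis code.]

```python
import pandas as pd

# Hypothetical probe responses binned into three stickiness categories.
thought = pd.Series(["sticky", "non-sticky", "neutral", "sticky"],
                    dtype="category")

# With an inferred categorical dtype, levels are ordered alphabetically
# (as in R's default factor behavior). drop_first=True drops the first
# level ("neutral"), which therefore acts as the reference group: the
# remaining columns encode contrasts against it.
design = pd.get_dummies(thought, drop_first=True)
print(list(design.columns))  # the dropped level is the implicit baseline
```

Because "neutral" is absorbed into the intercept here, a sticky vs non-sticky contrast would need either releveling or an explicit post-hoc comparison, which is exactly why the reviewer asks which level served as the reference.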
Why were only correct trials analyzed?

The authors have added some descriptive statistics. I suggest a table with the M and SD for all key variables, which are still missing.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Mahiko Konishi
Reviewer #2: No |
| Revision 2 |
|
Captivated by thought: "sticky" thinking leaves traces of perceptual decoupling in task-evoked pupil size

PONE-D-19-32012R2

Dear Dr. Huijser,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Myrthe Faber
Academic Editor
PLOS ONE |
| Formally Accepted |
|
PONE-D-19-32012R2

Captivated by thought: “sticky” thinking leaves traces of perceptual decoupling in task-evoked pupil size

Dear Dr. Huijser:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Myrthe Faber
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.