Peer Review History
| Original Submission: February 8, 2023 |
|---|
|
PONE-D-23-03662
Subjective and objective measures of visual awareness converge
PLOS ONE

Dear Dr. Kiefer,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Editor comments: Two reviewers commented on your manuscript and I have read the manuscript myself. As you can see from their comments, both referees are generally positive, though both raise some concerns about the conclusion ultimately drawn from the null effect of response mode (objective vs. subjective measures) on threshold, as it is based on a single experiment and may require replication across larger samples or a more effective experimental variation. They also have some reservations about the methodological approach to estimating thresholds from both measures of awareness and the analysis of the psychometric functions, suggesting a more detailed description in the methods section to clarify some aspects. Based on these comments and my own reading, I advise you to prepare a revision of the manuscript, to provide more details and justifications in the methods section, and to clarify any vague statements. Additional empirical data would be fine (as it would increase the impact of this already fine work), though I would not strictly demand it.

Please submit your revised manuscript by May 29, 2023, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Michael B. Steinborn, PhD
Section Editor
PLOS ONE

Journal requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please include your full ethics statement in the 'Methods' section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors report one study, in which they investigated whether objective and subjective measures of visual awareness differ or not. They used two different tasks, a detection and a discrimination task, and analysed objective (accuracy) and subjective (PAS) measures by means of psychometric analyses. The results show very similar detection and comparison thresholds for objective and subjective measures. This study tackles a timely and important issue in consciousness research. The manuscript is well written and the results seem clear.

My only concern with this study is that the results are dependent on how the thresholds are defined. For the objective accuracy this is clear; however, for the subjective case (PAS) it seems not so clear to me. The authors used the PAS levels 1.5 and 2.5 to determine the detection and the discrimination thresholds, respectively. I can intuitively follow this approach, yet I am wondering whether the PAS levels 2 and 3 wouldn't also make sense. Importantly, such thresholds would completely change the result pattern (higher thresholds for PAS than for accuracy) and also the conclusions of the study (in favour of a lag between subjective awareness and objective performance). I think the authors should clarify why these levels were chosen for the thresholds and why other levels are considered inappropriate.

Minor points

- Power analysis: the power analysis is based on d, even though the effect of interest is a 2 x 2 within-subject interaction. There is a paper by Langenberg et al. (2022), which highlights that the standard conversion of effect sizes (large effect: ηp² = .14 and dz = .80) is not appropriate for within-subject designs. For example, if one wants to find an interaction effect with ηp² = .14 (a large effect according to Cohen), the conversion to dz would yield dz = .40 (https://statsbyrandolph.psychologie.uni-bremen.de/Power/powerANOVA_gui). I am not sure how this problem could be solved (maybe new conventions need to be established), but it is probably better in repeated measures designs to have a desired ηp² (instead of dz) for an a priori power analysis.
- Results: Please report p-values in APA-conform format in all cases.
- Please check the manuscript for typos.

Literature

Langenberg, B., Janczyk, M., Koob, V., Kliegl, R., & Mayer, A. (2022). A tutorial on using the paired t test for power calculations in repeated measures ANOVA with interactions. Behavior Research Methods. https://doi.org/10.3758/s13428-022-01902-8

Reviewer #2: Review of PONE-D-23-03662, Subjective and objective measures of visual awareness converge

This manuscript reports the data of a single experiment in which participants had to detect masked words vs. identify the case of masked words with varying word-mask SOA. In addition to the task, also the contrast of the words was varied. Accuracy in the tasks served as an objective measure of visual awareness, and additional Perceptual Awareness Scale (PAS) ratings served as a subjective measure of awareness. The authors derived awareness thresholds from both measures. Both measures of awareness varied similarly with the task demands (detection vs.
identification) and contrast. The central conclusion of the authors is based on a null effect of response mode (objective vs. subjective) on threshold – that both measures yield converging estimates of perceptual awareness.

In general, the approach of estimating thresholds from both measures of awareness so that they can be compared directly seems quite smart and elegant. Especially with the present set of data, the conclusion that objective and subjective measures of awareness converge might be a bit overstated. However, I think that this study actually adds something useful to the debate of how to optimally measure subjective awareness (and avoid some common mistakes that have been outlined recently, e.g. Schmidt, 2015). Specifically, I do have several reservations about the manuscript, mostly pertaining to the methodology and to the conclusions.

1. While I find the fact that both thresholds vary to a very similar degree with task and contrast to be quite interesting, I am not as confident about the lack of an effect of "response mode", as it is basically a null effect. To be more convinced of this finding, I would like to see replication across larger samples and/or more manipulations of the stimuli. This is especially so in the light of previous findings with similar methodology that observed a lag between the measures, thus leading to very different conclusions (Sandberg et al., 2011). The authors' suggestion that these differences might be due to the different task designs (e.g. the use of a temporal 2AFC task here) could be empirically validated.

2. Even though the authors already discuss this, it still is difficult for me to understand why the pronounced dissociation in the spread of the psychometric functions for both measures (interaction of response mode and contrast) would not invalidate the conclusion that both measures reflect converging measures of awareness. Especially the authors' point that "psychometric functions were fitted to the frequency distribution of the four discrete response categories compared to the continuous accuracy distribution in objective measurements, which could have affected their slope" (p. 22) is too vague. If this is true, I would suspect that this might also have affected the threshold measurements, or at least their stability. I think there should be some reference or proof of concept that validates this specific estimation procedure.

3. Relatedly, the authors suggest that an important factor for the diverging results might be the employment of a temporal 2AFC task because it is "free of bias". Actually, this is not completely true: the data from a 2AFC task may still include biases (e.g., preferring one response or interval over the other), but these get balanced out when averaging across the trials in which the correct target is presented in the first or the second interval. Crucially, the presence of such a "concealed" bias will affect (typically artificially enlarge) estimates of the spread of the psychometric function. It is therefore advisable to analyse data from both trial types (target in first interval and target in second interval) separately – or jointly, by taking the trial types into account (cf. Ulrich and Vorberg, 2009; Ulrich, 2010).

4. One potential complication is that both measures of awareness were assessed in the same trials (and always in the same order). I suspect that PAS ratings might, at least to some degree, be biased by the "objective" responses given directly before, e.g., by their ease, or by some monitoring process.
I wonder – and this is an additional experiment that I really would like to see – whether the thresholds would still converge as well if the objective and subjective measures were collected in different blocks of the experiment.

5. Relatedly, is it warranted to enter thresholds derived from the two different measures (here: response modes) into the same ANOVA? Especially because these were collected in the same trials and thus might not be independent.

6. Even though I found the manuscript sometimes a bit lengthy and slightly repetitive, in other parts (especially the methods section) it seems to lack detail. For example, I did not find the rationale for why three runs of the adaptive SOAs were used – and which color in Fig. 2 refers to which type of run? Why are the runs depicted in Fig. 2 of different lengths? Why did the authors decide to use an adaptive procedure rather than the method of constant stimuli, which would provide equal numbers of trials for the different SOA durations? The word stimuli (in Fig. 1 it seems they are actually rather pseudowords?) should be described in more detail. Maybe most crucially, how was the training of the PAS actually administered? What was the criterion for using the scale correctly? Depending on the exact training procedure, such a training might very strongly prompt the participants to base their subjective ratings on objective performance (or at least the ease with which the decisions could be made), or maybe to form an association between the PAS categories and the SOAs.

7. On page 9 it is stated that a larger width of the psychometric function indicates a more gradual transition from unconscious to conscious. Could it also just indicate an abrupt transition which, however, varies on a trial-to-trial basis? Averaging across many such trials might also lead to a flattening of the psychometric function.

8. I think the full ANOVA results should be reported in the main text rather than as supplementary material, as all possible interactions are relevant to the interpretation of the results.

9. I hope I did not overlook it, but the OSF repository only seems to include the aggregated data (parameters of the fitted function). Shouldn't the raw data also be included according to the PLOS standards?

10. Some of the highly relevant empirical background could be described in more detail in the Introduction. For example, it would help if the details of the Sandberg study (and its differences from the present study) were described more clearly. Not being an expert in the field of perceptual awareness, it was difficult for me to figure out what exactly is new in this study compared to previous ones. Given the more general scope of PLOS, I feel that many readers might encounter a similar problem.

Sandberg, K., Bibby, B. M., Timmermans, B., Cleeremans, A., & Overgaard, M. (2011). Measuring consciousness: Task accuracy and awareness as sigmoid functions of stimulus duration. Consciousness and Cognition, 20(4), 1659-1675.

Schmidt, T. (2015). Invisible stimuli, implicit thresholds: Why invisibility judgments cannot be interpreted in isolation. Advances in Cognitive Psychology, 11(2), 31.

Ulrich, R., & Vorberg, D. (2009). Estimating the difference limen in 2AFC tasks: Pitfalls and improved estimators. Attention, Perception, & Psychophysics, 71(6), 1219-1227.

Ulrich, R. (2010). DLs in reminder and 2AFC tasks: Data and models. Attention, Perception, & Psychophysics, 72(4), 1179-1198.

**********

6.
PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 1 |
|
PONE-D-23-03662R1
Subjective and objective measures of visual awareness converge
PLOS ONE

Dear Dr. Kiefer,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Editor comments: Both reviewers have commented on the revised manuscript, noting considerable improvements. R1 advocates for publication in its current state, whereas R2 has remaining issues and insists on additional evidence substantiating the equivalence of subjective (introspective) and objective measures. Although I am not an authority in your field, I find the main proposal of R2's critique compelling. While additional empirical evidence would clearly fortify your manuscript, I also recognize that your work is already robust in its present form. Therefore, I suggest that you carefully consider R2's final remarks and determine to what extent they should be addressed, either through further evidence or through bolstering the existing arguments. I anticipate that no additional review rounds will be necessary, and I shall make a decision following the resubmission of the manuscript. Below I have some more detailed comments of my own that are aimed to help you in the final preparation of the manuscript.

Please submit your revised manuscript by Oct 27, 2023, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Michael B. Steinborn, PhD
Section Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

Firstly, I wish to clarify that my comments are intended to enhance the quality of the manuscript. While I am not the foremost expert in your specific field, my view may offer a scholarly perspective from slightly outside your domain, that is, from the view of an interested reader who stands as a proxy for a broader audience. I do not expect you to adhere strictly to my remarks; however, you may find certain points to be of use or value, and that is exactly my intent.

(-1-) Overly lengthy discussion

Although the manuscript is generally well-written and concise, this ceases to be the case at the onset of the discussion section. While I find the discussion to be insightful and thoughtfully debated, it is somewhat lengthy and, in my estimation, overly focused on technical aspects. Consequently, my recommendation would be to revise and streamline the discussion section to make it more succinct.

(-2-) Objective and subjective measures are different processes

While I am not an expert in the field, it appears to me that the question of whether objective and subjective measures represent different processes could potentially be addressed a priori through meticulous analysis of the necessary components and characteristics of the process in question. To name an example of what I mean, I suggest some specific literature that did such an analysis of the relationship between introspective and performative outcomes (doi:10.3389/fpsyg.2022.867978, chap. 4.3, 4.4, 4.5). Empirical verification in this context may be quite challenging, given that it relies on the design of the experiment and the specific factors manipulated. In the case of a temporal 2-AFC task, for instance, there are crucial factors that naturally influence the awareness threshold. At this juncture, I would expect a more comprehensive analytical exploration of the matter.

(-3-) Subjective and objective measures within the same trial

Your arguments acknowledging a potential criticism, namely that the convergence of measures might be due to the fact that both subjective and objective measurements were collected within the same trial, could be reconsidered.
I would clearly agree with the authors' pointing out that previous research, which also collected data within the same trial, did not find such convergence, and that in this way the present research corroborates previous findings; this could be an argument that no further data collection is needed to corroborate the present finding. In my layman's perspective, however, I feel that this aspect of the design may indeed be crucial here. I name an example from another field: when introspecting on inattention in reaction-time series (see my aforementioned suggested review paper, around chap. 4.1 to 4.5), the self-ratings are typically given on probe trials, as performance is sensitive to the frequency of "asking introspectively". Maybe being in line with the other findings is enough, but perhaps you have some ideas of which design features could be improved in future studies to, in my layman's view, do better.

(-4-) Slopes for easy and complex tasks

One crucial observation the authors point out is that shallower slopes (i.e., wider widths) were associated with more complex tasks (e.g., discrimination vs. detection). This result supports the notion that the transition from unawareness to awareness may differ based on the complexity of the perceptual features being evaluated. This is an absolutely interesting and, in my view, crucial theoretical argument that merits stronger emphasis; at least that is my impulse. My immediate association is a recent work by Cao et al. (doi:10.1007/s00221-020-05861-4), who argued in the same direction, supporting the authors' theoretical proposal, namely that even the complexity of the rather motoric task or movement has a fundamentally altering effect on an individual's introspective judgment. I suggest giving this theoretical argument more elaboration in the final revision.

(-5-) Take-home message

Although I am not an expert, I must express some reservations regarding the final conclusions. While there is no doubt that the findings are the result of an outstanding empirical study, the theoretical conclusions, namely that the study challenges the traditional dichotomy between access and phenomenal consciousness, merit scrutiny. My scepticism arises from the belief that the design methodology (typically accepted within the field) may not be sufficiently critical to reveal these rather subtle differences, and this is exactly what future research should be more aware of.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)
Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: (No Response)
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Review of PLOS R1

Overall, I think the authors have revised this manuscript thoroughly and carefully. They have responded extensively to all of my previous comments. I think the additional explanations in the methods section help clarify the procedure. I like the newly added Fig 3, as it is very illustrative of the effects.

I am still not so strongly convinced by the null effect on which the main interpretation of the results rests, as the final sample was rather small (N = 20) and the power analysis was based on the assumption of a rather large effect size. In addition, the null effect is very unexpected based on the results of previous studies and thus stands quite isolated in the literature. The explanations of the authors for this divergence are plausible but purely speculative at this point. The novel analysis of the order effects does not exactly help, either, as the pattern of thresholds from the subjective ratings does not follow the objective thresholds, but given the small trial numbers I agree that this should not be strongly interpreted (and it is also questionable whether the order bias in the objective thresholds would be accessible to subjective report).

So I am still a bit ambivalent about this manuscript. On the one hand, I find the approach and the results interesting (and of course I am not opposed to publishing null results). On the other hand, I feel that the theoretical contribution of this manuscript would be so much stronger if the null results were replicated in a second, more highly powered experiment, or if the authors attempted to empirically explore the reasons for the divergence between their and previous results; so publishing the work at this stage feels a bit like a missed opportunity for a much higher-impact contribution.

I spotted a few typos and other minor points:

- P. 3: …on a four-point rating scales… -> scale
- P. 16: …we did not sufficiently collected data… -> collect a sufficient number of data points
- P. 16: to empirically confirm guessing rate of 2AFC -> a guessing rate for the 2AFC task
- P. 19: for low contrast condition compared to high contrast condition -> the low, the high
- P. 20: compared to discrimination task -> to the discrimination task
- P. 24: subjective ratings threshold were -> thresholds
- P. 25: we analyzing thresholds -> analyzed
- P. 30: could also test, whether the thresholds would still converge, if the… -> remove commas
- Fig 2: shouldn't the x-axis on the first column be labeled "trial" instead of "run #"?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 2 |
|
Subjective and objective measures of visual awareness converge PONE-D-23-03662R2 Dear Dr. Kiefer, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Michael B. Steinborn, PhD Section Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: |
| Formally Accepted |
|
PONE-D-23-03662R2 Subjective and objective measures of visual awareness converge Dear Dr. Kiefer: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Michael B. Steinborn Section Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.