Peer Review History
| Original Submission July 14, 2021 |
|---|
|
PONE-D-21-22853
Brains Over Beauty: A Preregistered Test of the Effects of Objectification on Women’s Cognitive Performance
PLOS ONE

Dear Dr. Zola, thank you for submitting your manuscript to PLOS ONE. After careful consideration by several experts in the field with (you will see) somewhat disparate opinions, we feel that it has merit but, as it currently stands, has to be improved in order to fully meet PLOS ONE’s publication criteria. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please make sure to respond carefully, point by point, to the concerns raised by the reviewers, paying particular attention to the technical aspects of your work. Also, please explain and discuss any differences between your research and the original study by Gay and Castano, as mentioned by Reviewer 1. Please submit your revised manuscript within six months of this date, as thereafter any revision will be considered a new submission. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Thank you for choosing PLOS ONE for reporting your research.

Kind regards,
Sasha
Alexander N. 'Sasha' Sokolov, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 2. 
Thank you for stating in your Funding Statement: “This research was supported by grants from the Northwestern University Undergraduate Research Grants Program (awarded to AZ). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.” Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now. Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement. Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf. 3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: No Reviewer #2: Yes Reviewer #3: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 3. 
Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The authors correctly point to the fact that the experiments testing the effects of objectification have been typically underpowered, and that this may account for some inconsistent results. They also suggest that in these studies the use of covariates and of moderating variables might have been too ad-hoc, and "after the fact." Considering that the authors have no evidence of any of this, I found their statement in this regard (p. 11) unwarranted. 
This is a small detail, however. To correct all this, they set out to conduct a well-powered, pre-registered experiment, with a three-level condition intended to test the effect of male gaze on the cognitive performance of women. I truly applaud their systematic approach, the effort, and the fact that they pre-registered the study and shared all the data. The experiment, however, suffers from three major methodological flaws that make the results unpublishable. 1. Previous research, discussed by these authors, finds that the effect of an objectification manipulation is likely moderated by trait self-objectification. Objectification scholars clearly theorize this, as the authors themselves write on page 12: "As noted, self-objectification can be conceptualized as both a state and trait variable (Noll & Fredrickson, 1998), and therefore, in line with original theory, women who have the tendency to self-objectify could have a greater chance of being distracted by body monitoring." Yet, the authors decide NOT to take any trait measure of self-objectification. This alone could be the reason why they did not find effects of their manipulation (moderated by TSO, of course). 2. The authors seem to model their manipulation after the study by Gay and Castano, in which the above-mentioned interaction was found. Yet, instead of having women in the male-gaze condition filmed by a man, they only tell them that the video might later be viewed by a man. Why the authors chose to water down a manipulation that not only had worked in the past but is also a direct operationalization of the theory of objectification, with excellent ecological validity, is puzzling to me. 3. Previous research also shows that the task must be somewhat challenging for the effects of objectification to emerge. 
This is true, by the way, also for other social cognition research that does not deal with objectification: If it is too easy, you won't find the effect of the manipulation because participants can complete the task even when they are cognitively depleted. Yet, here too, the authors make the puzzling choice of selecting 20 very easy items: range 0-20, M = 16.33. Reviewer #2: Thank you for the opportunity to review, “Brains over beauty: A pre-registered test of the effects of objectification on women’s cognitive performance.” The authors should be commended for conducting work that is pre-registered and adds clarity not only to an important tenet of objectification theory, but also to the literature that has provided mixed results in terms of the relation between women’s self-objectification and cognitive performance. I also would like to say that I thought this manuscript was very well-written and easy to follow, from the literature review to coverage of the results. Below I outline a few points that I believe could make this an even stronger manuscript. I appreciated that the authors took time to outline all previous work conducted on self-objectification and cognitive performance. I found myself trying to trace citations to determine whether work that found significant effects was one and the same with work that was potentially underpowered. This made me think that including a table of all papers, with columns of all relevant pieces of information (e.g., manipulation, sample size, findings), could make for an easier examination of inconsistencies in past findings and possible reasons why (e.g., which seven of the studies were likely underpowered?). Given that the Guizzo paper assessed the closest concept to the current work - sustained attention - can the authors elaborate on what this task looked like? Could you also clarify whether Guizzo’s moderator of internalized beauty ideals was a conceptualization of trait-level self-objectification? 
My only true concern with this manuscript is that it seems the authors may have missed some key citations. For instance, Winn & Cornelius have a 2020 review of the literature on the topic, which I think if referenced would provide further justification for the previous mixed results and need for clarity. Additionally, a few other studies came to my mind that examine the relation between self-objectification and cognitive performance (broadly defined): Baldissari & Andrighetto (2021), Aubrey & Gerding (2015), and the sexual harassment program of work from Gervais & Wiener. While the authors may have purposely not included these articles in their literature review, it’d be helpful to understand how they came to the conclusion that there are only 9 studies on the topic. In the intro, the authors mention that much previous work includes covariates in analyses. While true, I think it is an inappropriate assumption that this is a post-hoc decision. It’s true we cannot know without pre-registration, but many of the covariates are supported by research or theory, so I feel like this language should be tempered. In the authors’ current stance, it seems as if they think this is always an unacceptable practice, but I’m not sure I agree. Fredrickson & Roberts suggest that not all women will respond to instances of objectification in the same manner. This is also most likely true of experiences that prompt self-objectification, whether intra- or interpersonally - women high in trait self-objectification are likely to engage in higher levels of self-objectification than those lower in this trait. As a result, I was surprised when the authors revealed the correlations between state body surveillance and performance without taking condition into consideration. Because women in the control condition did not experience any environmental prompt to self-objectify, body surveillance measured among these participants would likely be akin to a trait level of self-objectification. 
Although not pre-registered, I would like to see these additional analyses at least in a footnote. One issue I have with how researchers discuss manipulations involving objectification is a lack of specificity. While many claim to manipulate self-objectification, they are actually manipulating interpersonal objectification, from which increased self-objectification levels ensue. I think when discussing the manipulations used, this paper (and the literature more broadly) may benefit from a bit more specificity here in terms of what is truly being manipulated. Within the discussion, I’d like to see more take-away points from the current work. Specifically, while the authors contrast their findings with those of past work, how do these findings inform objectification theory? Moreover, beyond calls for replication and power analyses, do these findings have implications for how objectification researchers conduct their work (e.g., in a literature with such varied ways of manipulating objectification, are the two manipulations in the current work interchangeable?)? Smaller details: I felt the abstract could have used a bit more specificity as well as a bit more elaboration about the implication of the findings. While the authors note that the sample in the current work may have differed from past samples in terms of math abilities, the current sample seems demographically similar to past samples in terms of age and race. I think this similarity is worth mentioning. I also think it’s worth mentioning in the introduction that affect was included because of previous findings. Because the authors critically analyzed sample sizes of past work, could you benefit future work by providing the number of participants per condition after exclusions? I hope this feedback is helpful to the authors and I look forward to seeing this paper in its final state! 
Reviewer #3: The paper “Brains over Beauty” tested, using a more rigorous methodology than that used in previous research, whether the induction of state self-objectification interferes with women’s cognitive performance. The paper has several advantages over past research in that it (a) used a sufficiently powered sample, (b) examined two different types of manipulations to induce self-objectification, and (c) used a dependent variable that is unlikely to be influenced by stereotype threat (which could be an alternative explanation to the effect(s) observed in several of the previous studies). Because of these strengths, and because finding out what *does* not work despite being theoretically plausible is highly important for scientific advancement (see Eronen & Bringmann, 2021), I think that the paper makes an important contribution to the literature on women’s sexual objectification. Nevertheless, I identified several weaknesses, which I list below. 1. In my reading of objectification theory, the original 1997 paper by Fredrickson and Roberts aimed to explain (through the concept of self-objectification) why women experience higher rates of unipolar depression, eating disorders, and sexual dysfunctions as compared to men. The idea that self-objectification should interfere with cognitive performance, tested in the 1998 paper by Fredrickson et al., is an extension of the original theory (as put forward in the 1997 paper). I think that the literature review in the present paper should reflect this (unless the authors have a different view on how objectification theory evolved, and if so – perhaps they can explain their view, at least in the response letter). 2. When you discuss the research by Quinn et al. (2006) please note that there is a major flaw in how the DV (performance in a Stroop test) was calculated in this study (instead of looking at participants’ interference score, the authors looked at the overall reaction times without comparing congruent vs. 
incongruent trials – which is NOT how performance in a Stroop test should be calculated). 3. I think that it will be very helpful to the readers if you could add a table summarizing the main findings of experimental research on the effects of state self-objectification (SSO) on cognitive performance. That is, for each study, mention (in separate columns): How was SSO manipulated? What was the DV? What was the main finding(s)? Was the study sufficiently powered? This information appears in the text, but it will be much more convenient for readers to have it all briefly concentrated in one place (in addition to the longer description that currently appears in the text). 4. The term “stereotype threat” is mentioned but not defined. I suggest briefly explaining what it means (as you cannot assume that all readers are familiar with this literature). 5. To the best of my knowledge, it is recommended to quantify the evidence in favor of the null hypothesis (which seems to be supported in the present study) using Bayesian hypothesis testing (e.g., Wagenmakers et al., 2018). 6. On p. 18 there is a link to the test that participants had to solve, but it took me some time to find in which document it is located (because the link brings the reader to a list of 9 documents). So, you could say something like “see here (in the document titled MATH appendixes) for the list of problems”. 7. I wonder whether the manipulation in the “male gaze” condition could be strengthened by leading participants to believe that they are going to meet with a man who saw their video (taped from the neck down). 8. I think that the fact that the authors used a simple (rather than a difficult) math task is an advantage of the present study. In my understanding, there is no theoretical basis for the prediction that the effects of SSO should appear for difficult rather than simple tasks. In other words, I don’t agree with the authors’ suggestion, on p. 
27 in the GD, that “objectification may have a greater impact on difficult compared to easy tasks,” and that this possibility should be tested in future research. The use of a simple math task, as done by the authors, is correct. However, I do think that a measure of participants’ pre-existing math ability/performance or at least their math identification should have been included and controlled for. I understand that the random assignment to experimental conditions should “take care” of any pre-existing differences in math ability. Nevertheless, the results would be more convincing if the authors could show that they persist even when controlling for pre-existing differences in math performance and/or identification. This is because there are huge interpersonal differences in these variables, which can create a lot of “statistical noise”. 9. On p. 28, you refer to Fredrickson et al.’s (1998) study as if participants were “completing an advanced math test in a room alone while wearing a bathing suit”. To the best of my memory, participants in this study first tried on a bathing suit in front of a mirror, but then completed the math test wearing their regular clothes. Please double-check this to verify that the information is correct. 10. In the GD, you discuss trait self-objectification (TSO) as a potential moderator. I think that not testing for moderation by TSO is a major limitation of the present study. The authors explain the choice not to measure TSO prior to the experimental manipulation by noting that they didn’t want to prime this concept. I agree with this explanation, but I think that at the very least they should mention that future studies can overcome this problem by splitting the study into two parts, such that in the first part the potential moderators (including TSO) are measured, and in the second part (which can be held one week later, or so) the experimental manipulation is induced and the DVs are measured (as done by Kahalon et al., 2018). 11. 
Another potential moderator that can be proposed in the GD (to be tested in the future) is women’s enjoyment of their sexualization (e.g., Liss et al., 2011). It makes sense that negative effects on mood would be observed for women who are low on this measure, but not for women who enjoy being admired by men. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
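An editorial aside on Reviewer #3's methodological point about Quinn et al. (2006): the criticism is that a Stroop DV must be an interference score (incongruent minus congruent reaction time), not overall reaction time. A minimal sketch of the distinction, using entirely hypothetical trial data (none of these numbers come from the studies under review):

```python
# Hypothetical per-trial Stroop data: (trial type, reaction time in ms).
# Illustrates Reviewer #3's point; the numbers are invented for this sketch.
trials = [
    ("congruent", 520), ("incongruent", 610),
    ("congruent", 540), ("incongruent", 650),
    ("congruent", 500), ("incongruent", 630),
]

def mean_rt(trials, condition):
    """Mean reaction time for one trial type."""
    rts = [rt for cond, rt in trials if cond == condition]
    return sum(rts) / len(rts)

# Interference score: the slowing caused specifically by conflicting
# color words -- the standard index of attentional conflict.
interference = mean_rt(trials, "incongruent") - mean_rt(trials, "congruent")

# Overall mean RT (the DV Reviewer #3 says Quinn et al. used) conflates
# baseline speed with interference, so it cannot isolate the conflict effect.
overall_rt = sum(rt for _, rt in trials) / len(trials)

print(interference)  # 110.0
print(overall_rt)    # 575.0
```

Two participants with identical interference scores can have very different overall RTs (one is simply slower across the board), which is why raw RTs are not a valid measure of "allocation of attentional resources."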
| Revision 1 |
|
PONE-D-21-22853R1
Brains Over Beauty: A Preregistered Test of the Effects of Objectification on Women’s Cognitive Performance
PLOS ONE

Dear Dr. Zola, thank you for submitting your revised manuscript to PLOS ONE. Now, as the revised version has been assessed by three experts in the field, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses all the technical and methodological points raised by Reviewer 1. Please make sure to respond carefully to the Reviewer's comments in a point-by-point manner. Please submit your revised manuscript within six months of this date, as thereafter any revision will be regarded as a new submission. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. Thank you for choosing PLOS ONE for reporting your work. We look forward to receiving your revised manuscript.

Kind regards,
Sasha
Alexander N. 'Sasha' Sokolov, Ph.D.
Academic Editor
PLOS ONE

[Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: (No Response) Reviewer #2: All comments have been addressed Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. 
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: No Reviewer #2: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: I had reviewed this manuscript approximately six months ago, and while I applauded the attempt to provide better-powered studies of this important question, I pointed to three major flaws. In this new version, none of these have been addressed. 1. In response to the first flaw, the authors write "Although moderators can certainly be important to consider when testing theoretically derived hypotheses, our reading of the published literature on this topic does not lead us to conclude that TSO is an essential moderator when it comes to the effect of objectification on cognitive performance." Such a reading is inaccurate. The fact that the authors go on to detail the studies that have and have not looked at the moderating effects of TSO does not add anything to what I think is an insufficient answer. The whole point of the work that is being reviewed here is to provide better empirical tests and to clarify outstanding issues, in the sense of mixed findings. All the literature on objectification points to the importance of predispositions; why not include such measures? Especially since they model their study after the study by Gay and Castano, which finds the interaction effect that is predicted by objectification theory. I was and remain puzzled by their choice. 2. In response to this point, the authors write "...instead of having a male vs. female experimenter video record participants, we opted to use an anticipated male gaze manipulation as in Calogero et al. (2004), as Calogero et al. found that anticipating a male gaze increased social physique anxiety (a construct that overlaps with body surveillance). 
Across two studies, Gay and Castano did not include a manipulation check (i.e., a measure of body surveillance/state self-objectification), so it is difficult to evaluate the extent to which their precise manipulation worked – especially since they found no main effects of their male vs. female experimenter manipulation. Gay and Castano did report some statistically significant effects for tests of two-way interactions and a three-way interaction (with TSO and item difficulty), but one interaction suggested that the female experimenter (vs. the male experimenter) had a more negative effect on latency. Given the very low sample sizes in Gay and Castano (25 participants in one study and 50 in another), it is difficult to know what to make of these interactions and the likelihood of false positive findings in these analyses appears high." Again, the immediate question is WHY? Calogero et al. used anticipated male gaze, but they were looking at anxiety, which is by definition a response to something that MAY or WILL happen. The focus of this experiment is not the same. The comment regarding the lack of a manipulation check in the experiments by Gay and Castano is not relevant. The present experiment was set up to IMPROVE upon previous studies, including that by Gay and Castano. This is clear from the beginning, but it should not be used as a justification for using a manipulation that is inappropriate given the goal of the experiment. Finally, the comment "especially since they found no main effects of their male vs. female experimenter manipulation" is bizarre, since it is an interaction of the manipulation with TSO that objectification theory predicts (see #1). 3. I find the response to the issue of item difficulty also unsatisfactory. Easy items are easy items. Given previous findings, one would expect that items with varying difficulty be chosen, so as to be able to add difficulty as a predictor. 
In fact, in the study by Gay and Castano, this was done in both experiments, and in both experiments the pattern of results indicates clearly that difficulty needs to be taken into consideration. In this experiment, the authors, who allegedly aim to provide a sounder test than what was done in previous experiments, not only do not vary item difficulty but choose only simple items. Puzzling. In addition, I re-read the procedure carefully, and I find the setup quite strange and extremely artificial: participants had to stand at a specific place, use an iPad, and write their answers, on a whiteboard, to questions they heard through an audio recording. At the very least, this procedure makes the response latency data unusable. Experiments measuring latency record item-by-item latencies through purpose-built software. And this was done in the experiments on objectification as well, to which the authors refer. Why use such an unreliable procedure that is all but guaranteed to provide inaccurate latencies? In conclusion, the authors are rightly interested in testing these hypotheses using the new standards that, in the last 10 years or so, researchers have agreed upon to improve the quality of our research and thus confidence in the results that are published. The study presented here, however, suffers from major, fatal flaws. The authors could and should have done better, precisely because there is so much previous research that informs their current choices. In my view, an article that is predicated upon improving the quality of experimental tests of hypotheses derived from objectification theory should carefully test these hypotheses, building on the strengths of, and insights gained from, the very previous experiments that it wants to improve upon. This experiment does not. Reviewer #2: Thank you for the opportunity to once again review this work. 
I think the authors did a wonderful job addressing the concerns outlined by me and the other two reviewers. I look forward to seeing this paper in press! Reviewer #3: The authors carefully addressed all the issues raised by the reviewers (including myself). In my judgement, the paper is ready to be published, and it has the potential to advance current understanding of the effects of self-objectification on women's cognitive performance. I have only one minor comment: Although they use this term, Quinn et al.'s (2006) study did not implement "a modified version" of the Stroop test. Rather, it used a flawed calculation of the DV (looking at participants' reaction times, instead of the interference score, is simply a wrong way to evaluate their "allocation of attentional resources"). Eronen and Bringmann (2022) argue that one of the barriers to the advancement of our field is that even when findings are demonstrated to be flawed and/or non-replicable, scholars continue to cite them. I therefore recommend mentioning, maybe in a footnote, or in Table 1 (which summarizes what was done in previous research), that the conclusions derived from Quinn et al.'s (2006) study are questionable. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". 
If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 2 |
|
Brains Over Beauty: A Preregistered Test of the Effects of Objectification on Women’s Cognitive Performance PONE-D-21-22853R2 Dear Dr. Zola, We’re pleased to inform you that your manuscript has been judged suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Thank you for choosing PLOS ONE for reporting your work. Kind regards, Sasha Alexander N. 'Sasha' Sokolov, Ph.D. Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: |
| Formally Accepted |
|
PONE-D-21-22853R2 Brains Over Beauty: A Preregistered Test of the Effects of Objectification on Women’s Cognitive Performance Dear Dr. Zola: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Alexander N. Sokolov Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.