Peer Review History
Original Submission: March 26, 2022
PONE-D-22-08989
Disclosing Political Partisanship Polarizes First Impressions of Faces
PLOS ONE

Dear Dr. Cassidy,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Aug 15 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org.

When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
- If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
- Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
- If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Peter Karl Jonason
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please note that according to our submission guidelines (http://journals.plos.org/plosone/s/submission-guidelines), outmoded terms and potentially stigmatizing labels should be changed to more current, acceptable terminology. To this effect, “Caucasian” should be changed to “white” or “of [Western] European descent” (as appropriate).

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent.
If consent was waived for your study, please include this information in your statement as well.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5.
Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The present work examines a critical question with important social implications: whether disclosing political affiliation impacts impressions drawn from faces. The authors present two studies that read as methodologically stringent and strong. Overall, I enjoyed reading the paper and think that this work is worth publishing eventually. However, I refrain from making this recommendation at this point without suggesting some major revisions, mostly concerning the analyses and the interpretation of the results. Below you can find a list of reasons.

Major issues

Theoretical

- The difference between the past work by Mallinas et al. (2018) and the present work is not clear enough. Why would examining the same question in a romantic relationship context be a “limitation”? Is there a reason why the observed effect (e.g., negative attitudes towards those of opposing ideology) may or may not generalize across different contexts? Perhaps the authors can highlight other procedural differences (consideration of the veracity of political ideology, changes in evaluations, etc.) between that past work and their own early on, as the current framing does not make these differences clear.

- The conceptualization of political identification becomes obscure at times. Sometimes the authors interpret identification in terms of extremity (see page 7, though page numbers are missing on the initial pages). But they measure identification in terms of the strength of affiliation, which is clearly different from extremity. I would suggest clarifying their definition of the concept and staying consistent throughout the paper.
Methods

- Some methodological details were missing, making it challenging for me to interpret the task in detail, especially in Experiment 1. The authors cited an OSF page for additional methods, but I could not locate any files about methods on that page. For instance:
  - Which software was used to program the studies?
  - How were picture pairs constructed in Experiment 1? Were the pairs randomly selected from lists of Republicans and Democrats for each participant, or were the pairs pre-determined and the same for all participants?
  - How did they test recognition of each face (familiarity) after the task in Experiment 1?
- Exclusion criteria were vague for both experiments. How did they determine “not following the task instructions”?

Results

- I think the most major issues concern the chosen analyses and the interpretation of the results.

1. The description of the models made it a bit difficult to comprehend. As far as I understood, in Experiment 1, the DV was a choice (binary; Republican = 0, Democrat = 1), and the trait of evaluation (competence, likability; how that was coded is unclear) was a predictor. Is that correct? How were the predictors coded specifically? Were they dummy coded, effect coded, etc.? How was political ideology standardized? Was it mean-centered? Specifying all of these either in the text or in the table would help with interpreting the results.

2. As the codings are not specified, I have trouble interpreting the tables. For instance, political ideology has a positive estimate on choice (the DV). It is mentioned that Democrat was coded as 1 and Republican as 0 when choosing. I am confused because, if higher ideology scores indicate more conservatism, were conservatives more likely to choose Democrats as more competent/likable overall? As the estimated marginal means suggest otherwise, there must be something I am missing here. The authors go straight to interpreting the post hoc tests without explaining the main tests here.
I believe the main analyses require explanation.

3. The motivation behind adding Trait as a predictor to the model is unclear. Did the authors have any predictions about trait differences? If competence and likability choices are strongly related (it seems so), it would be justifiable to average these scores and create a simpler model. Simplifying the model this way could also help with the convergence of random slopes (which were not included in the veracity analyses; perhaps that was because of a convergence issue?) and would also make a new (missing) model with perceived threat as another predictor possible and more interpretable (see the analysis suggested below in point 6).

4. The veracity analysis in Experiment 1 is left uninterpreted. What does the significant main effect of veracity in Table 3 mean? Does it mean that the option labeled as “Democrat” is evaluated as more competent overall (Democrat labeled as Democrat > Democrat labeled as Republican)?

5. The relevance of the partisan affiliation analyses to the study's main purpose was unclear. It is not surprising that ideology relates to relevant partisanship. I would move these analyses to the supplementary materials and not interpret them in terms of face perception, as face perception is not part of these models.

6. Partisan threat could have been relevant to the study's main question. Yet again, threat was analyzed separately from the face perception data, though the findings were interpreted in terms of its relevance to face perception. A direct test of partisan threat’s role in face choice is missing in both experiments.

7. Another standing question: were participants more likely to evaluate the faces of ingroup members (same ideology) as more likable/competent than those of outgroup members (other ideology) when ideology was not disclosed? The authors mention past work suggesting that in the introduction, but they do not report their findings on this question.
Veracity was only analyzed within the disclosed ideology condition. Also, the title of these analyses reads a bit misleading (“Characterizing partisan disclosure effect on face impressions by veracity”): the manipulation of disclosure (nondisclosed vs. disclosed) was not a predictor in the reported analyses (only disclosed conditions are included).

Discussion

- The authors speculate about the relative roles of partisan affiliation and perceived threat in face impressions. Again, their data should allow them to analyze such relative effects. Why are these analyses unavailable? Perhaps the studies are underpowered for such analyses, but if that is the case, the authors should at least comment on that. Then, I would recommend not interpreting the results in terms of their parallel to perceived threat (as there are no direct analyses, these interpretations are too speculative) or, more ideally, conducting a third study to test the relationship between perceived threat and face perception directly.

Typos

- p. 7: “would the complement”
- p. 21: “conservates”

Reviewer #2: In this article, the authors show that providing political identification information shifts initial evaluations (Experiment 1) and updating (Experiment 2) of explicit competence/likability ratings. In my review, I have tried to adhere to the following guideline of PLOS ONE: “Unlike many journals which attempt to use the peer review process to determine whether or not an article reaches the level of 'importance' required by a given journal, PLOS ONE uses peer review to determine whether a paper is technically rigorous and meets the scientific and ethical standard for inclusion in the published scientific record.” In my opinion, the present paper clearly meets the above standard of being technically rigorous and ethical. I go through each of the PLOS ONE criteria point by point, then conclude with some final thoughts.

1. The study presents the results of primary scientific research.

2.
Results reported have not been published elsewhere.

The present manuscript clearly meets the above two criteria.

3. Experiments, statistics, and other analyses are performed to a high technical standard and are described in sufficient detail.

The present manuscript appears to follow all best practices of complex mixed effects models, and I was encouraged that they used random effects for both subject and stimuli (as suggested by the recent literature that they cite). I have no doubts about the technical integrity of their findings. If anything, they err on the side of reporting too much, and I think they could move some of the less important tables or statistics to a supplement (e.g., it’s probably not necessary to fully report how Republicans and Democrats vary on political ideology); by carefully considering which analyses are central to their point, they could streamline the paper.

4. Conclusions are presented in an appropriate fashion and are supported by the data.

They are. I would suggest three changes to Figure 1 to increase clarity. First, include what -1 and +1 standard deviation on political ideology refers to (so readers don’t have to scroll all the way back to the methods). Second, I think the graph may be clearer if they used facets only for likability vs. competence, and not for political ideology, which could become the x variable, with condition becoming the grouping variable (in other words, use aes(x = ideology, color = disclosure)). I think this would place all the values much closer together and make them easier to compare. Finally, consider including significance bars to highlight which cells are significantly different from one another.

5. The article is presented in an intelligible fashion and is written in standard English.

The article is quite intelligible.

6. The research meets all applicable standards for the ethics of experimentation and research integrity.

I have no reason to doubt the ethicality of the present research.

7.
The article adheres to appropriate reporting guidelines and community standards for data availability.

The article exceeds these, having both data and code already available. I was further encouraged that the posted code appears to be quite clean and well commented, which is uncommon (especially before a manuscript is accepted). Together, I believe the present article meets the standards of PLOS ONE as I understand them.

Some final suggestions to the authors for future research that may be of importance to the field. First, if the authors have reaction times from the first study, it would be interesting to fit a drift-diffusion model to the reaction times to see whether the presence of ideology is shifting (a) the starting point bias, (b) the rate of accumulation, or both. This might further get at the mechanism of what is going on (i.e., are participants simply requiring less evidence to say the politically consistent individual is competent/likable, or are they accumulating evidence more steeply from individuals who share their ideology?). Second, the updating question is interesting, and I would be interested to see it followed up with (a) implicit measures, particularly if they deviate from explicit measures, and (b) more nuanced cases, such as learning that someone switched political parties. It seems that learning about party affiliation, and how that shifts evaluations, is a potentially fruitful area of future research.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-22-08989R1
Disclosing Political Partisanship Polarizes First Impressions of Faces
PLOS ONE

Dear Dr. Cassidy,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Only minor issues remain.

Please submit your revised manuscript by Oct 17 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org.

When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
- If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Peter Karl Jonason
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: As I mentioned in my previous review, I enjoyed reading this work and think that it offers some valuable contributions to the work on impression formation and updating. I have now read the revision and the authors’ responses to both reviewers. I appreciate that the authors addressed the reviewers’ inquiries overall. I can now see their analytical approach much more clearly, and I believe the manuscript improved as a result. There are, however, a few standing issues for me before I can recommend the work for publication:

1. I appreciate that the authors added new analyses to examine the direct relationship between partisan disclosure effects and perceived partisan threat. The results suggest that, consistently across the two studies, the partisan threat effect is always in line with the perceiver ideology effects. That is, in Study 1, conservative ideology does not bring disclosure effects, nor does perceiving Democrats as more threatening (which correlates with conservative ideology). In Study 2, conservative ideology does bring disclosure effects, and so does perceiving Democrats as more threatening. However, I am concerned that the authors interpret these results as if they show that partisan threat is the mechanism underlying the disclosure effects. For instance, when responding to Reviewer 1, Comment 13: “By including this direct test of the relation between perceived partisan threat and partisan disclosure effects on face impressions, we can more confidently suggest that perceiver ideology affects partisan disclosure effects on face impressions to the extent of threat perceived from different partisan groups.” Also, when they talk about their studies’ particular contribution on p.
4, they suggest that past research has not examined the mechanism, which implies their research does: “[…] one limitation is that they do not explain why this polarization occurs.” My concern about this interpretation has two reasons. First, I would recommend avoiding any claims about the mechanism without an experimental design, or at least a test of mediation (ideally longitudinal, but at least a cross-sectional exploratory test). Second, it seems that some of their findings actually contradict a mechanism interpretation. If partisan threat is the mechanism underlying the disclosure effects on liking a particular face, shouldn’t partisan threat impact participants' liking of a particular (ingroup vs. outgroup) face similarly across liberal and conservative perceivers? That does not seem to be the case in Study 1. Perceiving Democrats as more threatening (mostly conservative participants, given the significant correlation) does not impact face perception, whereas perceiving Republicans as more threatening (mostly liberal participants, given the significant correlation) does. It seems that, even when a conservative perceiver feels threatened by a liberal (and they do to some degree, given the significant correlation), they do not report liking a liberal face less than a conservative face. As the threat effects are interpreted in comparison to different targets (perceiving a face from an ideological group as opposed to faces from other groups) rather than independently (the degree to which perceiver ideology relates to the perception of threat from a certain target), it seems to me that these nuances in the findings do not get the attention they deserve. All in all, I think the authors should refrain from emphasizing threat as a mechanism in their framing.

2. Given the above interpretation, their findings suggest that threat may not fully explain conservative participants’ indifference to faces in Study 1. Then what does?
I think the authors should elaborate more on potential alternative explanations other than perceived threat. For example, could conservative participants be less attentive to the choosing task used in Study 1 and pick who is more likable essentially at random? Relatedly, I think the revised manuscript would benefit from discussing the inconsistent findings across the two studies a bit more in-depth in the general discussion.

3. The authors describe the findings in the general discussion: “the more conservative participants in Experiment 2 may have been more likely to outwardly derogate democrats” (p. 31). Yes, that is what their findings already suggest, but why might this be the case? Again, the general discussion seems to lack a discussion of potential reasons.

More minor issues:

- “Prior work supports disclosed partisanship as more generally polarizing face impression.” (p. 4): The baseline for comparison is missing here (“more” than what?).

- I think partisan cues, and especially the labels used in this work, are not minimal cues as claimed on p. 6. They indeed seem very salient.

- “Although people accurately detect political partisanship from faces…” (p. 7): this sentence reads a bit too deterministic; they "can" detect, or they "tend to" detect?

- It seems odd to drop a participant’s data entirely for not reporting their age in Study 1. What is the rationale behind this decision? Were the results replicated when all participants were included in the analyses (at least this one participant, who did not fail the attention and manipulation checks)?

- How many task versions were there in total (including counterbalancing versions)? That would be nice to include in the main text (although one can probably figure it out by browsing the data files).
- This interpretation was not clear to me: “Main effects of Disclosed Label Veracity (reflecting fewer selected Democrats with accurate versus inaccurate labels) and Perceiver Political Ideology (reflected fewer selected Democrats with higher perceiver conservatism) emerged. Note that the Disclosed Label Veracity effect likely reflects the ideological skew of the sample” (p. 16). Checking the distribution reported in the supplementary materials, the sample seems to be skewed slightly towards Democrats. But wouldn’t we expect the opposite of these results if they were due to this skew (i.e., more selected Democrats with accurate labels)? Perhaps the authors can clarify that part a bit more in the text.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
Disclosing Political Partisanship Polarizes First Impressions of Faces
PONE-D-22-08989R2

Dear Dr. Cassidy,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Peter Karl Jonason
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
Formally Accepted
PONE-D-22-08989R2
Disclosing Political Partisanship Polarizes First Impressions of Faces

Dear Dr. Cassidy:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Peter Karl Jonason
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.