Peer Review History
Original Submission: February 14, 2025
PONE-D-25-07395

Registered report: Stress testing predictive models of ideological prejudice

PLOS ONE

Dear Dr. Thompson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you will find in the reviewer comments, both reviewers find merit in the research presented and both agree that the data may provide a useful contribution to the existing literature. However, both reviewers have concerns that we would like to see addressed in a final version of the paper. Because you have already published a Registered Report Protocol, you will likely need to address these comments to the best of your ability in the Discussion of your paper. Note that Reviewer 1 defers to me in a comment regarding the use of p values in addition to model fit. I recommend that you defer to the planned analyses as indicated in your published protocol (which includes the use of p values). Please feel free to reach out to me if you have any questions regarding the comments; I know it can get tricky when aspects of the paper are already published.

Please submit your revised manuscript by Sep 05 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Corey Cook
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. In your cover letter, please confirm that the research you have described in your manuscript, including participant recruitment, data collection, modification, or processing, has not started and will not start until after your paper has been accepted to the journal (assuming data need to be collected or participants recruited specifically for your study). In order to proceed with your submission, you must provide confirmation.

3. Please review your reference list to ensure that it is complete and correct.
If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript adhere to the experimental procedures and analyses described in the Registered Report Protocol? If the manuscript reports any deviations from the planned experimental procedures and analyses, those must be reasonable and adequately justified.

Reviewer #1: Yes
Reviewer #2: Yes

2. If the manuscript reports exploratory analyses or experimental procedures not outlined in the original Registered Report Protocol, are these reasonable, justified and methodologically sound? A Registered Report may include valid exploratory analyses not previously outlined in the Registered Report Protocol, as long as they are described as such.

Reviewer #1: Yes
Reviewer #2: Yes

3. Are the conclusions supported by the data and do they address the research question presented in the Registered Report Protocol? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. The conclusions must be drawn appropriately based on the research question(s) outlined in the Registered Report Protocol and on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: This was a paper extending some previous work building predictive models of prejudice toward various groups. I very much support iterative progression of these more formal modeling approaches, and I thought the adaptation to relative differences, another very common operationalization of prejudice in the field, was an important extension. Generally, the authors find additional support for the ideology model over some other specifications. Thoughts and concerns follow.

Major concerns:

Because this paper is basically extending the Brandt 2017 model, some of these questions and critiques necessarily apply to that work as well.
It's a little odd because that work is published and established, but I still think it makes sense for the authors to engage with some of the questions in the present work.

- One thing I've always wondered with these models is about construct validity, or how much this "rating of ideology" is capturing the same or similar latent construct as prejudice. On their face, they are certainly distinct to me. But to the extent this variable is capturing "the extent to which I see these various groups on my team or not," it's less surprising to find such a strong link with how much I like those groups.

- A related concern is that these models depend on ratings on a bipolar liberal-to-conservative scale. Such a scale would just not work in the majority of countries in the world, in which the political spectrum can't be reduced to one dimension. If our goal is purely non-explanation-based prediction in the US, that's totally fine of course. But it makes it more challenging to argue this is a causal model of the more universal force of prejudice, or it would have to be tweaked in some way. I guess this critique hinges on how the authors are interpreting this model, and I'd like to see that spelled out. I've grouped these two points together because they have perhaps been engaged with by the authors elsewhere, and I think they could be handled here by some mention in the General Discussion.

- Regarding the modeling, I found the use of demographic controls conceptually problematic. If the goal is to predict prejudice from ideology, status, choice, and anything else, why should these be adjusted for the demographics of the participant? To adjust for gender, for example, is to say that men are just inherently more prejudiced toward xyz groups in a way that can't be explained or captured by other psychological variables, and I just don't believe that to be true. Plus it changes the very spare model (ideology) to be a lot more conceptually clunky: who knows what other psychological baggage is being captured by "man" and "educated," but my guess is quite a bit. How much are the results contingent on the inclusion of these demographic controls?

- My largest and perhaps controversial philosophical concern was the use of null hypothesis testing for model selection, and looking at p-values as evidence. I just didn't find it appropriate for the goals here. Models with the lowest average MSE simply fit the data best, regardless of whether they fit "significantly" better than other models with higher MSEs. The data science world would focus only on model fit, and this feels like a psych/data science mashup, employing both frameworks for evidence simultaneously. I'd defer to the editor, but I wouldn't be averse to cutting the p-value framework from the paper entirely and relying only on model fit to determine the best models (and it would result in an identical conclusion to the line already being walked in the General Discussion).

- What is an "ideologically relevant group" exactly? There needs to be a clear definition for this, rather than just what sounds ideologically relevant to the authors and readers. I'm sympathetic in some respects: yes, those groups certainly seem ideologically relevant. But some of the groups not included in this bundle also seem ideologically relevant, and when I start squinting, many of these groups seem like they could also be ideologically relevant. A clear definition would sort this out, especially because the authors tentatively posit on page 25 that prediction success is better for these groups. Without some explanation, I feel this risks being a "just so" story.

- On page 29, the authors mention they combined explicit and reaction time measures. What does this mean? How were they combined? Some folks consider implicit vs. explicit measures of prejudice quite distinct, so this feels odd.
Minor concerns:

- Intro, 2nd paragraph: the prejudice literature has offered far more than just 3 predictors of prejudice. Maybe revise this sentence to say that Brandt identified three in his models; the entire field has probably put out at least 50.

- A line in the General Discussion mentions this is the "first time" predictive models were built for relative prejudice. While this is possibly true for the Brandt ideological prejudice models, it's not true in general. Hehman & Neel recently presented some predictive models of prejudice with relative measures in Psychological Review (and non-relative measures, a nice parallel to the current work and Brandt 2017). In general, I don't always find lines in papers noting "first" so important, since that's somewhat obviated as the point of the paper, and they could just be adjusted or trimmed.

- Lots of measures; it would be great to have a correlation table between everything somewhere. Apologies if I missed this.

I appreciate the opportunity to review this interesting work.

Reviewer #2: This paper presents a "stress test" of models for predicting the ideology-prejudice association, comparing models that incorporate the perceived ideology, status, and choice in group membership of various target groups. I have focused this review on the results, conclusions, and adherence to the initial registered report protocol, given that the methods and research question have been reviewed previously. Overall, the manuscript adheres to the procedures and analyses described in the protocol, and exploratory analyses are clearly marked. The authors were careful not to place too much weight on exploratory analyses relative to pre-planned analyses. However, my main concern with the manuscript is that the results of one of the exploratory analyses drastically alter the overall conclusions that I believe can be drawn from the manuscript.
Despite this analysis not being pre-planned, I believe the changes to the results are sufficiently large that the authors should temper their conclusions in the paper to take them into account. I expand on this point below. If the conclusions and discussion can be revised to better incorporate the results of this analysis, I believe this manuscript will be well-suited for publication and provide an important contribution to the literature.

In my reading, the exploratory analysis reported on page 25 very significantly alters the results and warranted conclusions of the paper. The authors find that when they exclude target groups that are defined by ideology (e.g., Democrats and Republicans), there are no differences among the predictiveness of the various models (including the null model). This seems highly significant to me because the claim that perceived ideology predicts prejudice is only theoretically meaningful or interesting insofar as it is about groups not explicitly defined by ideology. It seems trivial to say that the biggest predictor of how much Republicans like Democrats (and vice versa) is their perceived ideology; in fact, these seem more like control groups than target groups to me. The results of the exploratory analysis indicate that the advantage of the ideology models over the status, choice, and null models is solely due to the inclusion of these groups that are explicitly defined by ideology. However, the authors' discussion, conclusion, and abstract still make strong statements about the benefit of the ideology model(s) over the other models, which I believe are unwarranted given these results. I believe these conclusions and interpretations should be revised to reflect the fact that ideology provides no better prediction than the other models for the groups that are not defined only by ideology.
On a separate note, the logic of the analyses was not always intuitive, and I believe in some places it could benefit from additional examples or explanation. At the top of page 20, for example, the authors discuss calculating a predicted ideology-prejudice association for the group "Black people". Here, it would be very helpful to walk the reader through what each of the values represents: what does the final estimate of -0.49 mean in this context? Further, the authors at times refer to predicting prejudice and at other times to the ideology-prejudice association, and I think this distinction could be clarified and highlighted more. As I understand it, the main purpose of the study is to predict the association between ideology and prejudice; however, on page 20, for example, the authors seem to be working with actual prejudice values rather than associations ("we subtracted the actual ratings of prejudice across groups from the estimates from the equations, squared these values, and then found the average"). Additional explanation would be helpful here: why are actual prejudice values rather than associations being predicted? Finally, at times the distinction between the participant's ideology and the perceived ideology of the group became confusing, and I would encourage the authors to explicitly state which of the two they are referring to whenever they use the word "ideology," in order to reduce cognitive load for the reader.

Below are a few additional minor questions:

1. Could the authors expand on the choice to use mean squared error as a metric to compare models, rather than a metric that takes into account the number of parameters included in the model?

2. Were all the demographic control variables in the models used to get the observed ideology-prejudice associations the same as the ones used by Brandt (2017) to get the predicted ones?

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?).
If published, this will include your full peer review and any attached files. If you choose "no," your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-25-07395R1

Registered report: Stress testing predictive models of ideological prejudice

PLOS ONE

Dear Dr. Thompson,

Thank you for your resubmission to PLOS ONE. I think you've done an excellent job addressing the reviewers' concerns, and I think your final manuscript will make a nice contribution to the journal and to the literature. Please consider your manuscript conditionally accepted (although I had to select 'minor revision' to solicit final edits). Once you submit a clean, unmasked version, I will formally accept the manuscript. (One pedantic side note: in a few places you reference "this data." Please correct to "these data," as "data" are plural; I shouldn't care, but I hound my Methods students about this point, so I felt compelled to note it here.) You can ignore some of the boilerplate instructions below (e.g., a rebuttal letter), which pertain specifically to resubmissions. But please let me know if you have any questions or issues.

Please submit your revised manuscript by Oct 30 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io, as described in our first decision letter.

We look forward to receiving your revised manuscript.

Kind regards,
Corey Cook
Academic Editor
PLOS ONE

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.
Revision 2
Registered report: Stress testing predictive models of ideological prejudice

PONE-D-25-07395R2

Dear Dr. Thompson,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Corey Cook
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-25-07395R2

PLOS ONE

Dear Dr. Thompson,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Corey Cook
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.