Peer Review History
Original Submission: October 20, 2022
PONE-D-22-29025
Headlines Win Elections: Mere Exposure to News Media Alters Voting Behavior
PLOS ONE

Dear Dr. Pfister,

Thank you for submitting your manuscript to PLOS ONE. I am very sorry about the delay; it was a hard task finding enough experts to review the manuscript. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please note that you are required to make substantial changes. It is not an easy task, and I can't guarantee the manuscript will be accepted in the next round. However, I do believe it has merit, and I hope you will decide to revise and resubmit.

==============================

The reviewers gave some very important insights. Please follow their advice and revise the manuscript accordingly. Specifically, please offer a more substantiated and elaborated theoretical framework (and explain why we need to examine negative and positive mentions, why there is reason to assume that the results will differ, and how your framework and results align with what we know about the well-studied phenomenon of "bad is stronger than good"). Also, please refer to the boundary conditions and limitations of your research, especially those concerning external validity.

==============================

Please submit your revised manuscript by Jun 09 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Guy Hochman, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please amend your current ethics statement to address the following concerns: a) Did participants provide their written or verbal informed consent to participate in this study? b) If consent was verbal, please explain i) why written consent was not obtained, ii) how you documented participant consent, and iii) whether the ethics committees/IRB approved this consent procedure.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file.
In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I have read “Headlines Win Elections: Mere Exposure to News Media Alters Voting Behavior”, which reports four experiments testing the effects of incidental newspaper headline mentions of fake candidates Jones and Smith, followed by the outcome variable of dichotomous voting preference calculated with a chi-square test (“supplemented by Bayesian proportion tests”—which I am admittedly unfamiliar with).

I was excited to read this article. The title and abstract make it seem fascinating as a study and tremendously impactful, with serious ramifications for democracy, public opinion, media effects, press bias, campaign strategy, cognitive susceptibility to mass-mediated exposure, persuasion, etc. I like the structured, detailed, basic approach to testing mere exposure to news headlines, and the rigorous testing and analysis of this subject matter amazed this reader. The paper never mentions priming, but that would seem to be the theoretical psychological process at work in prompting people to parrot whatever name they see in a headline as the name they choose on a ballot.

The first study looks at the impact of neutral and positive news items. The second looks at negative news items (which did not differ to a statistically significant degree, despite participants seeming to indicate that the negative coverage was more memorable).
The third and fourth studies looked at mention frequency, finding that people voted for a name they had been exposed to and would not vote for a name they had not been exposed to. The findings indicate that even if media coverage were impartial—providing the slightest balance—giving practically equitable space on news pages to one candidate could have a stark impact on election outcomes. The findings are thus genuinely alarming for the reality of political media, which are obviously biased in their partisan leanings, suggesting that election outcomes could be decided simply by the slant of media: it could be as simple as which candidate garners the most frequent positive name recognition in headlines, not even as complex as issues or policy stances or ideological purity or legislative records or biography or scandals or electoral experience.

The big takeaway, as stated on p. 16, is: “mere exposure only predicted voting behavior when content was neutral or positive, but not when it was negative.” And that is a surprising result. The ramification for campaign strategy thus seems to be that politicians should do their best to garner positive headlines in friendly outlets, worry less about negative depictions of their opponent(s) and about negative things being said about themselves, and instead try to emphasize positive elements that even their detractors may agree with. These findings seem logical, I guess, but I can certainly also think of anecdotal exceptions. And psychologically we all know that bad is stronger than good.

All participants were drawn from mTurk, which is unfortunate. This decision is problematic in the current context because the participants want to be rated favorably in order to maintain their status on the platform and to be paid by the study sponsor. Hypothesis guessing is thus rampant. One could envision, therefore, the participants assuming that they are expected to select a candidate for the fake voting which lines up with their exposure.
Mere exposure thus risks becoming tautological, a threat to validity. The decision to solicit mTurk participants from all over the world (e.g., about 20% of the participants in Study 1, and about 15% in Study 3, were from India; and 5 people in Experiment 4 were from Brazil, and it is practically guaranteed that those participants do not speak English and would not be voting for candidates named either Jones or Smith [but are indeed probably registered voters—because it’s the law in Brazil that everyone must vote in every election]) also seems unfortunate, casting such a wide net as to make the fake electorate even less realistic as readers of this newspaper voting for either Jones or Smith. Most problematic, of course, is that we don’t know if the mTurkers are registered voters. The ecological validity of a study trying to appraise voter behavior is flawed if the respondents do not engage in voting behavior.

The control and randomization exhibited in the experimentation are pristine. The replication component, though, is questionable if minor tweaks are made to each presentation of the stimulus and then it’s yet another handful of mTurk respondents participating.

The ecological validity is questionable, which is this reader's biggest issue with the paper. Little in the experiment was realistic. Participants were told to “imagine living in a fictional country called ‘Rifaa’ ” and then they cast fake votes between unreal politicians. The goal of the pursuit is generalizable voting behavior in the real world (p. 7), yet by its nature participants are providing precisely nothing realistic if from the beginning they are told to imagine living in a fictional country.
HOWEVER, if our goal is to test the effects of incidental exposure to media reports, and our population of interest is the electorate who hardly pays attention to political discourse and is horribly misinformed about elections—yet votes and wields the power to control election outcomes—then one could argue that this design is indeed realistic. Moreover, if incidental exposure has this sort of influence, then we can only imagine the power of, for example, the New York Times to trumpet one candidate’s name positively over another candidate’s name, and for that headline coverage to prod voters who are favorably inclined toward a candidate to crystalize attitudinally and behaviorally, with the media truly having incredible power to rig elections if they were so desirous. (And indeed we hear reports of social media platforms doing this very type of thing with partisan users, not to mention journalists getting caught trying to put their hands on the scales.)

However, the mere exposure, as I understand the stimuli, comprised 40 exposures. I follow the logic of making it higher than 10, and I see the meta-analysis citations, but if a voter is really that ignorant of a campaign and unfamiliar with candidates, then it seems contradictory that they would sustain upwards of 40 exposures to a given candidate—not to mention that it seems implausible that they would spend this much time flipping through the pages of newspapers.

A deflating admission appears on p. 15: “Even though such effects come with clear boundary conditions – they will not affect partisan voters.” Most voters are partisan voters. And we are at an historical apex of partisan tension—with such animus toward the outgroup and blind allegiance of favoritism toward the ingroup. The exclusion of partisan considerations from this series of studies is thus an omission presenting a tragic limitation of the findings.
This limitation is magnified when one considers that in reality voters may expose themselves to media reports with glowing positive coverage of their co-partisan candidate and rotten negative coverage of the opposing candidate, and that frequency would then drive their behavior through self-selection bias.

The distractor-task element of the experiments—having respondents think they are ostensibly paying attention to stock picking amidst the fake newspaper spread—was admirable. And giving the participants immediate feedback on their investment decisions seemed inventive and comprehensive in the distractor task. Good work. I like it when an experiment tests realism—as much as an artifactual stimulus can be realistic—and indeed a study of voting behavior affected by media exposure seems more realistic if participants are reporting responses based on incidental exposure as opposed to explicit/overt instructions prompted by researchers. As a sidenote, this study reminded me of research that has shown that public opinion polling results have the power to be self-fulfilling prophecies—with implications for rigging polls to boost or depress election turnout.

The material presented on pp. 24-25 is incredible. I cannot imagine how much time and work must have gone into compiling, calculating, and reporting those data. The presentation is impressive and very well-intentioned. However, what it really boils down to, in the eyes of this reader, is unnecessary and uninterpretable. Notice that Figure A cannot be interpreted as meaning anything, nor can Figure C. They are so small, with the colors all mingling together. I would save space in the manuscript by eliminating that content from the paper. The material is also unnecessary.
We don’t need to be told that some candidates get far more media attention than others (in her own book about the 2016 election, “What Happened,” Clinton has no shortage of pages committed to complaining that Trump got more press coverage than her), or that some candidates sustain more negative press coverage than others (and HRC complains in “What Happened” that she seemed to draw more negative coverage for numerous alleged misdeeds, such as deleting 30,000 e-mails of international/national security import from when she was Secretary of State and saving them on a private computer server which may have been hacked by foreign actors). I would perhaps save that content for some other project rather than expending it on this manuscript, and hopefully in future usage more space can be committed to it so it can be large enough to read and for readers to be able to make sense of it.

One of the citations in the References – Hopmann et al. (2010), https://www.tandfonline.com/doi/abs/10.1080/10584609.2010.516798, which assessed news bulletins and daily survey data from a national election in Denmark – may have already reported what was found in the present paper: “it is found that the more visible and the more positive the tone toward a given party is, the more voters are inclined to vote for this party.”

Typos:

p. 4 – “Information from such media outlets has a different quality than information from political campaigns, because it is less likely to be perceived as biased and direct persuasive attempt.” – I think you meant to have the word “a” in there: *as a biased*

pp. 10–11 – I don’t think you mean “datasets”; I think you mean *participants* or *respondents*

p. 11 – “An additional six participants did not to disclose”; delete the word ‘to’

p. 12 – “participants who reported not to have attended the news items at all”; missing a word, perhaps ‘to’—thus: “attended to the news items”

In Figure S2 we see an example of the stimulus, a fake news headline, stating “Jones inviting townsmen to get creative” – which is presumably a positive headline for candidate Jones. I wonder how much ecological validity this has. Political candidates do not typically glean media coverage during an election season for touting that painters get creative; the media tend to focus on “horse race” coverage, or on issue rollouts, or other markers of electability; albeit as noble as it seems to aspire to this sort of uplifting campaign coverage in a newspaper.

I had such high hopes for this paper. I really wish the participants had been real voters—not mTurkers from the U.S. and India and Brazil and wherever else—and that the design had been more realistic of actual media exposure to actual politicians in a real election. I cannot praise the authors enough, though, for all the comprehensive and thoughtful and robust work that has gone into this project; amazing work, really.

Reviewer #2: The manuscript “Headlines Win Elections: Mere Exposure to News Media Alters Voting Behavior” reports four preregistered web studies that explore the effect of media exposure on voting behavior. In all studies, participants were cued into a voter perspective and were given the choice in a mock election between a name that they had frequently encountered when browsing through a series of fictitious news webpages and a name that they had only encountered infrequently (or not at all). Experiment 1 presented neutral to positive headlines, Experiment 2 employed decidedly negative news coverage, whereas the remaining experiments (3 and 4) drew on strictly balanced valences.
The results indicated that exposure predicted voting behavior in a subsequent mock election, with a consistent preference for frequent over infrequent names, except when news items were decidedly negative (Experiment 2). In addition, the authors found (Experiments 3-4) that the participants’ differences in activity ratings between the frequent and the infrequent name constituted a significant predictor of voting behavior.

I found this topic to be an interesting and valuable line of research with important practical implications. Specifically, the manuscript makes a unique contribution to the state of the art by examining the mere exposure effect with a well-designed (and creative) manipulation and under controlled settings that simulated voting behavior. In addition, the experiments also contribute to what is known about the boundary conditions of this effect and its potential mechanisms. That said, I think that some points in the theoretical and empirical sections of the manuscript need to be clarified and further developed. I believe the authors should give further attention to the points below to increase the impact of the paper and its accessibility to a wider audience.

1. The introduction is focused and concise, but I think the authors should elaborate further on previous explanations of the mechanisms underlying the mere exposure effect and their relevance to the current study and hypotheses.

2. The authors chose to report all four experiments in one section (e.g., one results section for all the experiments). As a reader I found it difficult to understand the differences between the experiments (in fact I only figured it out at the end of the results section and after careful reading of the caption to Figure 1). The authors should thus include an overview of the experiments much earlier, which describes the order of the experiments and explains more clearly the differences, the specific contribution, and the theoretical rationale for each experiment.
3. Generally, I think the method used in this study is original and clean. However, one main concern is that the frequent name appeared in 95% of the headlines in Experiments 1-3 but in all headlines in Experiment 4. The 100% vs. 0% exposure in Experiment 4 creates a very extreme situation that clearly differs from the real-world examples described in Figure S1B on p. 24 (Trump vs. Clinton). Adding an experiment which includes different exposure levels for the frequent name (e.g., 55%, 65%, 75%, 85%, 95%) would strengthen the internal and external validity of the findings and be more informative in terms of thresholds and boundary conditions.

4. At the end of the discussion, the authors write: “This effect is partly mediated by an impact on implicit personality theories about the candidates’ competence”. However, the results only showed a weak correlation with activity ratings and no correlation with assertiveness ratings. In addition, if the authors want to justify using a mediation model, they should subject this model to a specific statistical test for mediation.

5. Finally, do the results of this laboratory study bear out the claim made in the title, “Headlines Win Elections”? I am afraid that this conclusion is too strong, since this controlled lab simulation does not capture some of the most essential features of elections. It is very unusual for all the information on one candidate to be neutral or positive, or for that candidate to appear 100% of the time compared to 0% for his or her opponent. Even in election races with less prominent political candidates, such as municipal elections, voters have more information than names, such as pictures of the candidates, which have been found to be a very strong predictor (e.g., Todorov et al., 2005). Thus, while the authors clearly demonstrated the potential contribution of mere exposure, the conclusion and title should be toned down.

6. Minor comments – Please check the References (e.g., item 21 lists the name of the first author but no coauthors).

7. Please consider inserting more figures and analyses from the SM into the main text. For instance, Figure S1B on p. 24, and the main results of the samples after they were selected by specific variables.

Overall, I believe this work will make an important contribution to the literature, but there are still some concerns that should be addressed. I wish the authors good luck in pursuing this interesting line of research.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Shahar Ayal

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Headlines win elections: Mere exposure to fictitious news media alters voting behavior
PONE-D-22-29025R1

Dear Dr. Pfister,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Guy Hochman, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I have read through the authors' responses to the reviewers' comments. I am satisfied. I suggest the manuscript proceed. Thank you

Reviewer #2: I served as Reviewer 2 in the original submission and have now read its revised version. The authors have done a very good job on the revision, and all of my comments have been addressed satisfactorily.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Dr. Shahar Ayal

**********
Formally Accepted
PONE-D-22-29025R1
Headlines win elections: Mere exposure to fictitious news media alters voting behavior

Dear Dr. Pfister:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Guy Hochman
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.