Peer Review History
Original Submission: September 30, 2020
PONE-D-20-30828
Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation
PLOS ONE

Dear Dr. Sloman,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Although both reviewers were positive about your manuscript, they raised a number of important issues that should be considered in a revision. I summarize the main points below, but if you do submit a revision you should address all of the reviewers' comments.

First, your main argument hinges on the distinction between semantic and statistical information. As Reviewer 2 notes, however, that distinction is not clear-cut, and some approaches equate the two. This needs to be clarified; editing and changes in terminology (as suggested by Reviewer 2) are probably the way to go, although there may be other possibilities.

Second, the description of the methods and the reporting of the results require a major overhaul. In many places it is difficult to tell how the analyses were conducted, and in other places it is unclear what the results mean. For example, when you say "mixed model" you should provide more information about the nature of that model (e.g., what were the random and fixed effects?). And for all reported effects you should follow APA guidelines and report the df and effect sizes. Reviewer 2 has multiple, detailed comments in this regard, and you should respond to all of them.

Third, both reviewers encourage greater contextualization of your work and additional speculation regarding the meaning of your findings, especially in the context of single words vs. phrases and sentences. Although this is less critical, these comments do need to be addressed.
Please submit your revised manuscript by Dec 19 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Thomas Holtgraves, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. Please note that according to our submission guidelines (http://journals.plos.org/plosone/s/submission-guidelines), outmoded terms and potentially stigmatizing labels should be changed to more current, acceptable terminology. In order to avoid conflation between gender and sex, "female" or "male" should be changed to "woman" or "man" as appropriate, when used as a noun.

3. Please upload a new copy of Figure 3 as the detail is not clear. Please follow the link for more information: https://blogs.plos.org/plos/2019/06/looking-good-tips-for-creating-your-plos-figures-graphics/

4. We noted in your submission details that a portion of your manuscript may have been presented or published elsewhere. "No. The authors have a separate project on how people respond to the valence of political speech, which is cited in the submitted manuscript as [38] Sloman SJ, Oppenheimer D, DeDeo S.
One Fee, Two Fees; Red Fee, Blue Fee: People Use the Valence of Others' Speech in Social Relational Judgments; in prep. Some data for this paper were collected simultaneously with data for the accompanying manuscript, and are available at the same public repository. Because the research questions are distinct and the stimuli were analyzed separately we do not consider this a related manuscript, but are happy to provide the working paper to editors upon request." Please clarify whether this [conference proceeding or publication] was peer-reviewed and formally published. If this work was previously peer-reviewed and published, in the cover letter please provide the reason that this work does not constitute dual publication and should be included in the current manuscript.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: I Don't Know

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party) those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper presents interesting work on how even subtle linguistic cues can be informative of a speaker's political ideology. While participants perform better than chance at assessing a speaker's party identity, effects are generally quite small, although significant. This may be expected given that these studies use as stimuli single words that are isolated from any other cues or context.

In all four studies, the U.S. Congressional Record is used to identify words that may be diagnostic of a speaker's party identity. However, Congressional speeches are only one form of political language, and probably one that many people are not exposed to. To their credit, the authors compare against presidential debate corpora, but only 1,421 of the 2,408 polarized words that overlap between the Congressional Record and presidential debates have the same estimated polarity.
While they manage to find effects even with this limited overlap, it would be informative to look at sensitivity to these statistical signals across different contexts (e.g., news media, social media, or even public debate data vs. Congressional speeches).

In the introduction, "gun laws" vs. "gun control" is one example of how political identity might be inferred from language. However, the studies here consider only single words. Looking at bigrams or trigrams might be informative, allowing examples like the above, or "tax reform" vs. "tax simplification" (mentioned in the discussion), to be included. My understanding of semantic equivalence is murky here, since these bigrams refer to the same policies, though the individual words "laws" and "control" clearly convey different semantic information. Given that the authors write that "gun control" and "gun laws" "contain the same semantic information" (line 37), I assume these sorts of phrases would be considered semantically equivalent.

It may also be interesting to consider whether words that are linked to policies are particularly informative. On a broader level: is there some higher-order factor that connects words that are indicative of political affiliation? Whether that's words referring to policies, something about complexity/analytic thinking (see Jordan & Pennebaker, 2017), or something else entirely.

While I think the authors successfully illustrate their hypothesis that meaningful speech variation by political party exists, a potential next step could be looking at how different terms may be informative in different time periods. One of the item pairings that seemed particularly interesting was "wall"/"barrier"; this seems like it would only be indicative in the time of Trump's campaign/election. The transcript analysis was limited to 2012-2017; was there a particular reason for choosing this time frame?
It seems important to explain this choice given that the results are likely sensitive to variations over time.

Reviewer #2: This paper reports the results of three studies (as well as a wealth of supporting analyses) to examine whether lay perceivers correctly discern whether given words are more frequently said by Democrats or by Republicans. I thoroughly enjoyed the paper and find it to bring an exciting approach to an area of widespread interest.

My main concern centers on the clarity and consistency of the methods. As I elaborate below, the reporting of the methods and results started out on a strong foot but quickly became quite complex. As a result, even with a background in these kinds of analyses, I was lost as to what exactly the authors were doing statistically to support their conclusions. This makes it difficult for me to evaluate whether the technical approach is sound. Even in Study 1, when the authors report "an ANOVA on a mixed model", I had no context for this analysis: what was the mixed model? What was the baseline model that they were doing the ANOVA comparison against (presumably this was the model comparison approach, but again, it wasn't clear to me)?

Another point of confusion, for illustration: from the results presented on line 399, it seems the R word recoverability was greater than the recoverability for D words? Is this true? That is, it seemed that the R result was further from the midpoint than the D result. Separate one-sample t-tests comparing effect sizes would inform this interpretation. If it is true, then please offer a discussion or speculation as to why this might be the case.

Finally, a third major point of confusion in the results centered on the figures: although I appreciated the effort made to clearly describe the graph, I really struggled to wrap my head around exactly what was being visualized. More legends in the figures may be helpful.
Please take a second pass at these visualizations to generate other possible ways of representing the results that make them more interpretable for the general audience of PLOS ONE. I found Figure 3 to be more interpretable than the others, so perhaps using that one as a model for the others would be helpful.

A substantial revision to the writing will be required to streamline and signpost (i.e., using numbering or "first... second... third", etc.) exactly what analyses are being done. Further, the authors may want to be much more systematic in the analyses they do for each study. As is typical in the behavioral sciences, I would recommend starting with descriptives and basic inferential tests (e.g., how many words were correctly classified, and was that number significantly different from 0 by a one-sample t-test), and then moving on to one clear mixed-effects model that examines both item-level and person-level variance. The authors know their data much better than I (obviously!), so they will know exactly which model is most clear and compelling for their data. Present that model first and clearly (and consistently across studies), and then lay out the remaining additional analyses that probe secondary questions.

My second major suggestion concerns the terminology used. The authors spend a good amount of time in the introduction and throughout the manuscript emphasizing the dissociation between "statistical" and "semantic" information. But unless we know the very precise definition of "statistical" that the authors are referring to, the vagueness of that term makes this distinction hard to understand. If we use the lay concept of "statistical" as referring to frequency in the environment, then "semantic" information can also be inferred from "statistical" information (indeed, the "distributional hypothesis of semantics" from Firth and others would argue that semantic information is almost entirely driven by statistical information).
My suggestions are (1) to really elaborate and emphasize the examples first, and only afterwards point out that the exact same semantic concept is being represented in idiosyncratic ways (gun laws vs. gun control, or you guys vs. yinz); and (2) to consider alternative terminology. "Statistical" is perhaps too vague a term to capture this precise meaning. Is it better perhaps to say "statistical dialect"? Really it's conditioned variation on the basis of group membership, so why not just use that terminology throughout? ("Conditioned variation over and above semantics" makes more sense to me than "statistics over and above semantics".)

Finally, I have a number of line-by-line small points where I was confused by the writing or results:

1. I was very confused on my first reading of the abstract: what does "direction of statistical information" mean? I don't think a lay reader would understand this language, since "statistical information" hasn't been defined. Perhaps wording like "follows the trends of the ground-truth data" would be clearer? Alternatively, the paragraph starting on line 27 could be re-reported in a condensed form in the abstract to address my concern. The definitions in that paragraph were well described.

2. Line 53: Spell this point out for your reader. What exactly do you see as the clear relevance of this work?

3. For all the reported numbers of R and D words (e.g., lines 136-38), is the author merely reporting classification based on the SIGN of the effect size (not the MAGNITUDE)? Were these distributions of R and D words significantly different from a null distribution around zero? Throughout, I think it would be helpful to have more reporting of the distributions and magnitudes of effect sizes.

4. What does PKL stand for exactly? A sentence or two more is needed in this description.
The log-odds description is great, but then the author jumps into an altogether different approach (or so it seems); can the description be streamlined to focus only on the metric that is of most interest (with the remaining metric moved to the supplement)?

5. Line 174: Please report proportions of words rather than exact raw values. 1,421 words is how many of the original set?

6. Please justify the point of Study 2 much earlier, either by laying out the general program of studies and/or by including a short intro to the study before the methods. As I was reading through the methods of Study 2, I didn't know the point of what I was reading. Similarly, it would be helpful to revise the title of this study to be more informative (e.g., something referring to trying to disentangle the role of semantics, and using cosine similarities to do so).

7. Line 287: How many pairwise comparisons was this in the end?

8. Line 320: One cannot "strongly" reject the null hypothesis in a frequentist framework; if the author wants to discuss magnitude of evidence, I'd suggest going Bayesian.

9. Line 322: The lack of significance for the item-level analyses here is likely because of limited power (assuming the author is aggregating at the level of items? This is not clear to me; see my comments above about clarifying your methods!).

10. Also, throughout, for all models and t-tests, please report df and effect sizes (e.g., Cohen's ds), as per APA format.

11. Why not look at sentence-level effects? These single words out of context are difficult to interpret. More justification is needed than what is given (i.e., that language learning and change occur at the word level); the paper is not interested in language learning or change, so why is that justification relevant? The word level may be a good start, but it would be interesting to extrapolate beyond it and think about the implications for speech patterns more generally.
As the authors show, groups of words provide a much stronger signal than a single word; presumably this would be even more true if they were constructed as sentences or as paragraphs.

12. And finally, I'd love to see the authors speculate a bit about the downstream consequences of this effect on persuasion and influence. Is speech tailored to your own party more likely to persuade you? This seems particularly pertinent in a time of such political polarization and "fake news" across party lines; it would be exciting to see this mentioned.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nina Wang
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
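The analysis sequence Reviewer 2 recommends above (descriptives first, then a one-sample t-test against chance, reported with df and an effect size such as Cohen's d per APA format) can be sketched as follows. This is only an illustrative sketch: the participant data, sample size, and chance level are all invented and not drawn from the manuscript under review.

```python
import math
import random
import statistics

random.seed(1)
# Simulated per-participant proportions of correctly classified words.
accuracy = [random.gauss(0.55, 0.05) for _ in range(40)]

# (1) Descriptives first.
mean_acc = statistics.mean(accuracy)
sd_acc = statistics.stdev(accuracy)  # sample SD (n - 1 denominator)

# (2) One-sample t-test against the chance level of 0.5.
chance = 0.5
n = len(accuracy)
t_stat = (mean_acc - chance) / (sd_acc / math.sqrt(n))
df = n - 1
cohens_d = (mean_acc - chance) / sd_acc  # one-sample Cohen's d

# APA-style report: M, SD, t(df), and effect size.
print(f"M = {mean_acc:.3f}, SD = {sd_acc:.3f}, "
      f"t({df}) = {t_stat:.2f}, d = {cohens_d:.2f}")
```

Note that for a one-sample test, d is simply t divided by the square root of n, so reporting both is cheap once the t-test is in hand; the mixed-effects model the reviewer asks for would then be layered on top of these basics.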
Revision 1
PONE-D-20-30828R1
Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation
PLOS ONE

Dear Dr. Sloman,

I have read your revised manuscript (PONE-D-20-30828R1, "Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation") as well as a review of the manuscript provided by one of the reviewers (Reviewer 2) of your original submission. The reviewer and I both found your revision responsive to the concerns raised in the initial round of reviews. There are, however, a few additional tweaks to be made before your manuscript can be accepted for publication.

First, please address all of the minor issues raised by Reviewer 2. Second, because your samples consist of MTurk workers, I would like you to mention some of the limitations (e.g., non-naivety and trustworthiness) of using participants from this platform. Third, for Study 2, state in the method section that participants provided judgments of 100 word pairs, 12 of which were for a different study. Fourth, please add a sentence within the manuscript that provides a reference to the location of your data and code (and please make sure that the studies in this manuscript are aligned with what is contained on your GitHub page).

That's it. Once you submit a revision I'll review it and make a determination. Note that I won't be sending this out again for review.

Please submit your revised manuscript by Mar 01 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Thomas Holtgraves, Ph.D.
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #2: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available?
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #2: Yes

6. Review Comments to the Author

Reviewer #2: As a previous reviewer of this paper, I appreciated the authors' attention to my earlier comments. Their extensive revisions have greatly strengthened the clarity of the methods, the clarity of the conceptual distinction between the sense of words and conditioned variation in words (I agree with this terminology!), and the overall contribution of the work. I have a few remaining minor comments for consideration:

1. I might have missed this the first time, but I was surprised by the fact that the correlation between log-odds for Congressional and Presidential speeches was only a small to medium correlation.
Although it is significant (which can provide some validation of the method), it also points to the variation between Congressional and Presidential speeches. A single sentence or two elaborating on these differences would be helpful, either in the main text or the SM. Where is this variation coming from?

2. Relatedly, which corpus is actually a better reflection of the signals that lay perceivers are learning from? Presumably the Presidential speeches, as the authors themselves note. So why did the authors not use that as the primary corpus of interest for their initial generation of word stimuli? I understand the points the authors made to justify using the Congressional speeches, but if their goal is to look at lay perception of politically conditioned variation, then doesn't it make sense to look at the most "lay" corpus for word differences? I also realize it's impossible to go back and change this now, but it might help to address this in a footnote as a limitation or direction for future work.

3. Line 260, "to reject the null that this value was greater than or equal to…": is this inclusion of "greater than" correct? It seems like you rejected the null that the value was equal to the indifference point. The same issue arises for the Democratic stimuli reported in the following paragraph. Please double check or clarify.

4. Thank you for the additional clarity about the mixed model and effect size estimation.

5. Line 373: Where did the choice of N = 88 pairs come from? Was there some principled cut-off for the cosine similarities? Would choosing only the highest cosine similarity pairs weaken the effect even further? Such a finding would suggest that differences in word sense do, in fact, have some impact on the discrimination of R vs. D words.

6.
Another alternative explanation for why Republican discriminability may be higher (in addition to the frequency of exposure to speech, as the authors mentioned) is that lay perceivers may have stronger stereotypes about Republicans than about Democrats. In other words, they may have clearer notions about the traits and cognitions of Republicans than of Democrats. My intuition is that, especially during Trump's presidency, lay perceivers see Republicans as a more extreme social group than Democrats. Another speculation to play with, perhaps?

7. The figures are all much improved!

7. Do you want your identity to be public for this peer review?

Reviewer #2: No
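Several comments in this round turn on the log-odds statistic used to score how strongly a word leans toward one party's speech. The manuscript's exact metric is not reproduced in this correspondence; the following is a generic, hypothetical sketch of a smoothed log-odds ratio over invented counts, purely to illustrate the kind of quantity under discussion.

```python
import math


def log_odds_ratio(count_a, total_a, count_b, total_b, alpha=0.5):
    """Smoothed log-odds ratio of a word's use in corpus A vs. corpus B.

    alpha is an additive smoothing constant so zero counts stay finite.
    Positive values mean the word leans toward corpus A (e.g., one
    party's speeches); negative values mean it leans toward corpus B.
    """
    odds_a = (count_a + alpha) / (total_a - count_a + alpha)
    odds_b = (count_b + alpha) / (total_b - count_b + alpha)
    return math.log(odds_a) - math.log(odds_b)


# Invented counts: a word used 90 times in 10,000 tokens of one party's
# speeches vs. 10 times in 10,000 tokens of the other party's speeches.
score = log_odds_ratio(90, 10_000, 10, 10_000)
```

Computing such scores separately on Congressional and Presidential corpora and correlating them, word by word, is one way to obtain the small-to-medium cross-corpus correlation the reviewer asks about.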
Revision 2
Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation
PONE-D-20-30828R2

Dear Dr. Sloman,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Thomas Holtgraves, Ph.D.
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-20-30828R2
Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation

Dear Dr. Sloman:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Thomas Holtgraves
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.