Peer Review History
Original Submission: December 4, 2020
PONE-D-20-38206
When research is me-search: Researcher interests affect laypeople’s trust in science depending on their pre-existing attitudes
PLOS ONE

Dear Dr. Altenmüller,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I have now received three reviews of your MS. All three reviewers see merit in the research presented in the paper. Two reviewers recommend publication after minor revision, whereas the third has recommended major revision, in particular in relation to the statistical analysis applied. In view of the comments of reviewer two, I am recommending major revision and would like to invite you to revise and resubmit the MS. As well as addressing all of the reviewers’ comments, please address the issues associated with the regression analysis raised by reviewer 2 in your resubmission.

Please submit your revised manuscript by 1st June 2021. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org.
When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Lynn Jayne Frewer, MSc PhD
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: No
Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: I like this line of research very much and think it's useful and largely well done. I wish you had a more diverse and larger sample, but maybe next time. It's not necessary; it'd just strengthen the results (especially in study 1, where you don't have a lot of non-LGBTQ-friendly respondents).

Some concerns I have:

1. Ecological validity for the 'non-affected' condition: It's not clear to me why anyone would ever say that they're studying something because they don't identify with it.
I think, more likely, someone might just NOT say why they're studying something and just describe what they're studying with no context. Alternatively, a scientist might say they study something because (a) it personally affects them, or (b) it's just an interesting set of puzzles, or (c) both.

2. It's also probably worth making clear that disclosing that the research personally affects the researcher is conceptually different from saying that the researcher is motivated by benevolence. In this regard, I worry a bit that both disclosures are kind of making a (weak) benevolence claim that might be attenuating the results. As future research, you might consider adding a clear "and my motivation is to help this community ..." message as a condition. I also don't really understand the last sentence in either condition; they seem fairly vague.

3. Is 'affected' the right word? 'Interested' (as in conflict of interest) seems closer, but that word may have too much baggage. 'Affected' has baggage too, though, and we're all 'affected' by research in these areas. Maybe 'involved'?

4. I do not understand your argument about how the second study rules out social identity protection. More generally, I don't know that social identity protection and attitude protection are necessarily incompatible, given that my identity might make it more likely that I hold certain evaluative beliefs (aka the basis of attitudes). And the fact that you're vegan doesn't mean that veganism is a core part of how you identify. If you want to get at identity protection, I'd want more evidence than this. I think you'd be better off just talking about motivated reasoning in a more general sense.

Some technical suggestions:

1. You sometimes present percentages with two decimals, which is essentially four decimals (inasmuch as 21.34% = .2134), but your sample is less than 1K people. This would seem to suggest that you're trying to be more precise than is reasonable. I'd just stick to two decimals throughout unless you have a good reason.

2. Does it make sense to call your behavioral trust measure 'credibility'? I worry a bit about this because there's so much variability in the trust literature when it comes to what we call the various constructs. In this regard, I note that credibility is often conceptualized as competence (even going back to Hovland in 1951). I think it might be safer just to talk about 'behavioral trust' as willingness to be vulnerable, as that fits with the Mayer/Davis/Schoorman model from which the Muenster epistemic trustworthiness inventory is derived.

3. It'd be great if your tables and figures were a bit more descriptive such that they can 'stand alone'. In this regard, you might add sample sizes as well as variable ranges. The figures, in particular, are impossible to understand without referring to the text. In those cases, a detailed note is one important step, but it'd also help to better label your axes and use a readable font.

4. Does it really make sense in the second study to include the bit about the overall field? In this regard, is vegan research really 'the field'? Why wouldn't the field be vegetarian studies? Or food studies? Or nutrition? Or biological science? You define the field so narrowly that I'm not sure what this additional element adds.

5. You note that the effect sizes for expertise were smaller than the effect sizes for integrity/benevolence, but can you really say that? I don't recall you testing the size of the difference, noting that the estimates have error associated with them such that they could easily overlap.

Reviewer #2: Thank you for making your data and analysis scripts openly available. The code seems to run fine and I can reproduce the results.

Method

Sample size

1. I appreciate that you took the effort to provide a power analysis, but post-hoc power is not particularly informative.
What the reader needs to know is: given a specific sample size (N = 314), what is the smallest effect one can detect? In this case, it is r = .16 80% of the time, or r = .20 95% of the time. Or, since you're using means comparisons, this is roughly d = .32 80% of the time, or d = .41 95% of the time. [I used the pwr() package for this.] I recommend adjusting this description to provide this information.

Results

What I would like to see presented are the partial r correlations from the regression. It seems that, from your description, the most informative predictor here is 'participant positive attitude', and that needs to be compared to the other IVs. Yes, there is an interaction, but the slopes are all positive in Figures 1a and 1b, suggesting it's simply a difference in strength rather than a reversal of the direction of the effect. This can be seen in the figures (which are nice, by the way, but please upload higher-quality versions). I recommend reporting the full regression model in a table, along with partial correlations and standardized coefficients rather than only the raw coefficients in text. It would really be nice to have all the results in one table so that they can be easily compared across studies: I suggest including standardized betas and partial correlations.

'Evaluation of the field'

It's claimed that participants' positive attitudes towards veganism moderate evaluations such that the effect changes direction, and the text points to Figure 2c. I think this must be a mistake: the slopes and CIs reported in-text are simply opposites of each other (e.g. b = -.43 and b = .43). Further, Figure 2c depicts only negative slopes. Figure 2a, in contrast, shows slopes with opposing directions. Please double-check this.

The adjusted R² for trustworthiness is tiny: R² = .05. I'm not sure what to make of these effects, leaving me wondering about the value of this model. The size is similar between both studies, suggesting this is probably accurate.
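Reviewer 2's sensitivity figures can be reproduced without the pwr package. The following is a minimal Python sketch, assuming the standard Fisher z approximation for a two-sided correlation test and the usual r-to-d conversion for equal group sizes; the function names are illustrative, not from any of the materials under review:

```python
import math
from statistics import NormalDist

Z = NormalDist().inv_cdf  # standard normal quantile function


def smallest_detectable_r(n, power, alpha=0.05):
    """Smallest correlation detectable with the given power,
    two-sided test, via the Fisher z approximation (SE = 1/sqrt(n-3))."""
    return math.tanh((Z(1 - alpha / 2) + Z(power)) / math.sqrt(n - 3))


def r_to_d(r):
    """Convert a correlation to Cohen's d, assuming equal group sizes."""
    return 2 * r / math.sqrt(1 - r ** 2)


for power in (0.80, 0.95):
    r = smallest_detectable_r(314, power)
    print(f"power={power:.2f}: r={r:.2f}, d={r_to_d(r):.2f}")
# prints:
# power=0.80: r=0.16, d=0.32
# power=0.95: r=0.20, d=0.41
```

This matches the values the reviewer reports for N = 314 (r = .16 at 80% power, r = .20 at 95%, and d = .32 / d = .41 respectively).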
Again, without partial r values, it's hard for the reader to compare the relative strength of these effects and judge their merit. The adjusted R² values for credibility and evaluation of the field are upwards of .20, which is more convincing (and this is noted in the discussion).

In the Study 2 discussion, the authors suggest that "The moderation…persisted when controlling for self-identification as a vegan… this has to be interpreted carefully." I suggest you remove this caution or ignore the effect: either interpret something or don't. There is no carefulness when you report an effect with an accompanying p-value. If you are not convinced that this result is not a fluke, then simply describe your data. Either way, please remove this claim of caution.

You also note that the expectation of neutrality was affected by the manipulation and claim this effect is "small" (d = .22). This is actually more of an average effect size in social/personality studies (see Funder & Ozer, 2019 and Gignac & Szodorai, 2016). I wouldn't be so quick to dismiss it. Additionally, earlier on in the manuscript (under 'Main effect of being affected') you interpret a similarly sized d = .25 at face value. If you have determined an alpha level of .05 as your criterion (which you did, and you can also see the smallest detectable effect size from the power analysis), you should stick to that criterion throughout and simply interpret the effects.

Finally, I admit that I am a bit puzzled by the overall pattern here: participants were less critical when Dr Lohr was affected; they saw the results as more credible when Dr Lohr was affected; they trusted the results more when Dr Lohr was demonstrably biased? What would happen if you asked participants to rate whether the researcher is biased?

General Discussion

This is again an issue in the general discussion: you claim that 'when participant attitudes were less positive, this pattern flipped…', but this pattern did not flip.
We simply see a reduction in strength.

Conclusions

In sum, I think the key variable here is participant attitudes towards the topic. I'm not convinced of the moderation effect here, for several reasons. First, it does not reverse the direction of the effect: this is key for the claim. Second, the moderation itself is small; there's not much of a difference between any two given slopes (with the exception of Figure 2a). Third, the whole idea strikes me as strange: participant attitudes always drive the effect in a positive direction. Even when the researcher admits to being biased, participants don't evaluate him more negatively. Plus, 'bias' itself wasn't measured, as far as I saw.

I think this is an interesting set of studies, but it needs a bit more data to rule out some possible effects, and I think the interpretation needs to be revised. I'd like to see these points addressed in a revision, and I hope my comments are helpful in that regard.

References

Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156-168.

Gignac, G. E., & Szodorai, E. T. (2016). Effect size guidelines for individual differences researchers. Personality and Individual Differences, 102, 74-78.

Reviewer #3: This paper examines how people's attitudes toward a topic influence their evaluation of research and researchers' trustworthiness, in the context of research topics which have personal relevance to the researchers. I believe the aim of this study is very interesting and timely, and a great addition to the research; the study is well done. Also positive are the OSF documentation and the preregistration of Study 2.
While I am quite enthusiastic about the paper, I also have a few issues to remark on:

Abstract: Reading your abstract for the first time, I was a bit confused about whose attitudes and interests you were talking about when, and what you were measuring. Maybe lead the reader more directly to the point, instead of mirroring your introduction (even though the intro reads quite smoothly)?

Methods and Results:

• Please report the results of the factor analyses justifying the scales for testing your hypotheses; this can be in the supplement as well.
• p values should be reported in accord with APA reporting guidelines, not as p < .05.
• Clearly state what you are testing when comparing the groups with attitudes ±1 SD from the mean: "A regression was performed…", or as appropriate.
• Tables 1 and 2, and probably a question of taste: I don't know of a convention in which Cronbach's alphas are displayed in the diagonal of a correlation table. Since they're conceptually something different from correlations, I would suggest moving them to a single column.
• I don't see a theoretical reason why you should test trustworthiness, and then the two scales separately; would you please justify this? (I think this should be avoided if there is no indication from theory, and since the results are interdependent anyway. Reminds me of MANOVA.)

Discussion (and maybe Introduction): Finally, and probably most importantly, I am a bit unsure about your interpretation of the main effect on trustworthiness following the experimental variation. You discuss two ideas, and only in the "future research" section: a) there is a preference for anecdotal evidence vs. other types of evidence, i.e. someone affected has some type of special access to a research topic, and b) disclosing being affected / transparency signals benevolence (by the way, in line with intentionalist models of communication).
Don't your results, when dividing the trustworthiness measure into expertise and benevolence/integrity, provide some (sure, provisional) evidence that allows you to dive deeper into these explanations in your argumentation? I believe your explanations of this effect could be argued more thoroughly. To that point, while I think it is alright that you only pose an exploratory research question in the introduction, here you actually almost only provide anecdotal evidence yourselves to argue that question (which you may want to be clear about).

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
When research is me-search: How researchers' motivation to pursue a topic affects laypeople's trust in science
PONE-D-20-38206R1

Dear Dr. Altenmüller,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Lynn Jayne Frewer, MSc PhD
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-20-38206R1
When Research is Me-Search: How Researchers’ Motivation to Pursue a Topic Affects Laypeople’s Trust in Science

Dear Dr. Altenmüller:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Lynn Jayne Frewer
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.