Peer Review History
| Original Submission (May 24, 2021) |
|---|
PONE-D-21-17063
Remapping the foundations of morality: Well-fitting structural model of the Moral Foundations Questionnaire.
PLOS ONE

Dear Dr. Zakharin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 05 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Peter Karl Jonason
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Review of “Remapping the foundations of morality”. After review, I think the paper requires revision. PLOS offers minor or major revision; I don’t know what those mean. As an editor, I stick with accept, revise, or reject. The strength of the paper is its empirical analyses, but it requires substantial re-framing and engagement with the literature. So, while I’m certainly supportive of revision, there’s some work to be done.
I think it is appropriate to disclose my identity, given I’ve contributed directly to this area, so as to be transparent about any potential COI (Pete Hatemi). When Kevin Smith and I began looking into MFT, we found it both elegant and interesting, and started with the idea of providing evidence for genetic influences and one causal path for genetic influences on political attitudes. The data came out the other side; almost none of what Haidt and co. proposed about the measures appeared to be empirically supported.

And here is where the current paper seems really out of touch with the literature, almost treating the question of the MFQ’s validity as an open question, when in fact, in dozens of studies, it has been shown to be invalid. So it is not true to say that “the predictive validity of the MFQ has been generally supported, especially in the domain of politics”. Rather, just the opposite. What has been supported are correlations between the two; that’s about it.

So, let’s go back to the beginning. Kevin and I were perhaps among the first wave of people to find that the MFQ does not reliably factor into 5 dimensions in 10 populations (Smith et al., which, by the way, your paper cites incorrectly), but at best 2, and that the structure differed by country (US and AU). This paper was not a critique, and in fact we had no interest in finding a different factor structure. I suggest a reread of it.
Then came Iurino and Saucier and many others who hit this more definitively (see Harper and Rhodes, and Davis et al., who find MFT is not valid across non-US/white populations, and just the other day Hadarics Márton’s paper); then the heterogeneity in MFTs within and across ideologies identified by Frimer; then the finding that the measure is not stable across time (2 years; Smith et al.); and then the findings that MFT appears to be caused by, rather than cause, the traits it is proposed to influence (for attitudes or political orientation see Hatemi, Crabtree and Smith; Kivikangas et al.; Ciuk; Everett et al.; Strupp-Levitsky et al. (Jost); and Márton and Kende, among many others). Finally, it’s not heritable (Smith et al.).

Given this, I think the paper needs to redo the front end. The above papers are not critiques, so it is a bit disingenuous to frame them as such. Rather, most of them were studying a trait of interest, and in so doing found out the predictor variable (MFQ) was garbage. If the current paper wants to restructure the MFQ, then it has to do two things. First, it has to meaningfully engage the 20+ papers that can’t replicate the proposed factor structure. The comparisons of fit in the paper seem highly selective. So, a clearer and more thorough review of MFQ/MFT’s serious shortcomings is needed to properly situate the paper and compare your factor structure with the 2-, 5-, 7-, and other outcomes. Simply ignoring the works above, or attempting to frame them as critiques on the side, is not good science in my view. It is not that a handful of scholars challenge MFT. It is that the most serious empirical explorations of the MFQ, with large samples and good data, find no evidence to support most of MFT’s claims regarding the measure. This can easily be done: read the papers, compare your approach and results, and place them in context. The second thing will take less legwork but a bit more thought.
What the current paper proposes is that Haidt’s MFT is simply wrong. One cannot simply restructure the MFQ into different domains without then updating the theory. A read of Haidt’s book here is critical: the MFQ is the proposed measure of Haidt’s MFT. It has specific logic, organizing principles, evolutionary roots, etc. that link the domains to each other and to the MFQ questions. Certainly one can simply data-drill, as Kevin and I did in the Human Nature paper, without any theory. In a third paper (Hatemi and Smith), we ran more EFAs on the MFQ just to satisfy a reviewer, to see if we could make the thing heritable, but that was in an SI. Here the paper is centered around offering a different factor structure for the data that conflicts with the theory as written. What are your competing hypotheses? What parts of your findings invalidate Haidt’s theory? What parts support it? What parts of Haidt’s theory require modification based on your results, and what parts require abandonment? This is important because, if one wanted to use this new formulation as a means to, let’s say, run different analyses (behavior-genetic ones, for example), then this should be the paper to address those questions. Otherwise it is simply a data exercise, which is not a bad thing, but limited in what it offers.

So, yes, I’d like to see this paper in print. My suggestions: read the lit and reframe the paper by engaging it more appropriately; mainly, it is not that there are critiques, but rather that Haidt’s theory and measure simply don’t hold up in a lot of the data. It is well established that the factor structure is not supported. In almost every large and nationally representative study it doesn’t work, though it seems to work with Haidt and Graham’s student and internet samples. So, the question is: what is the actual measure doing, what is the ideal factor structure, and, if it’s not what the theory proposes, what does this say about the theory?
Based on your proposed structure, is it still moral foundations?

One important and serious concern: you need permission to use other people’s replication data for anything other than replication. Posting data for replication is for replication. Here you are using replication data for novel purposes. Since you used my data and I was never asked, I suspect you did the same for others. It is an ethical question to take replication data in the manner you’re using it. The smart move here is to ask. It is very low cost, the price of an email, and usually results in goodwill. If you don’t, you risk your paper getting retracted. Not a smart play in my view. But risk is certainly a trait with individual differences, and variation in it is genetically influenced at that.

Feel free to take my comments, print them out, and use them for TP, or to improve the paper as you see fit. Hopefully they help. P

Reviewer #2: This is one of the empirically strongest papers on the Moral Foundations Questionnaire, with strengths that the paper itself accurately touts. But there are some caveats and weaknesses of which the authors seem unaware.

1. The samples are – except for Study 5, the smallest sample – based on predominantly Western respondents (in fact, predominantly Anglo-American). Page 16 claims that Study 1 identified a ‘reliable basic structure’, but this might be true only for certain populations. In general, the paper needs more caveats about potential inapplicability to non-Western populations. I don’t think we should have a few Chinese university attendees representing (standing in for) the entire non-Western world, as is done here.

2. With respect to the one non-Western sample, the paper fails to note important details (the sample was evidently almost 80% female) and there are some questions. What universities were the respondents from, and how Westernized were they?
Why was this particular sample (among all non-Western samples administered the MFQ) chosen, and did that involve cherry-picking the sample most likely to be supportive of the selected model?

3. One could also wonder whether there is some degree of cherry-picking of subjects in Studies 1-4. The description of the Study 1 sample does not rule out that these are all moral-psychology enthusiasts who volunteered. The Study 2 sample was highly motivated, having found their way to a website on moral psychology. The Study 3 and 4 samples were from MTurk, but might be called ‘cream of the crop’ MTurkers based on the selection criteria; that’s well and good unless it means these results are dependent on using ‘elite questionnaire-response professionals’ and are ungeneralizable to a typical population. What percentage of MTurk participants meet the criteria specified, and were any MTurk participants recruited but then eliminated (in order to make favorable results more likely)? (A general take-home message might be that the intended structure of the MFQ is fragile, hard to locate in your typical noisy data.)

4. The models employed are entered in a particular order that was apparently set a priori (is there a pre-registration to verify this?), but the question is raised whether, when all is said and done, one (or more) of them might be unnecessary. That is, perhaps by backward removal of the element adding least, a more parsimonious outcome might be reached. This is analogous to the forward versus backward methods in stepwise regression. Related questions: how does it make sense to enter a hierarchical model (which includes the five-factor model) before one enters the five-factor model, and what is the difference between the ‘two-factor model’ and the ‘binding and individualizing’ foundations in, for example, Table 8?

5.
Following that same thread: the conclusion (though unstated) would seem to be that one can use the MFQ, but must really analyze it or make sense of its scores in a complicated and onerous manner. That is perhaps a problem for construct validity. An important area of future implications would be this: what do these results suggest about how to construct a better moral ‘foundations’ measure (e.g., one that would not be so profoundly affected by method variance, or by the pull of an underlying two-factor model, of political leanings, and of acquiescence or general morality vs. amorality)? In other words, how can the measurement of moral foundations be cleaned up in these regards?

Smaller matters:

- The middle paragraph on page 26 leads to considerable head-scratching. The first sentence is incomprehensible. As for the second, why is social dominance picked out so prominently among all possible alternatives? What is the ‘alignment’ referred to there? It would seem that one interpretation of the results is that ‘tilt’ and binding-individualizing make independent contributions because one cannot reduce the latter entirely to tilt (as some treatments of the topic have implied in the past), there being both liberal and conservative ways of endorsing binding AND of endorsing individualizing morality.

- In Table 8 it is noteworthy that moving from model 3 to model 4 yielded a huge improvement in fit, which was not the case in other samples. Does this say something about how Chinese respondents characteristically handle morality, or this questionnaire?

- Page 13 identifies a set of three MFQ items and labels them as sanctity, but it is hard to apply this interpretation to the third item mentioned (about roles for men and women). One could just as easily label this factor as ‘traditional gender roles/expectations’ (that also would fit 2 of the 3 items).
- Pages 13-14 separate out a patriotism factor from a loyalty factor, but it would seem more informative to label one of them as loyalty to country and the other as loyalty to family/team/group (i.e., to smaller-scope entities). In reference to the same on page 24, it is implied that Herbert Simon differentiated patriotism from loyalty, but this seems unlikely given how Simon’s viewpoint is stated. Clarity needed.

- Until the very last part of the paper, there is a tendency to label one latent variable (created by constraining all items to load positively on it) as General Morality, although an equally plausible interpretation is Acquiescence. It would be better to mention both possible interpretations from the beginning.

- In Table 1, the “(modified)” notation for the best-fitting model from the Iurino and Saucier (2018) paper needs some explanation.

- The tendency to overestimate one’s own personality traits is (on p. 12) mischaracterized as a halo effect. Halo effect is more commonly used to refer to overestimation of someone else’s positive traits. Doing it to yourself is self-enhancement or social desirability bias.

- It seems that some text on pages 17-18 is redundant with what was said before in the paper; this needs a check.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments".
If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
| Revision 1 |
Remapping the foundations of morality: Well-fitting structural model of the Moral Foundations Questionnaire.
PONE-D-21-17063R1

Dear Dr. Zakharin,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Peter Karl Jonason
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #1: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: First, well done on asking to use others’ data; Kevin let me know you did. I’d suggest you list the grants that funded all of the data you used in the acknowledgements; whether you wish to thank the PIs is of course up to you.

I consider reviews to be suggestive. Ultimately it is up to the authors to decide what to put in their papers. I only give hard rejects when the analyses or the understanding of the literature are so wrong that there is no hope for a meaningful or valid contribution. Overall, I’m supportive of publication, as I was originally. That said, I am disappointed in the minimal revision made, and I think the paper undersells some major points. So yes, publish; it is a fine empirical paper. Whether you want it to be a better paper is always a choice between investing more time vs. just getting it out.

I do think it is a missed opportunity not to engage Haidt’s theory here. If your main takeaway is that the factor structure of the MFQ is not the 5 dimensions that Haidt argues for, and the items don’t fit as advertised into their subdimensions, then you achieve that. But despite valiant efforts, this means that half of MFT is not supported by the measures or data. This seems a rather important point, and one the paper appears unwilling to engage. If the factor structure is not valid, then the theory or measures are not valid. As it stands, there is now a mismatch between MFT and the MFQ. The current paper sidesteps this question, but in my view this should be the paper to actually engage it. A single sentence of “let someone else do it” seems both a missed opportunity and also a sin (well, Lindon would say that’s too strong a word, and I agree, but I don’t have his vocabulary, so going with it). But I’ll ask you this: what other paper would this be addressed in? Empiricism without theoretical engagement has value, but is limited.
Read the book, go at it straight on, remains my suggestion.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
| Formally Accepted |
PONE-D-21-17063R1
Remapping the foundations of morality: Well-fitting structural model of the Moral Foundations Questionnaire.

Dear Dr. Zakharin:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Peter Karl Jonason
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.