Peer Review History
Original Submission: September 24, 2021
PONE-D-21-30871
Test-retest reliability of the HEXACO-100
PLOS ONE

Dear Dr. Henry,

Thank you for submitting your manuscript to PLOS ONE. We invite you to look at the reviewers' suggestions and consider whether they could be used to improve the article. Please submit your revised manuscript by Dec 16 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Frantisek Sudzina
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study's minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please include your full ethics statement in the 'Methods' section of your manuscript file.
In your statement, please include the full name of the IRB or ethics committee that approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

5. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this paper. I see it as being both interesting and very important to researchers using the HEXACO-60 and HEXACO-100 personality inventories. I also really enjoyed the analysis of and discourse about the item-level data. I believe that this paper is likely to become the 'go-to' paper for people wishing to cite evidence of the reliability of these measures. Overall, I have only two relatively minor suggestions/thoughts.
1. If the authors have more information they can share about the sample (e.g., country of origin, education levels), I strongly encourage them to add this information to the Participants sections, if there is space. My thinking here is that these details may be important for future researchers who, for example, conduct a similar study but receive different results and wish to understand why.

2. After reading the third paragraph on page 13, which pondered the effects of items' contextualisation levels on reliability, I wondered whether contextualisation levels would be predictably positively associated with rTT but negatively associated with α. For example, whether a person likes poetry _today_ is probably very strongly associated with whether they like it tomorrow, next week, next year, and so on (high rTT). But it's not hard to imagine there would exist plenty of people who love poetry but are indifferent to, say, classical music or ancient ruins; thus the contextualised items contribute negatively to alpha. I concur with the authors' speculation that more generic/less contextualised items (e.g., a hypothetical item, "I like artistic things") may undermine rTT, for all the reasons the authors mentioned (e.g., what artistic things are they thinking about in that moment? Have they enjoyed/not enjoyed a recent artistic experience?). And I could see how such an item would positively influence alpha, as the generic item would represent, to some extent, any of the specific/contextualised items in the same scale. Anyway, these are just thoughts; I do _not_ insist that the authors include them in their revision.

Reviewer #2: Review PONE-D-21-30871: "Test-retest reliability of the HEXACO-100"

Many thanks for the opportunity to review this manuscript, which provides, once again, evidence that alpha reliability is a less optimal parameter than test-retest reliability. Based on my reading of the manuscript, I have a few suggestions and comments:

1. One of the open questions for me is what the optimal time period is for establishing test-retest reliability. The authors chose 12 days (please provide the mean and SD of the number of days, or even hours, between the two ratings, and please check whether the individual number of days has an effect on r(tt)!), but I'm not sure whether this is the optimal time period for personality questionnaires. That is, in the introduction and the discussion, I would like the authors to explain a bit more what kind of time frame would be best for establishing r(tt), based perhaps on memory research (which, of course, also shows large individual differences) and on the traitedness of a construct and the possible time frame for changes to occur.

2. With respect to the above time frame, the findings can also be used to comment on McCrae's (2015) approach to distinguishing trait, method, specific, and error variance components. McCrae notes that specific variance is obtained by subtracting alpha from r(tt), but in most cases this would yield a negative specific variance in the current study. As McCrae notes: "By definition, [...] specific variance in an item is not shared by other items in the scale, so it detracts from alpha. However, in retest designs, the same items, with the same specific variance, are readministered, and they may elicit the same response. Item-specific variance could thus account for the fact that retest reliability is greater than alpha, especially if we also assume that method variance is stable over short intervals." (McCrae, 2015, p. 2) That is, McCrae's formula implies that the time period between two measures of the same construct should depend on the specific variance (i.e., if there is more specific variance, the time period should be longer, because otherwise r(tt) is bound to be greater than alpha). I'd love the authors to comment on this. Note: I must admit there are notable problems with McCrae's approach, something that is long overdue being commented on. [A brief simulation sketch illustrating this decomposition follows this letter.]

3. I wondered about the criteria for establishing whether an item is a 'good' item. One could argue that both r(ca) and r(tt) are important, and not just r(ca). But how to weigh these is, to me, an open question. Logically, r(tt) is a necessary, but not sufficient, condition for r(ca) (i.e., a highly temporally stable item may not be observable, and thus have a low r(ca)), whereas r(ca) may be a sufficient condition for r(tt) (if items are really observable and there is high r(ca), by necessity there is a high r(tt)). But the question is whether you only want observability properties (or other properties aligned with r(ca), e.g., 'item domain'; see De Vries et al., 2016) in a personality questionnaire. I would love to see the authors make a statement about this in the discussion and maybe even suggest which (24? 48?) items would provide the most suitable short measure of the HEXACO-100 (with coverage of each facet) according to their criteria.

4. Last but not least, I would love the authors to make the title a bit more informative about the implications of the manuscript, especially with respect to the importance of test-retest reliability and the fact that alpha reliability should be used less often as a measure of reliability.

As a final note, please refrain from using the term 'internal consistency' and/or explain that it is a misnomer, because alpha does not measure internal consistency (with thousands of items, any scale has a high alpha but can have practically zero internal consistency). See Sijtsma (2009); just call it 'alpha reliability' or 'internal reliability'.

McCrae, R. R. (2015). A more nuanced view of reliability: Specificity in the trait hierarchy. Personality and Social Psychology Review, 19(2), 97-112.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Reinout E. de Vries

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
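[Editorial note: To make Reviewer #2's McCrae (2015) and Sijtsma (2009) points concrete, here is a minimal, hypothetical simulation sketch. It is not drawn from the manuscript or the review; the data-generating model and all parameter values (sample size, loadings, variance shares) are illustrative assumptions. It shows how stable item-specific variance lowers alpha but not r(tt), so that the difference r(tt) - alpha can recover it, and why a high alpha by itself is a poor index of "internal consistency".]

```python
import numpy as np

rng = np.random.default_rng(2021)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data-generating model: each item mixes a trait shared by all
# items, a *stable* item-specific component, and occasion-specific error.
n, k = 2000, 10
trait = rng.normal(size=(n, 1))      # stable, shared across items
specific = rng.normal(size=(n, k))   # stable, unique to each item
loading, spec_sd, err_sd = 0.7, 0.5, 0.5

def administer():
    """One occasion: trait and specific components recur; error is drawn anew."""
    return loading * trait + spec_sd * specific + err_sd * rng.normal(size=(n, k))

time1, time2 = administer(), administer()
alpha = cronbach_alpha(time1)
r_tt = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
print(f"alpha = {alpha:.2f}, r(tt) = {r_tt:.2f}, "
      f"McCrae-style specific variance = r(tt) - alpha = {r_tt - alpha:.2f}")

# Sijtsma's caveat: alpha grows mechanically with the number of items.
# Standardized alpha = k*rbar / (1 + (k-1)*rbar), so a huge item pool with a
# mean inter-item correlation of only .01 still yields a high alpha.
k_big, rbar = 1000, 0.01
print(f"standardized alpha for {k_big} items at mean r = {rbar}: "
      f"{k_big * rbar / (1 + (k_big - 1) * rbar):.2f}")
```

Under these assumed values, the scale's retest correlation comes out near .95 while alpha sits near .91, so r(tt) - alpha recovers roughly the .05 share of stable item-specific variance built into the simulation; the last line reproduces the point that 1,000 items correlating .01 on average still produce an alpha of about .91.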
Revision 1
Test-retest reliability of the HEXACO-100 - and the value of multiple measurements for assessing reliability
PONE-D-21-30871R1

Dear Dr. Henry,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Frantisek Sudzina
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-21-30871R1
Test-retest reliability of the HEXACO-100 – and the value of multiple measurements for assessing reliability

Dear Dr. Henry:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Frantisek Sudzina
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.