Peer Review History
Original Submission: March 17, 2021
PONE-D-21-08819
Testing the psychometric properties of the Swedish version of the EPOCH measure of adolescent well-being
PLOS ONE

Dear Dr. Maurer,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript if you are able to address the points raised during the review process. Part of the solution might be finding terms that better describe what was measured.

Please submit your revised manuscript by Jul 03 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Frantisek Sudzina
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please specify whether you obtained consent from parents or guardians of the minors who may have participated in this study.

3. Please note that there are several instances where the McDonald’s ω values are not formatted correctly and therefore appear as a blank box. In your revision, please ensure that these are replaced with the correct symbol. Thank you for your attention. We look forward to hearing from you.

4. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly.
For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: No
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I reviewed the manuscript entitled "Testing the psychometric properties of the Swedish version of the EPOCH measure of adolescent well-being" for PLOS ONE (PONE-D-21-08819). I read the manuscript word-for-word and have provided comments and suggestions that I hope will improve the manuscript. I am a clinical psychology post-doc at Kent State University with expertise in well-being measurement. My name is David Disabato and I can be contacted by the authors with any questions about this review (ddisab01@gmail.com).
Introduction:

1) The authors refer to the reliability of factors at various points in the introduction section. Reliability refers to observed scores and not latent factors; latent factors have (theoretically) perfect reliability. For example, the authors say on page 5, “They found that the five-factor model was reliable even across different time points, and each factor had high internal consistency”. Instead, it should say something like, “The five EPOCH subscale scores were found to have high internal consistency”.

2) I am not sure what the authors mean when they say “… but its consistency over time was low with a range from r = .12 to r = .21.” Consistency can mean different things in psychometrics: How is that term being used here? Are those correlations test-retest coefficients? If yes, how much time had passed between repeat assessments?

3) I thought it was strange that the authors were referring to specific criterion questionnaires in the introduction section. Personally, I think it makes more sense to talk about constructs (e.g., adverse mental health) in the introduction section, and wait to talk about specific measures until the measures section (e.g., DASS-21). Obviously, for a psychometric paper you do talk about the measure of psychometric interest in the introduction. But I think the criterion measures should wait until the methods section.

Methods:

4) I would like to see more information about study recruitment and administration. How were the 852 students from these 3 schools recruited? Was the study encouraged by the school administration? What percentage of the total student body at each school chose to participate (I assume not everyone volunteered)? It says the surveys were distributed via email and completed online. How many students from each school were emailed the online survey – all of them or only a subset? Were the surveys administered at school or at home? How long did it take participants on average to complete the full survey?
5) In the EPOCH part of the measures subsection, the authors should include a sentence telling the reader that the internal consistency of the EPOCH in this sample will be reported in the results section. As the reader, I am used to seeing the internal consistency reported in the measures section and I was confused by it not being there.

6) I would recommend placing the procedures subsection first in the methods section as it lays the foundation for understanding all other aspects of the study methodology. As the reader, I was left confused after the participants section wondering where the information about the procedure was going to be reported.

7) The type II error rate for the chi-square test of model fit is a problem in small samples, not large samples. I think the authors mean to say that the chi-square statistic tends to reject even great-fitting models with negligible misfit in large samples.

8) The authors should provide a definition of McDonald’s omega for the reader and explain how it differs from Cronbach’s alpha, as some readers may not be familiar with that index of internal consistency.

Results:

9) The Little MCAR test cannot determine if the data were truly missing completely at random. It can only tell you if the *measures in your dataset* are related to missingness, but that says nothing about the infinite number of other variables not measured in your study. You can certainly report the results of Little’s MCAR test, but I think the authors should replace the statement that says “Thus, the data were missing completely at random” with something like “Thus, the missing values were not related to any variables in our study”.

10) Since there is almost no missing data (<.01%), preliminary analyses about missing data don’t seem very important. Instead, I would like to see correlations, means, and standard deviations of the subscale scores as a Table 1 (as currently a Table 1 does not exist in this paper for some reason).
I noticed the authors report the means and standard deviations in Table 2, so I would then move them to this new Table 1. I think it would also be worth including a supplemental table of the EPOCH item correlations, means, and standard deviations, as those could be used by other researchers to replicate and extend the CFA analyses the authors have done; however, I will leave that up to the authors.

11) How the authors report the chi-square difference tests of model comparisons is confusing to me. The chi-square difference values appear at first to be the alternative model’s chi-square statistic, and I only slowly realized that those were the chi-square difference values. I think it would be easier for the reader if the authors first presented the chi-square statistics (and model fit indices) for each model and then followed that up with separate sentences about model comparison and the chi-square difference tests.

12) It is not clear what the coefficients are in Figure 1. I assume they are standardized factor loadings and latent correlations, but that needs to be explicitly stated in the caption.

13) If the authors are going to use the EPOCH total score, then they really should be discussing the higher-order factor model as an alternative, acceptably fitting factor model, as that is the factor model behind the total score. Right now, the results and discussion are written as if the higher-order factor model was rejected, which then implies a total score would not be used – only subscale scores. However, that is not what the authors do when moving to criterion validity. Therefore, the authors either need to talk about the higher-order factor model as an alternative, acceptable factor model or remove the analyses with the EPOCH total score. (It does make sense to me to have an EPOCH total score given that the internal consistency of the total score was very high.)
14) The authors don’t need to repeat the Cronbach’s alphas and McDonald’s omegas in Table 2 if they already have them in the text. I would put them in one or the other location, but there is no need for both.

15) There are some typos in the notes for Table 2.

Discussion:

16) Again, there is psychometric language about factors that should be about subscale scores. For example, on page 13: “The factors also showed good criterion validity”. The criterion validity analyses were done with the EPOCH observed subscale scores, not the latent factors in a full SEM, so the language in the discussion should refer to subscale scores, not latent factors.

17) I am not sure why the authors refer to optimism as a “positive emotion-based subscale” in the discussion section. Optimism is a cognitive construct by almost every definition I have read, and the items in the EPOCH all refer to cognitive beliefs and not emotions. They need to either clarify what they mean or remove this statement.

18) The authors should remove the reference to “mood disorders” when talking about the DASS-21 in the discussion on page 14. Both depression and anxiety are sometimes referred to as involving problems with affect or mood, but the term “mood disorder” is reserved for depression, not anxiety, in the mental health literature.

19) I find it strange that the authors are interested in showing no gender differences across the EPOCH subscales. Why is that important to the psychometrics? It would be important to show measurement invariance across gender in the factor models, but a gender difference for the subscale scores doesn’t say anything about the psychometrics of the questionnaire. Especially since there is evidence for gender differences in well-being across measurement methods (e.g., women tend to be higher on both negative emotions and positive emotions; Nolen-Hoeksema & Rusting, 1999). Nolen-Hoeksema, S., & Rusting, C. L. (1999). Gender differences in well-being.
Well-being: Foundations of hedonic psychology, Chapter 17.

20) The same argument goes for age. If the authors are concerned about the EPOCH being psychometrically biased due to gender or age, then they should test for measurement invariance. For example, Kern et al. (2016) tested for measurement invariance across gender. I noticed the authors cite Tim Brown’s CFA book – chapter 7 goes over measurement invariance across demographic groups if they are interested in doing this. Unless the authors test for measurement invariance across age, they cannot state that “they are invariant with age among adolescents and changes in scores reflect changes in psychological functioning rather than changes in maturation” (pg. 14).

21) I think the authors need to point out that a limitation of the criterion validity tests is that all criteria were measured with the same method as the EPOCH measure (i.e., self-report). Therefore, there is a good chance that the criterion validity correlations are over-estimates of validity, inflated by shared method variance. Future research should include non-self-report criteria.

Reviewer #2: Thank you for the opportunity to review the manuscript, “Testing the psychometric properties of the Swedish version of the EPOCH measure of adolescent well-being.” The purpose of this manuscript was to translate the EPOCH measure into Swedish. Participants were 846 Swedish adolescents who completed a self-report questionnaire in May 2020.

Strengths: I appreciate research efforts to translate English measures into other languages to increase access to different populations. Psychological science certainly has an issue of recruiting diverse samples, including studies that involve languages other than English. The sample size was sufficiently large and the analyses are straightforward. Nonetheless, this manuscript has noteworthy limitations. These are detailed below.

1) Several broad, sweeping assumptions are made in the introduction.
The authors describe “hedonia” and “eudaimonia” as two distinct forms of well-being. However, there are now several studies from different research groups showing that these two types of well-being, at least as we currently measure them, overlap to quite a high degree, with some people suggesting they represent the same type of well-being. At a minimum, the authors should acknowledge that there is considerable debate about whether or not these represent two distinct forms of well-being and present the relevant factor-analytic work.

2) Hedonic well-being is described as “emotional well-being” and eudaimonic well-being is described as “well-being resources not tied to emotions.” There are three issues with this conceptualization (in addition to the issue noted above). First, the widely adopted model of hedonic well-being is Diener’s subjective well-being (SWB) model, which is composed of the presence of positive emotions, the lack of negative emotions, and life satisfaction. Diener and others have amassed a corpus of work demonstrating the utility of this model. (And in the introduction of the present study, the authors use Diener’s model to describe hedonia.) Yet, negative emotions and life satisfaction are surprisingly absent from the EPOCH measurement model. Second, it is incorrect to claim that eudaimonic well-being is “not tied to emotions.” For example, Laura King has shown that meaning in life, arguably a core component of “eudaimonic well-being” (although also surprisingly absent from the EPOCH measure), is correlated with (and often influenced by) positive affect. Can a “type” of well-being really be devoid of or distinct from emotions?
Third, the lack of a strong, clear, and cohesive definition of eudaimonic well-being opens up the possibility for any “positive” or desirable psychological variable, as judged (often arbitrarily) by the researcher, to be called “well-being.” For example, why was environmental mastery from Ryff’s widely adopted psychological well-being (PWB) model excluded from the EPOCH model presented here? A 2016 review identified 99 self-report measures of “well-being” containing nearly 200 different dimensions. Without concise definitions, theoretically informed models, and some consistency in measurement, the term “well-being” runs the risk of becoming meaningless.

3) A related but separate point: I am struggling to see how optimism and perseverance are components of well-being. Decades of research would suggest that these are personality traits.

4) P. 4: “However, Kern et al. argued that measuring well-being in adolescents should be based on well-being categories that are less abstract than those in the PERMA profiler (14), which was more suitable for adults.” Can the authors clarify what is meant by “abstract” and elaborate on which constructs were replaced and why? I do not entirely follow the logic.

5) The EPOCH measure is based on the PERMA collection of variables, for which there is limited empirical support. To date, few studies have examined the factor structure of PERMA, and some have found that it highly overlaps with other, more well-established models. A stronger rationale is needed for the selection of this variable set.

6) The five components of the EPOCH measure are described as “flourishing” in the results section but as “well-being” elsewhere in the manuscript. Please use consistent terminology to prevent confusion.

7) The race/ethnicity breakdown of participants is missing from the demographics section.

8) P. 15: “We might have included more positive well-being indicators in this study to obtain greater nuance with the criterion validity.
However, since the data were based on a larger project, only a couple of comparison indicators were collected.” I appreciate the authors being forthcoming about this limitation. However, it does raise significant questions about the validity of this measure. It seems that, at a minimum, an assessment of criterion validity should include a comparison with another measure of well-being.

9) I do not see Table 1, only Table 2. Based on the text, it seems that Table 1 would include the bivariate correlations?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: David Disabato
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
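As background to Reviewer #1’s point 8 above (the request to define McDonald’s omega for the reader): a standard formulation, assuming a unidimensional congeneric factor model with standardized loadings \(\lambda_i\) and residual variances \(\theta_i\) and uncorrelated errors, is

```latex
\omega \;=\; \frac{\left(\sum_{i} \lambda_i\right)^{2}}
                 {\left(\sum_{i} \lambda_i\right)^{2} + \sum_{i} \theta_i}
```

Under this model, Cronbach’s alpha equals omega only when all loadings are equal (essential tau-equivalence); when loadings differ, alpha is a lower bound on the reliability that omega estimates.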
Revision 1
Testing the psychometric properties of the Swedish version of the EPOCH measure of adolescent well-being
PONE-D-21-08819R1

Dear Dr. Maurer,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Frantisek Sudzina
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-21-08819R1
Testing the psychometric properties of the Swedish version of the EPOCH measure of adolescent well-being

Dear Dr. Maurer:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Frantisek Sudzina
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.