Peer Review History
Original Submission: March 27, 2025
Dear Dr. Janelt,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Thank you for your manuscript. I have invited three very professional reviewers, who have indicated several areas for revision. Please carefully address these, and I will be happy to consider your revised paper for publication.

==============================

Please submit your revised manuscript by Aug 08 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Paweł Larionow, Ph.D.
Academic Editor
PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. We note that there is identifying data in the Supporting Information file <Supporting Information.zip>. Due to the inclusion of these potentially identifying data, we have removed this file from your file inventory. Prior to sharing human research participant data, authors should consult with an ethics committee to ensure data are shared in accordance with participant consent and all applicable local laws. Data sharing should never compromise participant privacy. It is therefore not appropriate to publicly share personally identifiable data on human research participants.
The following are examples of data that should not be shared:
- Name, initials, physical address
- Ages more specific than whole numbers
- Internet protocol (IP) address
- Specific dates (birth dates, death dates, examination dates, etc.)
- Contact information such as phone number or email address
- Location data
- ID numbers that seem specific (long numbers, include initials, titled “Hospital ID”) rather than random (small numbers in numerical order)

Data that are not directly identifying may also be inappropriate to share, as in combination they can become identifying. For example, data collected from a small group of participants, vulnerable populations, or private groups should not be shared if they involve indirect identifiers (such as sex, ethnicity, location, etc.) that may risk the identification of study participants. Additional guidance on preparing raw data for publication can be found in our Data Policy (https://journals.plos.org/plosone/s/data-availability#loc-human-research-participant-data-and-other-sensitive-data) and in the following article: http://www.bmj.com/content/340/bmj.c181.long.

Please remove or anonymize all personal information, ensure that the data shared are in accordance with participant consent, and re-upload a fully anonymized data set. Please note that spreadsheet columns with personal information must be removed and not hidden, as all hidden columns will appear in the published file.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3.
Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

Reviewer #1: I would like to thank you for inviting me to review this manuscript. I have read the article carefully. While it covers an interesting topic, some concerns lead me to recommend major revisions. My reasons for this decision are as follows:

ABSTRACT

The title mentions the nomological network, but the abstract does not explicitly address it. Including a brief explanation of how the ECS fits within the broader theoretical framework or its relationships with other constructs would enhance the depth of analysis.

While factor models are mentioned, a brief explanation of the dimensionality and what specific dimensions are being assessed could provide clearer insights into the scale's structure.

Adding a sentence about the practical implications or applications of the ECS-Total and ECS-Short could help readers understand the significance of the research beyond its theoretical contributions.

More details on the selection process and rationale for the items included in the ECS-Short could be beneficial, especially considering its development is a key aspect of the study.

INTRODUCTION

While the introduction provides a solid foundation, it could benefit from a deeper exploration of prior studies concerning the ECS's psychometric properties in different cultural contexts to strengthen the rationale for the current study.

The introduction could clarify the distinction between ECt and other components of empathy more explicitly, potentially by providing more examples or discussing the implications of these distinctions in greater detail.
The introduction briefly mentions the nomological network but does not delve into what this entails specifically in the context of emotional contagion. Expanding on this concept could provide a clearer understanding of its relevance and importance.

Including references to relevant interdisciplinary research (e.g., from neuroscience or sociology) could deepen the analysis and demonstrate the broader implications of studying emotional contagion.

DISCUSSION

While the discussion provides a good analysis of the ECS, it could benefit from a more detailed comparison with other emotional contagion measures or related scales to highlight its unique contributions or advantages.

The discussion could be enhanced by elaborating on the broader implications of the findings for the field of psychology, particularly in understanding emotional contagion and its measurement. Including potential practical applications of the ECS and ECS-Short in various settings (e.g., clinical, organizational) would extend the discussion's relevance.

While the analysis of dimensionality is thorough, further clarification on the theoretical justification for choosing the bifactor model over others could strengthen the argument. The discussion could integrate more recent studies or theories to support or contrast the findings, providing a more comprehensive literature backdrop.

Reviewer #2: Overall. Thank you for this very fascinating series of emotional contagion scale (ECS) studies, which may offer significant contributions both theoretically (factor structure and short form) and practically (a German adaptation). Most notably, the bifactor model fit and the mimicry-item short form are theoretically compelling. Also, confirmation of emotional contagion as a 'trait' adds to the theoretical database. Practically, having documented a German-language adaptation may be of practical value to clinical and research practitioners.
Below I offer some thoughts about your research and manuscript, which you may consider for revision.

Title. The full title is excessively descriptive and yet inadequately describes your research. Your research is very extensive, so describing it in a title seems challenging. Given the theoretical value of your studies, not to mention the German adaptation, I recommend using the short title, emphasizing 'validation', a 'German' version, and 'a brief form'. Also, the full title refers to 'nomological network', which I think may get lost in the manuscript, as you do not identify any of your studies as such.

Keywords. The keywords are ideal for identification of the studies.

Abstract. The abstract provides research details (theoretical purpose, design, findings), but presumes some valuable implication. Though limited with respect to words, a comment about theoretical and practical value may enhance the abstract, even if it requires abbreviating other parts.

Introduction. Overall. The theoretical framework is clearly provided. Essential constructs and variables are identified along with the purpose(s) of the current research.

1. Comment. When introducing the current research, the section 'The Present Studies' states the purpose is ". . . aimed at validating the German version of the ECS, including its factor structure, nomological network, longitudinal measurement invariance, and temporal stability." Developing and validating a brief version is described as a second purpose. In your introductory paragraph you reference and describe 'three studies'. You then proceed with presentation of each study 1, 2, and 3, each complete with statement of purpose, Method, Measures, Results, and Discussion.

2. Suggestion. The presentation of these studies as described above is confusing. There are 3 studies, each essentially valuable theoretically, yet the actual purpose of each is not clear until the reader has completed reading the entire series of studies.
I suggest reconsidering the presentation of the studies and corresponding results. (a) Prior to presenting the actual studies, in the general introduction to the studies (The Present Studies), briefly make clear what each of the three study purposes is. Maybe have each study be subheaded with the purpose and design/methods. Then have a Results section, with subheaded studies, each with a brief restatement of purpose followed by the results. Then have a Discussion section, again with each study subheaded with the corresponding discussion. (b) One confusion pertains to your summary tables, e.g., Table 1 and Table 2, each of which has the results for the different studies, though they are presented within Study 1. My suggestion is basically a reorganization of the studies such that studies 1, 2, and 3 are nested with (a) purposes and methods, (b) results, and (c) discussion.

Methods. Overall. Your methods are generally complete with ample description of the models and modeling procedures.

1. Recommendation. While you may be applying generally accepted procedures used in cited research, consider offering a citation of standards for language adaptation of clinical measures and corresponding research designs, e.g., the International Test Commission (ITC) standards for measurement language adaptation: https://www.intestcom.org/.

2. Recommendation. Study 1 forms used a 4-point Likert response scale; Study 2 forms expanded the response scale to a 5-point Likert scale. This is a nontrivial change to the forms, one which may go beyond simply obtaining higher internal consistency. At the least, some discussion may be needed to explain the process underlying this finding. Furthermore, changing the response format may in fact change the measured construct. Fitting an item response model to the data to understand the response process would be very valuable, particularly for the brief form of mimicry items.
Note, I am not necessarily suggesting this for this set of studies as a revision, but without question, more attention is called for.

3. Comment. For the convergent and discriminant validity studies, beyond a visual inspection of the measurement correlation matrices with respect to the hypothesized covariances, a more rigorous structural equation model should be fit for hypothesis testing. On the other hand, given the obtained data, fortunately for the researchers, the 'interocular traumatic test' suffices, i.e., you know it when it hits you between the eyes. Again, however, an SEM model could have been fitted to the matrices to test the hypotheses. One of countless relevant citations is: Raykov, T. (2011). Evaluation of convergent and discriminant validity with multitrait-multimethod correlations. British Journal of Mathematical and Statistical Psychology, 64, 38-52.

4. Recommendation. Along with validity results, I recommend always providing point reliability estimates with associated standard errors. These would be estimated as parameters using structural equation modeling procedures.

Results. Overall. Results from the analyses per study are well described. As noted above, the presentation is sometimes confusing and required several readings to fully appreciate the purpose and results of the series of studies. The abbreviated 'brief' form is most interesting and from some perspective may constitute the most fascinating and certainly original contribution.

1. Suggestion. Table the results for each study independently, as described above.

Discussion. Overall. There are many findings to be discussed. You return to your research purpose with interpretation of your findings.

1. Comment. The long title mentions nomological network, and while the validity studies may be intended to complete this inquiry, the findings are not discussed as such, i.e., as a nomological network.
This would constitute an important theoretical contribution, and the findings should be conceptualized in this term.

2. Recommendation. Highlight the brief form as an original contribution.

3. Comment. The implications of psychometric research are sometimes treated as 'needless-to-say'. In this case, you have much to say about implications and should include them.

Writing. Overall. This is a well written manuscript, and it was a pleasure to read. Thank you.

Reviewer #3: Review
Journal name: PLOS ONE
Manuscript number: PONE-D-25-13429
Manuscript title: Assessing emotional contagion: Dimensionality, nomological network, longitudinal invariance, and 1-year stability of the German emotional contagion scale and development of a brief version
Date received: May 22, 2025
Date of review: June 23, 2025
Suggestion: Major revision

Comments to the Authors

Thank you for the opportunity to review this manuscript! The manuscript seems promising, and especially the development of the presented brief scale could be interesting for future studies. However, it is a very complex paper with a very large number of reported findings and potential limitations. Thus, it seems necessary that the writing and reporting be particularly consistent, transparent, and understandable! Overall, I see a number of problems that could be addressed in a major revision.

Title: Emotional contagion represents an interpersonal process between two or more individuals. The ECS aims at assessing an individual's susceptibility to emotional contagion. Hence, "assessing emotional contagion" is misleading and incorrect. Generally, the name of the ECS scale was misleading from the beginning (it does not measure the process of emotional contagion, even if Doherty says so). Further, the reported scale does not contain all items of the ECS. For these reasons, I am not convinced that the reported scale should adopt the initially misleading name ECS.
The short version, thus, does not aptly represent a short version of the original ECS, but instead rather seems to represent a "mimicry version" of your modified ECS (German version), which could be an interesting addition for future studies.

Abstract: To me it seems that a lot of terms are brought up here that are used ambiguously in different areas of research (mimicry, synchronization). For the purpose of clarity, it seems helpful to stick to the central topic of the manuscript (susceptibility to emotional contagion) and prevent the misconception of assessing emotional contagion with the ECS. It seems unusual to use abbreviations in the abstract. To avoid misunderstandings, better report the sample sizes of the three studies separately. The findings on the models' fit are misleadingly reported. The fit indices did not support "satisfactory fit" for both the 1F+CE and the BF model.

Introduction: Again, emotional contagion represents an interpersonal process between two or more individuals. This process should be clearly delineated from an individual's susceptibility to emotional contagion, which seems to be what you aim to investigate in this paper. Please review your use of emotional contagion throughout the manuscript and be more precise about the concept you are investigating. Further, the conceptual ambiguity of the concept of empathy should be discussed, and both emotional contagion and susceptibility to emotional contagion should be delineated from empathy and/or different empathy subfacets/concepts. You are almost exclusively citing Hatfield and colleagues. Other references should be included on emotional contagion and susceptibility to emotional contagion. Mimicry has been conceptualized very differently in psychological research regarding socio-emotional processes (e.g., Hess or Dimberg), which should be discussed alongside Hatfield's use of the term.
Explain/justify why you use a model from the context of psychopathy and not a more general model, given the non-clinical nature of your samples. Throughout the introduction, many terms are brought up but not defined, e.g., empathic concern, emotion awareness. This makes it hard to follow for the reader. Consider briefly defining these concepts.

The selection of discrete emotions in the ECS should be discussed more critically. They do not represent any commonly reported model. Discuss general critique regarding discrete emotion models. Discuss the ambiguity and overlap of some of the items, e.g., fear with stress. Other self-report measures of an individual's susceptibility to emotional contagion should be discussed, e.g., emotion-specific or positively-negatively valenced scales.

The use of a very large number of abbreviations makes it hard to read and follow, e.g., EC, PT, FT, IRI, etc. Consider dropping the abbreviations and writing the full words instead.

Explain/justify why you still consider your scale(s) a version of the ECS after excluding 3 items based only on your theoretical (but very reasonable) critique of the love items. Can your scale really be called an ECS scale then? Explain why you kept the rest of the discrete-emotion items. The ECS short rather seems to be a subscale entailing mimicry items and not just a short version. Overall, the idea of a short ECS mimicry scale is interesting, e.g., to assess a more salient and observable aspect of the complex process of emotional contagion, but this scale could thus be more aptly called some sort of mimicry version of the modified ECS (without the love items), e.g., susceptibility to mimic emotional expressions.

Study 1: Explain why the TEQ was not used in study 1. Alpha = .05 seems to be too high given the number of correlations, due to the problem of multiple testing and alpha inflation. Clarify whether power analyses are in the supplement; if not, please add them.
Add example items to the descriptions of the measures (in all studies and for all measures). Be consistent in using either joy or happiness (e.g., page 11, line 233). Why did you use only one subscale of the FAB scale?

For model comparison, the AIC should be used. The analyses, results, and discussion should be revised accordingly. In general, be more detailed and explicit about the standards for fit indices you are applying. Specify the cutoffs for the fit indices more precisely (with references). Discuss the model fit parameters of the models 1F and 1F+CE more thoroughly and more critically. The TLI for the BF model is below the cutoff for a good model fit (.95) according to Hu & Bentler (1999); this should be stated and discussed more clearly! The fit of the ECS short is not acceptable; this should be stated and discussed more clearly. Alpha and omega for both scales are low; this should be discussed. Provide references for the claim that a response scale comprising 4 answers can/should be considered ordinal.

Correlations should be summarized in the text; "largely in line" is not sufficient here. The table's format is crooked and out of line. What is the last column? In the sample description you say 192 individuals filled out the survey, but here you report up to 301 for study 1. Which number is correct? The title of the table says studies 1 and 3, but the column "study" says 1 and 2. Were the measures normally distributed? Please report more detailed descriptive statistics for all measures in all studies.

I am having a hard time with the not-preregistered "predictions". Explain and justify why the analyses of study 1 (and study 2) apparently were not preregistered. How do you tackle the uncertainty that arises from the fact that these analyses were not preregistered? This needs to be discussed.

Study 2: Explain more precisely why nursing staff was chosen, e.g., also the very high share of women. Explain/justify why data from 2016 is used.
Was the 2016 study also approved by the ethics committee? Clarify the intervention conducted in the overarching project. Add sample information for samples 2 and 3 (e.g., gender, age, etc.) and consider using a table for all three samples. Clarify (for all studies and all measures) whether only the answers 0 and 4 (or 5) were presented to the participants with words, or all answers (i.e., 0, 1, 2, 3, 4, 5). Please provide all answer options that were presented.

Again, for model comparison, the AIC should be used. The analyses, results, and discussion should be revised accordingly. Again, the TLI is below the cutoff for a good model fit (.95) according to Hu & Bentler (1999); this should be stated and discussed more clearly! Discuss the TLI value > 1 for the ECS short in study 2. Were ordinal alpha and omega calculated here? Explain why you consider 4 response options ordinal and 5 continuous (enough). Summarize the descriptive statistics and psychometric properties in the text and report them in a table instead of the supplement.

Discuss the model fit of the configural invariance model more thoroughly (RMSEA and CFI are not good) and also report SRMR and TLI. Justify your choice of delta CFI as the index for invariance (why not delta chi-square or delta RMSEA?). Be consistent in your use of the terms, e.g., residual or strict invariance, in the text and tables. Be consistent in how you report "rel. int. 05, ...", either in parentheses or after a comma. Justify why it seems more acceptable to you to modify the original response scale (4 options) than to keep it, given that internal consistency only "increased slightly".

Study 3: Does the ethics approval only pertain to study 3? Were studies 1 and 2 also approved by the ethics committee? Please clarify. Explain why the preregistration was registered on April 8 while the data collection started on April 4, 2024, although it says "preregistered before data collection" in the text.
Discuss the large share of women in the sample (in all three studies). Consider conducting sensitivity analyses for all studies regarding participants' gender. Clarify whether all response options were presented to the participants with words or only the extremes (for all measures). Add example items for all measures in all studies. Again, a TLI below .95 is not excellent. Report the AIC and compare the models accordingly (for all studies and model comparisons). In the preregistration it says that one item was reformulated and tested. I can't find anything on that analysis step in the manuscript. Please clarify. The potential influence of low alpha values (e.g., as low as .19) on the correlations should be discussed. Better report and discuss confidence intervals for all correlations in all studies. Overall, consider summarizing results and discussion in a combined "Results & Discussion" section (for all studies).

General Discussion: The dimensionality section of the discussion needs to be revised: The model fit of the 1F+CE model was not "close to the threshold for a good fit in studies 1 and 3", and the TLI was not excellent either in study 2. The model fit of the BF model was only excellent in study 2, but not in studies 1 and 3 (TLI < .95). The findings should be discussed accordingly. The model comparisons should be based on the AIC. The results should be reported transparently and discussed accordingly.

In the convergent and discriminant validity section of the discussion, a concluding remark is missing regarding your evaluation of the scales' validity. The longitudinal analyses section of the discussion might need to be revised depending on the revised findings, especially regarding the missing model fit indices. Emotional contagion is not a personality trait (line 631). It is an interpersonal process. What you are investigating is an individual's susceptibility to emotional contagion. In line 632, "Hatfield et al." seems to be wrong.
The limitations section of the discussion is very brief: Discuss the risk of alpha inflation due to multiple testing for all studies more thoroughly. Discuss the samples' homogeneity in all three studies more thoroughly (women, profession, etc.). Discuss the different time points of the three studies with respect to the COVID-19 pandemic more thoroughly (before, during, and after the pandemic) and discuss the potential impact of the time point of study 2 (2016) being 6/8 years before the time points of studies 1 and 3. Discuss the lack of preregistered analyses in studies 1 and 2. Include practical implications of your research in the discussion.

Language and Style: Some commas are missing. Sometimes blank spaces are missing between parentheses. APA level 2 headings are written in title case. Paragraphs should be indented consistently. Sentences should not start with numbers.

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Paul Yovanoff
Reviewer #3: Yes: Anton Marx

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
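Several comments in this review round ask for model comparison via the AIC and justify measurement-invariance decisions by a delta-CFI criterion. As a minimal sketch of how these two indices are computed (all numeric values below are hypothetical placeholders, not results from the manuscript under review):

```python
# Illustrative sketch only: AIC for model comparison and delta-CFI for
# measurement invariance. All values are hypothetical placeholders.

def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike Information Criterion: lower is better; penalizes
    model complexity via the number of free parameters."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Hypothetical fitted CFA models: (log-likelihood, free parameters).
models = {
    "1F": (-4200.0, 24),     # one-factor model
    "1F+CE": (-4150.0, 30),  # one factor with correlated errors
    "BF": (-4100.0, 36),     # bifactor model
}
for name, (ll, k) in models.items():
    print(f"{name}: AIC = {aic(ll, k):.1f}")

def invariance_tenable(cfi_less_constrained: float,
                       cfi_more_constrained: float,
                       threshold: float = 0.01) -> bool:
    """Cheung & Rensvold (2002): a CFI drop of more than .01 between
    nested invariance models suggests non-invariance."""
    return (cfi_less_constrained - cfi_more_constrained) <= threshold

print(invariance_tenable(0.962, 0.958))  # prints True (drop of .004)
```

In an actual analysis these quantities would come from the fitted SEM software rather than hand-entered values; the sketch only shows the arithmetic behind the decision rules the reviewers reference.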
Revision 1
Dear Dr. Janelt,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 25 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Paweł Larionow, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2.
Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: (No Response)

**********

Reviewer #1: The authors have addressed all the reviews and comments, and I believe that the paper, in its current form, is of sufficient quality to be published in the journal PLOS ONE.

Reviewer #2: Overall. Thank you for the careful and extensive response to reviewer comments. As with the initial submission, I find your research very interesting and quite likely valuable theoretically and practically. I appreciate the theoretical framework and justification for the three research studies. Overall, the research in my opinion is measurement development and validation, with strengths and weaknesses. But the missing analyses do not render your findings and interpretations indefensible. I do have lingering comments and questions regarding (a) formatting and (b) unsubstantiated interpretations.

Title. Comment. The full title is excessive; the short title is preferred and sufficiently accurate.

Keywords. Comment. Because your measures focus on 'mimicry', I would consider including 'mimicry' as a keyword. Rather than validity, reliability, and factor structure, consider dropping 'factor structure' and/or replacing all terms with 'psychometrics'.

Abstract. The abstract is sufficiently comprehensive. Reference to three studies is made, but only Study 2 is mentioned specifically. The acronym 'sECS' is used but not defined.
1. Recommendation. Do not mention any of the three studies specifically. Consider not mentioning specific findings, e.g., "The correlation pattern . . . and longitudinal invariance." Perhaps it suffices to state that three studies using psychometric modeling and construct validation procedures concluded emotional contagion is essentially unidimensional, with secondary dimensions. Using these findings . . . a short 'mimicry' scale was developed and validated. And so on . . .

Introduction. Overall. The theoretical framework is clearly provided. Essential constructs and variables are identified along with the purpose(s) of the current research.

1. Comment. You start with the constructs 'emotional contagion' (and components including 'mimicry'), 'empathy', and 'susceptibility to emotional contagion'. It is the susceptibility to emotional contagion as a personality trait that underlies the argument that measurement of 'mimicry' is meaningful.

2. Comment. The transition to a measure of 'mimicry' as though it is a measure of 'susceptibility to emotional contagion' is critical. This is made most clear in two places. First, you state, "Specifically, SEC can be defined as "the tendency to automatically mimic and synchronize expressions, vocalizations, postures, and movements with those of another person's and consequently, to converge emotionally" (lines 121-123). Second, the section 'Content examination of the ECS items' explains, "The ECS contains items tapping the basic process of mimicry." (line 207).

3. Suggestion. Make clearer why 'mimicry' measurement is relevant, if not identical, to susceptibility to emotional contagion.
Perhaps the statement above (comment 2) intends this, but the transition to 'mimicry' as the focal measure is lost when you persist with the SEC scaling discussion in 'The present studies' section. Regarding your selection of mimicry items, you state, ". . . they capture mimicry, the first state of SEC, which can be considered to reflect SEC in the most basic way." (lines 271-272). You do further explain the significance of mimicry, but it may be helpful to do this in the theoretical framework when justifying development of a 'mimicry' measure.

The Series of Three Studies Collectively.

Methods. Overall. Your methods are generally complete with ample description of the models and modeling procedures.

1. Comment. Study 1 uses the four-point Likert scale, Studies 2 and 3 use a five-point Likert scale as described, and a sum of item responses is the total scale score for both the ECS and Mimicry measures. This modified item response scaling is critical for reasons you mention. One analysis you did not do, which would be interesting, is the study of the responses using the so-called 'neutral' response. As suggested in prior comments, an item response modeling of the item responses would be interesting, and you do mention this in your Discussion. You use this item response format as one possible explanation for the reliability and correlations obtained in Studies 2 and 3, relative to the relatively low values in Study 1.

2. Comment. For each study you have a section 'Internal consistency'. Consider 'Reliability' rather than 'Internal consistency', even though you do use coefficient alpha. The focus of the section is on 'reliability' and the specific index. Perhaps expand the discussion to focus on item and total score reliability.

Results. Overall. Results from analyses per study are well described and interpretations are accurate.
I simply wish you had completed more item-level analyses given what you think are critical implications of the item response format (4- or 5-point Likert scale).

Discussion. Overall. There are many findings to be discussed, and you were quite thorough. One topic I found questionable pertained to the samples/populations you studied. Irrespective of whether you are sampling students or nurses, the actual psychometrics for ECS and mimicry should reasonably be invariant. I cannot imagine why factor structure, correlations, etc. would vary across these populations unless the measures are unreliable, which they actually are. The low reliability is your best explanation for the variance across these populations. You did obtain acceptable convergent and discriminant validity evidence, which is somewhat unexpected given the low reliability, particularly in Study 1 with the Mimicry measure.

Writing. Overall. The manuscript is very carefully written.

1. Major Recommendation. My primary concern pertains to the formatting within the presentation of the three studies. The bold headings and subheadings make it very difficult to appreciate the nested content organization. I suggest using a heading and subheading format that clearly differentiates subheads/content.

2. Comment. Often technical terms are inconsistent with conventional nomenclature. Here are a few examples. (a) line 175 'obstructing the fit' (terms such as 'attenuating' or 'diminishing' the fit are more conventional). (b) line 220 'love should be counted as a basic emotion' (love should be considered . . .). (c) line 234 'version a psychological meaning' (version a construct validation).

3. Comment. There are occasional spelling errors, for example, (a) page 12, line 266, spelling error 'suceptiblity'.

Reviewer #3: (No Response)

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Mohsen Khosravi
Reviewer #2: Yes: Paul Yovanoff
Reviewer #3: Yes: Anton Marx

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
| Revision 2 |
Validation of the German Emotional Contagion Scale and development of a mimicry brief version

PONE-D-25-13429R2

Dear Dr. Janelt,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Paweł Larionow, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
| Formally Accepted |
PONE-D-25-13429R2

PLOS ONE

Dear Dr. Janelt,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Paweł Larionow
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.