Peer Review History

Original Submission: January 21, 2021
Decision Letter - Ali Montazeri, Editor

PONE-D-21-02214

Re-evaluating randomized clinical trials of psychological interventions:

Impact of response shift on the interpretation of trial results

PLOS ONE

Dear Dr. Verdam,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 27 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Ali Montazeri

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Reviewer #1: Re-evaluating randomized clinical trials of psychological interventions: Impact of response shift on the interpretation of trial results.

The study re-analyses data of published randomized controlled trials (RCTs) investigating the effectiveness of psychological interventions targeting different health aspects, to assess the occurrence of response shift, the impact of response shift on interpretation of treatment effectiveness, and the predictive role of clinical and background variables for detected response shift.

- My major question is about the process of including RCTs for evaluation in this study. How was this done? What were the inclusion and exclusion criteria for selecting the studies? Did the authors search systematically through the databases to find appropriate ones? Please explain the process clearly.

- The methodology of the study was written without a specific structure and not in a coherent, understandable way.

- I found that the study included Internet-based RCTs; please mention "Internet-based" in the title and aims, and as an inclusion criterion.

- I suggest the authors describe the data collection approaches: were they Internet-based or not? Please explain in detail.

Reviewer #2: The manuscript investigates a pivotal issue in the field, which needs further attention. The presence of response shift is studied in three different psychological interventions. The findings concerning both the results and their interpretation are discussed. The manuscript is well written and well organized. Various aspects of concern that I wondered about have been addressed. The methodology is presented clearly, and the study findings are discussed sufficiently. The implications are explained straightforwardly. Future directions are provided.

However, the first issue worth noting is that although the sample sizes of the individual RCTs are almost equally small (especially for the treatment groups: 86, 45, and 54), the specifications of the measurement models differ to some extent. Therefore, the authors are encouraged to include a description of the sample size needed for each model to achieve satisfactory statistical power.

In this respect, Christopher Westland has provided a functional way to determine an appropriate sample size based on the model specification (https://doi.org/10.1016/j.elerap.2010.07.003). Moreover, Wolf et al. (2013, PMC4334479) have shown how the strength of the factor loadings, among other factors, can affect the sample size required for satisfactory power. The authors may want to discuss (or explain in the Methods section) whether the individual sample sizes and their corresponding model specifications might affect the study results, since, for example, 1) the model specifications differ, i.e. in the number of observed variables, and 2) the measurement models differed in terms of sample-size/power-related factors, such as the strength of the factor loadings.

Secondly, as reflected in the model-fit results, the differences in the modeling might be another source of concern that needs to be addressed. That is, the model for CBT for insomnia included ordinal indicators, while the other two models used aggregated scores. This could affect the evidence concerning the first study aim (i.e., the presence of response shift), since aggregating ordinal scores blurs the measurement errors and hence the underlying assumptions of SEM. In addition, it is not clear whether the applied methodology representing the "shift" differs between the different modeling strategies (i.e., ordinal versus aggregated indicators). Importantly, the CBT-for-insomnia model used a unidimensional measurement model, while the other two models are bifactor models in nature. That is, the measured indicators in which the response shift is assumed to occur indicate the subscales, which together compose a higher-order construct. Thus, the authors are encouraged to base the measurement models on ordinal indicators and to repeat the analysis using the subscales separately to check for any noticeable findings and to keep the applied methodology consistent across all models. Otherwise, more description is needed of how the applied methodology can represent response shift in interval-continuous indicators that were not directly asked of the participants (for example, Relation with God/higher order, and Depressed affect).

Thirdly, the applied scales primarily used Likert-type scaling. The authors are encouraged to discuss how the scaling may affect the results, and to address other types of scaling, such as semantic differential scaling.

Best of Luck,

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

PLOS ONE Journal Requirements:

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Response: We found and corrected an error in the file naming of Figure 1 (it was labeled as supporting information but should be included in the main manuscript). We did not find further deviations from PLOS ONE’s style requirements and trust that this was all that was referred to here.

Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.

Response: We have now integrated the ethics statement into the Methods section.

________________________________________

Review Comments to the Author

Reviewer #1: Re-evaluating randomized clinical trials of psychological interventions: Impact of response shift on the interpretation of trial results.

The study re-analyses data of published randomized controlled trials (RCTs) investigating the effectiveness of psychological interventions targeting different health aspects, to assess the occurrence of response shift, the impact of response shift on interpretation of treatment effectiveness, and the predictive role of clinical and background variables for detected response shift.

- My major question is about the process of including RCTs for evaluation in this study. How was this done? What were the inclusion and exclusion criteria for selecting the studies? Did the authors search systematically through the databases to find appropriate ones? Please explain the process clearly.

Response: The studies selected for the analyses in the current paper are a convenience sample, as previously stated on page 7 (first line of the Introduction section). The studies were selected based on the following criteria: 1) the RCT results indicate effectiveness of the psychological intervention, and thus 2) response shift may have occurred; 3) the total number of included patients was > 100; and 4) clinical and background information was available that could be included as explanatory variables. We have now included this information in the Method section on page 7 (first paragraph).

- The methodology of the study was written without a specific structure and not in a coherent, understandable way.

Response: The Method section of the paper indeed deviates from that of a single-sample study. Since we need to describe the three datasets (based on data from four RCTs) that were used for the analyses, we adopted a similar structure for each dataset, including descriptions of “Patients”, “Intervention”, “Primary Outcome”, and “Predictors”. The description of the datasets is followed by a paragraph on “Statistical Analyses” that describes the methodology used to investigate the three objectives of the study, applied to all three datasets. We believe that this structure is clear and coherent.

- I found that the study included Internet-based RCTs; please mention "Internet-based" in the title and aims, and as an inclusion criterion.

- I suggest the authors describe the data collection approaches: were they Internet-based or not? Please explain in detail.

Response: Our aim is to investigate the occurrence of response shift after a psychological intervention. Although three of the four included studies are indeed internet-based, this was not a specific selection criterion. We now include the selection criteria of the included studies in the Method section on page 7 (first paragraph) to avoid confusion. In the Method section we now specify for each study whether assessments were performed online or via pen and paper. Since the way the interventions are delivered (face-to-face or via the internet) is not relevant to our purpose, we would rather not include the term “internet” in the title or aims, as it may confuse and distract.

Reviewer #2: The manuscript investigates a pivotal issue in the field, which needs further attention. The presence of response shift is studied in three different psychological interventions. The findings concerning both the results and their interpretation are discussed. The manuscript is well written and well organized. Various aspects of concern that I wondered about have been addressed. The methodology is presented clearly, and the study findings are discussed sufficiently. The implications are explained straightforwardly. Future directions are provided.

Response: Thank you for your positive remarks regarding our manuscript and your suggestions for improvement.

However, the first issue worth noting is that although the sample sizes of the individual RCTs are almost equally small (especially for the treatment groups: 86, 45, and 54), the specifications of the measurement models differ to some extent. Therefore, the authors are encouraged to include a description of the sample size needed for each model to achieve satisfactory statistical power.

In this respect, Christopher Westland has provided a functional way to determine an appropriate sample size based on the model specification (https://doi.org/10.1016/j.elerap.2010.07.003). Moreover, Wolf et al. (2013, PMC4334479) have shown how the strength of the factor loadings, among other factors, can affect the sample size required for satisfactory power. The authors may want to discuss (or explain in the Methods section) whether the individual sample sizes and their corresponding model specifications might affect the study results, since, for example, 1) the model specifications differ, i.e. in the number of observed variables, and 2) the measurement models differed in terms of sample-size/power-related factors, such as the strength of the factor loadings.

Response: The reviewer raises an important issue with regard to the differences in sample size and model specification and their effects on power. We agree with the reviewer that it is important to evaluate both the power to detect a meaningful level of model misspecification (for the specification of the measurement model) and the power to detect response shift effects. However, power calculations in the context of SEM are complicated. To address this issue, we now include RMSEA-based power calculations for the specified measurement models (page 15, final paragraph) and chi-square-based power calculations for the tests of response shift (page 16, first paragraph). We include these calculations in an appendix and discuss the issue of power (calculations) in more detail in the Discussion section (page 32).
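The authors' actual power calculations are in their appendix and are not reproduced here. As a rough illustration of the general RMSEA-based approach they refer to (the test of close fit of MacCallum, Browne, and Sugawara, 1996), power can be computed from two noncentral chi-square distributions. The function below is a generic sketch under that assumption, not the authors' code, and the example degrees of freedom and sample size are hypothetical.

```python
# Sketch of RMSEA-based power for the test of close fit
# (MacCallum, Browne, & Sugawara, 1996). Illustrative only.
from scipy.stats import ncx2


def rmsea_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Power to reject 'close fit' (population RMSEA = rmsea0)
    when the true population RMSEA is rmsea_a, given sample size
    n and model degrees of freedom df."""
    lam0 = (n - 1) * df * rmsea0 ** 2   # noncentrality under H0 (close fit)
    lam_a = (n - 1) * df * rmsea_a ** 2  # noncentrality under the alternative
    crit = ncx2.ppf(1 - alpha, df, lam0)  # critical value under H0
    return 1 - ncx2.cdf(crit, df, lam_a)  # P(reject H0 | alternative true)


# Hypothetical example: a model with 50 degrees of freedom and N = 100
print(rmsea_power(100, 50))
```

As the sketch makes explicit, power depends jointly on N, the model's degrees of freedom, and the assumed population RMSEA values, which is why models with different specifications can have quite different power at the same sample size.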

Secondly, as reflected in the model-fit results, the differences in the modeling might be another source of concern that needs to be addressed. That is, the model for CBT for insomnia included ordinal indicators, while the other two models used aggregated scores. This could affect the evidence concerning the first study aim (i.e., the presence of response shift), since aggregating ordinal scores blurs the measurement errors and hence the underlying assumptions of SEM. In addition, it is not clear whether the applied methodology representing the "shift" differs between the different modeling strategies (i.e., ordinal versus aggregated indicators). Importantly, the CBT-for-insomnia model used a unidimensional measurement model, while the other two models are bifactor models in nature. That is, the measured indicators in which the response shift is assumed to occur indicate the subscales, which together compose a higher-order construct. Thus, the authors are encouraged to base the measurement models on ordinal indicators and to repeat the analysis using the subscales separately to check for any noticeable findings and to keep the applied methodology consistent across all models. Otherwise, more description is needed of how the applied methodology can represent response shift in interval-continuous indicators that were not directly asked of the participants (for example, Relation with God/higher order, and Depressed affect).

Thirdly, the applied scales primarily used Likert-type scaling. The authors are encouraged to discuss how the scaling may affect the results, and to address other types of scaling, such as semantic differential scaling.

Response: We agree with the reviewer that the differences in the modelling of the different datasets may have impacted the results. It is true that when investigating response shift at the subscale level, possible response shift at the item level may be obfuscated. For example, response shifts at the item level may cancel each other out at the subscale level. There are two reasons for using the modelling strategies as described. First, using item-level analyses for the personal meaning and depression datasets would have led to a substantial increase in the complexity of the models that, considering the limited sample sizes, would have caused convergence problems. Second, and more importantly, we wanted the level of analysis to be consistent with the level of analysis in the published RCTs and/or the commonly used interpretation of the measurement scales. That is, the insomnia questionnaire is usually interpreted as an overall score in relation to the individual item scores (consistent with the one-factor model of the item scores), whereas the depression scale and personal meaning questionnaire are usually interpreted as an overall score in relation to the subscale scores (consistent with the one-factor model of the subscale scores). To better address the limitations of the differences in the technical specification of the models, and our reasons for them, we have added a paragraph to the Discussion section (pages 32-33).

Decision Letter - Ali Montazeri, Editor

Re-evaluating randomized clinical trials of psychological interventions:

Impact of response shift on the interpretation of trial results

PONE-D-21-02214R1

Dear Dr. Verdam,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Ali Montazeri

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: I would like to thank the authors for being responsive to my comments. I have found this study interesting and informative to the field. The current version of the manuscript merits publication if the manuscript is found to adhere to the Journal’s style.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Formally Accepted
Acceptance Letter - Ali Montazeri, Editor

PONE-D-21-02214R1

Re-evaluating randomized clinical trials of psychological interventions: Impact of response shift on the interpretation of trial results

Dear Dr. Verdam:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Ali Montazeri

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.