Peer Review History

Original Submission - August 4, 2021
Decision Letter - Isabelle Durand-Zaleski, Editor
Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

PONE-D-21-25251
Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: a within-trial cost-utility analysis
PLOS ONE

Dear Dr. Lee,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses all the points raised during the review process, with particular attention given to the economic comments of reviewer 1.

Please submit your revised manuscript by Dec 06 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at http://journals.plos.org/plosone/s/latex.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:

“The authors would like to thank all the patients, family physicians, and clinics who took part in Target-D; and the many research assistants who assisted with data collection. The data used to develop the clinical prediction tool were collected as a part of the diamond project which was funded by the National Health and Medical Research Council (NHMRC project ID: 299869, 454463, 566511 and 1002908). We acknowledge the 30 dedicated family physicians, their patients, and clinic staff for making the diamond study possible. We also acknowledge staff and students at the School of Computing and Information Systems at the University of Melbourne for early work that informed the presentation of the e-health platform as well as the focus group participants that provided feedback on early versions of the Target-D materials. Finally, we thank staff at the former Melbourne Networked Society Institute (MNSI) who were funded to build the Target-D website.”

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

“Target-D was funded by a grant from the National Health and Medical Research Council (NHMRC project ID: 1059863). The funding organisation had no role in the design and conduct of the study; collection, management analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper aims to assess the cost-effectiveness of the Target-D intervention against usual care. Mental health and primary care are both very important subjects for public health. Evaluation of e-health is also of prime importance. The work is original, but more methodological precision is needed.

Main concerns:

1) A micro-costing approach was used to estimate the cost of the target-D intervention except for the clinician-guided iCBT program (moderate prognostic group) (page 5, lines 112-113: “The average cost between two Australia clinician-guides iCBT programs was applied to participants in the moderate prognostic group”). Why?

2) Some costs appeared to be research-induced (e.g. p5, lines 105-106 “personnel time to approach individuals in the GP waiting room involved one minute per encounter”). Please clarify.

3) The drop-out rate was very high for both the RUQ data and the AQoL-8D data. Not enough information is given to the reader on how the missing data were treated. Bootstrapped data need to be stratified by treatment arm. The missing-data mechanism needs to be tested, not assumed (page 6, line 140: “assuming data were missing at random”). Tests should be presented and discussed. If data were not MAR, extensive sensitivity analyses (scenarios) on costs and quality-of-life utilities need to be conducted (not only a complete case analysis, which is valid if and only if missing data were MCAR). Please give a reference if this issue was treated in a previous paper.

4) Two GLMs were estimated, one for costs (link=log; family=gamma) and another for utility scores (link=identity; family=Gaussian). In the paper, it is not clear whether these models were re-estimated for each bootstrapped sample or only once. If the ICER is computed from estimated coefficients, how was the ratio converted into a difference for costs? How was the potential correlation between the error terms of the QALY equation and the cost equation taken into account in the analysis?

5) Concerning the QALY equation, was the value at baseline systematically included among covariates? (not clear page 6, lines 147-149: “All GLM models were estimated with and without adjustment for several baselines specified in the study protocol- i.e. baseline PHQ-9 score (not QALY, as requested in guidelines),general practice and prognostic group”)

6) It could be interesting to present the detailed costs provided by microcosting for each level of intervention. Page 8, lines 191-194: be more affirmative in “This was likely due to the high-cost nature of collaborative care delivered to participants in the severe group”.

7) Acceptability curves could be estimated for the 3 prognostic groups.

8) The conclusion (page 19, lines 338-341) is very strong and not really in line with the methodological issues mentioned on page 18, lines 318-319 and 325-327.

Reviewer #2: The authors present an economic evaluation of the Target-D intervention, based on resource utilization information collected during a clinical trial of Target-D versus usual care in Melbourne, Australia. Results are presented both from a health sector perspective and a societal perspective. Authors conclude that Target-D likely has good value for health care decision makers. The manuscript is well written. I only have a few minor recommendations for the authors.

1. line 43: authors state that health sector and societal costs were "comparable" between trial arms at 3 and 12 months. Authors should replace "comparable" with "not significantly different" since authors did not do a specific test for equality of the costs.

2. Authors should provide the number of control and intervention participants in each of the prognostic groups (minimal/mild, moderate, severe), rather than relying on readers to go to the published paper on trial results to get this information. This can likely just be put in the text in lines 177-178.

3. lines 208 (note under Table 1), 217 (note under Table 2), and 235 (note under Table 3): "partcipants" should be "participants"

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Editor #1:

E1.1 – 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

We have sought to comply with the PLOS ONE style templates as best we can.

E1.2 – 2. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at

http://journals.plos.org/plosone/s/latex.

For now, we have opted to submit our manuscript using Word document files. We will be more than willing to provide a submission in the PLOS LaTeX template following the acceptance of our paper.

E1.3 – 3. Thank you for stating the following in the Acknowledgments Section of your manuscript: “The authors would like to thank all the patients, family physicians, and clinics who took part in Target-D ... Finally, we thank staff at the former Melbourne Networked Society Institute (MNSI) who were funded to build the Target-D website.” We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

We have modified the acknowledgements to remove all references to funding information. Please see the amended text below.

The authors would like to thank all the patients, family physicians, and clinics who took part in Target-D; and the many research assistants who assisted with data collection. The data used to develop the clinical prediction tool were collected as a part of the diamond project (NHMRC project ID: 299869, 454463, 566511 and 1002908). We acknowledge the 30 dedicated family physicians, their patients, and clinic staff for making the diamond study possible. We also acknowledge staff and students at the School of Computing and Information Systems at the University of Melbourne for early work that informed the presentation of the e-health platform as well as the focus group participants that provided feedback on early versions of the Target-D materials. Finally, we thank staff at the former Melbourne Networked Society Institute (MNSI) who built the Target-D website.

E1.4 – 4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.

5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

Ethics approval for the clinical trial underlying the submitted manuscript was obtained through the University of Melbourne. We have contacted Ms Hilary Young, Secretary for the Medicine and Dentistry Human Ethics Sub-Committee (HESC) at the University of Melbourne, to provide us with guidance on this matter (Phone: +61 3 8344 8595, Email: hilary.young@unimelb.edu.au). We have attached a copy of the resulting correspondence. To summarise, we have been notified that the plain language statement of the original study did not include a contingency for study participants to consent to any prospective data sharing, particularly given that the collected data are potentially identifying and involve sensitive personal information. As such, we are unable to release the data as part of any minimum dataset. We have amended our Data Availability Statement to read:

Ethical restrictions prevent the sharing of potentially sensitive data provided by study participants over the course of the Target-D clinical trial (Australian New Zealand Clinical Trials Registry ACTRN12616000537459). No contingency was included in the original plain language statement for study participants to provide informed consent to any prospective sharing of their personal data, whether as part of a minimum dataset or another form. For all enquiries regarding the Target-D clinical trial and the underlying dataset, please contact the chief investigator, Prof Jane Gunn (j.gunn@unimelb.edu.au). For general enquiries regarding the ethics approval of the trial, please contact the Office of Research Ethics and Integrity (OREI) at The University of Melbourne (HumanEthics-Enquiries@unimelb.edu.au).

Because we are unable to provide a minimum dataset, we have now included an additional supplementary appendix (S2 Appendix) that contains summary metadata on all in-scope data variables, alongside a copy of the Stata do-file. The following sentence has been added to the end of the ‘Statistical analysis’ section of the Methods in lines 175-176:

A summary list of all variables included in the statistical analysis is provided in S2 Appendix, alongside the Stata do-file used to implement the statistical analysis.

Reviewer #1:

The paper aims to assess the cost-effectiveness of the Target-D intervention against usual care. Mental health and primary care are both very important subjects for public health. Evaluation of e-health is also of prime importance. The work is original, but more methodological precision is needed.

Main concerns:

R1.1 – 1) A micro-costing approach was used to estimate the cost of the target-D intervention except for the clinician-guided iCBT program (moderate prognostic group) (page 5, lines 112-113: “The average cost between two Australia clinician-guides iCBT programs was applied to participants in the moderate prognostic group”). Why?

There is no definitive, gold-standard unit cost for clinician-guided iCBT in Australia. Instead, we have available two alternative unit costs that provide a low-to-high range of possible values. The point value used in the base case analysis was the average of the low and high values. This is why we performed a subsequent sensitivity analysis examining the impact of adopting the highest unit cost ($222 per person). In response to this comment, we have added the following sentence to improve clarity on lines 116-117:

In this instance, the two programs represent a low-to-high range of possible unit cost values for clinician-guided iCBT in Australia. A subsequent sensitivity analysis was done to test the impact of using the highest unit cost, rather than the average.

R1.2 – 2) Some costs appeared to be research-induced (e.g. p5, lines 105-106 “personnel time to approach individuals in the GP waiting room involved one minute per encounter”). Please clarify.

We made sure to include costs that would occur in routine practice and to exclude research-related costs. Resource use involving the research assistants (e.g., approaching individuals in the GP waiting room or making periodic check-in phone calls) would need to be performed by similarly qualified staff if the intervention were implemented as part of routine practice. We have added a sentence to lines 120-122 to clarify this point:

It is anticipated that all costed activities described above that involve research assistants will likely require similarly qualified staff to facilitate the implementation of the intervention as part of routine practice.

R1.3 – 3) The drop-out rate was very high for both the RUQ data and the AQoL-8D data. Not enough information is given to the reader on how the missing data were treated. Bootstrapped data need to be stratified by treatment arm. The missing-data mechanism needs to be tested, not assumed (page 6, line 140: “assuming data were missing at random”). Tests should be presented and discussed. If data were not MAR, extensive sensitivity analyses (scenarios) on costs and quality-of-life utilities need to be conducted (not only a complete case analysis, which is valid if and only if missing data were MCAR). Please give a reference if this issue was treated in a previous paper.

We thank the reviewer for encouraging us to be clear in how we have chosen to address the problem of missing data. In response to this comment, we have now added a new section to the supplementary materials, ‘Supplementary Text S5. Analysis of missing data mechanisms’. In this supplement, we have provided a detailed exploration of missing data patterns and an empirical rationale for why we have concluded that there is sufficient evidence to infer that missing utility/cost data can be considered missing at random (as opposed to missing not at random). Based on these analyses, we identified several baseline sociodemographic variables that were associated with the likelihood of missing utility/cost values. We have consequently included these variables as adjustment covariates in the multiple imputation analysis, which was also updated. All results based on these methodological refinements have been amended accordingly.

We have modified text on lines 144-150 to read:

Multiple imputation methods were implemented in Stata to account for missing data that were deemed missing at random following several exploratory analyses presented in Supplementary Text 5 in S1 Appendix. Missing cost and outcomes data were imputed 100 times using multiple imputation by chained equations (MICE), with predictive mean matching and adjustment for baseline covariates associated with data missingness – i.e., trial arm, clinic, age, gender, highest level of education and having visited a psychologist/counsellor in the past 12 months.
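For readers unfamiliar with predictive mean matching, the core idea of a single PMM draw can be sketched in a few lines. The sketch below is illustrative only: it is not the Stata code used in the analysis, the data are simulated, and a plain OLS predicted mean with k nearest observed donors stands in for the full chained-equations machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def pmm_impute(y, X, k=5):
    """One predictive-mean-matching draw for the missing entries of y.

    Fits OLS of y on X using complete cases, predicts for all rows, then
    replaces each missing y with the observed value of a donor drawn at
    random from the k complete cases with the closest predicted means.
    """
    obs = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])            # add intercept
    beta, *_ = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)
    yhat = Xd @ beta
    y_imp = y.copy()
    for i in np.flatnonzero(~obs):
        # k observed cases whose predicted means are closest to this row's
        donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:k]
        y_imp[i] = y[obs][rng.choice(donors)]
    return y_imp

# toy example: right-skewed costs with ~30% missingness, one covariate
n = 200
x = rng.normal(size=n)
cost = np.exp(5 + 0.5 * x + rng.normal(scale=0.3, size=n))
cost_mis = cost.copy()
cost_mis[rng.random(n) < 0.3] = np.nan

completed = pmm_impute(cost_mis, x.reshape(-1, 1))
```

Because every imputed value is an actually observed cost, PMM preserves the skewed, non-negative cost distribution; MICE then cycles such draws across all incomplete variables, conditioning on the baseline covariates listed above.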

R1.4 – 4) Two GLMs were estimated, one for costs (link=log; family=gamma) and another for utility scores (link=identity; family=Gaussian). In the paper, it is not clear whether these models were re-estimated for each bootstrapped sample or only once. If the ICER is computed from estimated coefficients, how was the ratio converted into a difference for costs? How was the potential correlation between the error terms of the QALY equation and the cost equation taken into account in the analysis?

The reviewer is justified in their call for further descriptive detail on the methods used to implement bootstrapping. We confirm that the ICER was computed using estimated GLM coefficients of the difference in mean costs and the difference in mean QALYs. In the original analysis, we adopted a resampling method that encompassed ‘bootstrapping nested in multiple imputation’. Since then, we have encountered recommendations by Brand et al., 2019 (doi: 10.1002/sim.7956) and Prof Andy Briggs, who collectively advocate ‘single imputation nested in bootstrapping’. Based on these recommendations, we have revised our analytic approach and modified our description of the methods/results accordingly.

The text on lines 161-173 has now been amended to read:

Incremental cost-effectiveness ratios (ICERs) were calculated as the difference in mean costs between the intervention and control arms divided by the difference in mean QALYs. ICERs were calculated by study perspective (health sector and societal), follow-up period (3 and 12 months) and, for the subgroup analysis, by prognostic group (total, minimal/mild, moderate and severe). A resampling method comprising single imputation nested in bootstrapping [17] was used to quantify the impact of input parameter uncertainty around the resulting differences in mean costs/QALYs and the mean ICERs. This method works by generating a single call to the MICE procedure to produce a complete dataset with which to analyse GLMs of costs/QALYs within each bootstrap resample. Following the generation of 1,000 bootstrap resamples, the bootstrap percentile method was used to estimate 95% confidence intervals (95% CI) around the differences in mean costs/QALYs and the mean ICERs [18]. The intervention was considered cost-effective if the resulting ICER was less than the Australian willingness-to-pay threshold of A$50,000 per QALY [19-21].

Furthermore, we have now included an additional supplementary appendix (S2 Appendix) that contains both the Stata do-file and a summary list of data variables. The following sentence has been added to the end of the ‘Statistical analysis’ section of the Methods in lines 175-176:

A summary list of all variables included in the statistical analysis is provided in S2 Appendix, alongside the Stata do-file used to implement the statistical analysis.

R1.5 – 5) Concerning the QALY equation, was the value at baseline systematically included among covariates? (not clear page 6, lines 147-149: “All GLM models were estimated with and without adjustment for several baselines specified in the study protocol- i.e. baseline PHQ-9 score (not QALY, as requested in guidelines),general practice and prognostic group”)

The reviewer has made an important critique of our methods. Our initial analysis did not adjust for baseline utility scores derived using the AQoL-8D measure as we were narrowly focussed on reproducing the primary outcomes analysis, which made adjustments for the baseline PHQ-9 score. Moreover, we had a priori postulated that baseline PHQ-9 scores would be (in theory) highly correlated to baseline AQoL-8D scores. Following the reviewer’s comment, we have made a decision to re-analyse QALY outcomes after making an additional adjustment for baseline AQoL-8D scores.

In the methods, we have amended lines 155-159:

All GLMs were estimated with and without adjustment for several baseline covariates specified in the study protocol ‒ i.e., baseline PHQ-9 score, general practice and prognostic group [7]. Baseline AQoL-8D scores were also included as an additional baseline covariate for GLMs involving QALY outcomes.

The results that are reported in Table 3 now reflect these changes. Additionally, the first footnote to the results presented in Table 3 on line 253 has also been amended to reflect the addition of the baseline AQoL-8D score as a baseline covariate.

R1.6 – 6) It could be interesting to present details on cost provided by microcosting for each level of intervention. Page 8, lines 191-194, be more affirmative “This was likely due to the high-cost nature of collaborative care delivered to participants in the severe group”.

Detailed costs from the microcosting approach are presented for each intervention level in Supplementary Table S3. We have amended the sentence on lines 212-214 to read:

This was likely due to the high-cost nature of collaborative care delivered to participants in the severe group (see Supplementary Table 3 in S1 Appendix for detailed costs).

R1.7 – 7) Acceptability curves could be estimated for the 3 prognostic groups.

We appreciate the reviewer’s suggestion here. However, we have opted not to present in-depth results for the three prognostic groups (i.e., cost-effectiveness planes or cost-effectiveness acceptability curves), given that these subgroup analyses are underpowered to detect statistically significant differences, particularly when compared to the aggregate findings. We have amended the text in lines 162-165 to emphasise that the analysis by prognostic group constitutes a subgroup analysis:

ICERs were calculated by study perspective (health sector and societal), follow-up period (3 and 12 months) and, for the subgroup analysis, by prognostic group (total, minimal/mild, moderate and severe).

R1.8 – 8) The conclusion (page 19, line 338-341) was very strong, not really in line with methodological issues mentioned page 18, lines 318-319 and 325-327

We have amended the relevant texts on lines 359-362 and lines 365-366 to soften conclusions drawn based on our study findings. These texts now read as follows:

The results of this study suggest that stepped care may be dominant when compared to usual care. Additionally, study findings appear to support the existing literature by suggesting that stepped care for depression can deliver improved clinical outcomes without increasing costs.

The Target-D intervention is likely to represent good value for money and provides indicative support for further development of digitally supported mental health care.

Reviewer #2:

The authors present an economic evaluation of the Target-D intervention, based on resource utilization information collected during a clinical trial of Target-D versus usual care in Melbourne, Australia. Results are presented both from a health sector perspective and a societal perspective. Authors conclude that Target-D likely has good value for health care decision makers. The manuscript is well written. I only have a few minor recommendations for the authors.

R2.1 – 1. line 43: authors state that health sector and societal costs were "comparable" between trial arms at 3 and 12 months. Authors should replace "comparable" with "not significantly different" since authors did not do a specific test for equality of the costs.

We thank the reviewer for this suggestion and have amended the Abstract text accordingly (see line 43).

R2.2 – 2. Authors should provide the number of control and intervention participants in each of the prognostic groups (minimal/mild, moderate, severe), rather than relying on readers to go to the published paper on trial results to get this information. This can likely just be put in the text in lines 177-178.

We have added this information to lines 194-198 at the beginning of the Results section, as requested by the reviewer.

R2.3 – 3. lines 208 (note under Table 1), 217 (note under Table 2), and 235 (note under Table 3): "partcipants" should be "participants"

We thank the reviewer for spotting this mistake. The spelling of this word has now been corrected in all relevant locations.

Attachments
Attachment
Submitted filename: 00b_Target_D_Response_to_Reviewers.pdf
Decision Letter - Isabelle Durand-Zaleski, Editor

PONE-D-21-25251R1Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: a within-trial cost-utility analysisPLOS ONE

Dear Dr. Lee,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the 2 minor points raised during the review process:

1) As the effectiveness difference between arms is very small (non-significant), it should be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

2) In the QALY equation, the utility score at inclusion should be included as a covariate (not the baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, pages 24-25]

Please submit your revised manuscript by Apr 21 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

You have addressed most of the reviewers' comments, I recommend that you take into account the 2 suggestions of reviewer 1 in your final version.

1) As the effectiveness difference between arms is very small (non-significant), it should be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

2) In the QALY equation, the utility score at inclusion should be included as a covariate (not the baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, pages 24-25]


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you to the authors for addressing all my comments on the previous version of the paper.

I have two (marginal) comments remaining:

1) As the effectiveness difference between arms is very small (non-significant), it should be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

2) In the QALY equation, the utility score at inclusion should be included as a covariate (not the baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, pages 24-25]

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 2

Reviewer #1:

Thank you to the authors for addressing all my comments on the previous version of the paper. I have two (marginal) comments remaining:

R1.1 – 1) As the effectiveness difference between arms is very small (non-significant), it should be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

We thank the reviewer for suggesting Fieller's theorem as a potentially more accurate method for estimating the 95% confidence interval for the mean ICER when the expected difference in effectiveness outcomes between trial arms approaches zero (as occurs in our study). We have read the article by Siani et al. (2003) and note that: (1) Fieller's theorem has the potential to produce more accurate 95% confidence interval bounds with excellent coverage of the 95% confidence region, based largely on simulations analysed in the quoted study; and (2) 95% CIs produced using the non-parametric bootstrap percentile method can lead to confidence bounds with marginally higher coverage than the target 95% confidence region (i.e., 97% coverage of the mean ICER). A search of the literature found no studies that have replicated the findings of Siani et al. (2003), making it difficult to confirm the veracity of the phenomenon they identified. Even so, we concede that the potential for the bias identified by Siani et al. (2003) remains.

If the aforementioned bias were present, we contend that the resulting imprecision in the estimation of the 95% confidence bounds would not have a material impact on the interpretation of our study findings. This is due largely to the bootstrap resampling results, which indicated a high degree of uncertainty around the expected value of the mean ICERs estimated across all base case and subgroup analyses (i.e., bootstrap resamples for mean ICERs consistently covered all four quadrants of the cost-effectiveness plane). As such, the 95% confidence bounds produced by the bootstrap percentile method did not approach the nominated WTP threshold of A$50,000 per QALY. This was especially true when the lower and upper 95% confidence bounds spanned the South-East ('dominant') and North-West ('dominated') quadrants of the cost-effectiveness plane. In summary, any (comparatively small) imprecision in the 95% confidence bounds presented in Table 4 is expected to be inconsequential when compared to the extreme range between the lower and upper confidence bounds.

In response to this comment, we have added a note to Table 4 stating that:

The mean difference in QALYs between trial arms was observed to approach zero across all base case and subgroup analyses. This can potentially lead to the lower and upper bounds of a 95% confidence interval, derived using the bootstrap percentile method, encompassing a marginally higher coverage than the target 95% confidence region (e.g., 97% coverage of the mean ICER) [22]. Even so, any resulting imprecision in the estimation of the 95% confidence bounds will likely be inconsequential to the interpretation of study findings given the wide range of ICER values that were consistently observed between the lower and upper confidence bounds (e.g., confidence bounds ranging between 'dominant' and 'dominated'). This reflects the high degree of uncertainty observed across mean ICER values for all base case and subgroup analyses; with bootstrap resamples consistently spanning all four quadrants of the cost-effectiveness plane.
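As a purely illustrative aside (not part of the study's analysis), Fieller's theorem constructs the confidence interval for a ratio of means by solving a quadratic in the ratio; when the effectiveness difference is not significantly different from zero, no finite interval exists, which is precisely the situation described in the exchange above. A minimal sketch, with hypothetical inputs:

```python
import math

def fieller_ci(d_cost, d_eff, var_cost, var_eff, cov, z=1.96):
    """95% CI for the ratio d_cost/d_eff (the ICER) via Fieller's theorem.
    Solves (d_cost - R*d_eff)^2 = z^2 * (var_cost - 2*R*cov + R^2*var_eff)
    for R. Returns None when the effectiveness difference is not
    significantly different from zero, so no finite interval exists."""
    a = d_eff**2 - z**2 * var_eff
    b = -2.0 * (d_cost * d_eff - z**2 * cov)
    c = d_cost**2 - z**2 * var_cost
    if a <= 0:
        return None                    # denominator CI includes zero
    disc = b**2 - 4.0 * a * c
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((-b - root) / (2.0 * a), (-b + root) / (2.0 * a))

# Hypothetical example: a clear QALY gain yields a finite interval...
finite = fieller_ci(d_cost=100.0, d_eff=0.05, var_cost=400.0,
                    var_eff=0.0001, cov=0.0)
# ...whereas a near-zero QALY difference (as in this study) does not.
unbounded = fieller_ci(d_cost=100.0, d_eff=0.001, var_cost=400.0,
                       var_eff=0.0001, cov=0.0)
```

The `None` branch illustrates the limiting case at issue: with a near-zero QALY difference, neither Fieller's method nor the bootstrap percentile method yields an informative bounded interval for the ICER.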

R1.2 – 2) In the QALY equation, the utility score at inclusion should be included as covariate (not baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, page 24-25]

We apologise to the reviewer for using imprecise terminology that has, in turn, led to this instance of semantic confusion. When we used the term 'baseline AQoL-8D score', our intended meaning was 'baseline AQoL-8D utility weight' – i.e., utility weights estimated based on scoring the AQoL-8D multi-attribute utility instrument. In response to this comment, we have changed all instances of 'AQoL-8D score(s)' to 'AQoL-8D utility weight(s)'.

Attachments
Attachment
Submitted filename: 00b_Target_D_Response_to_Reviewers.pdf
Decision Letter - Isabelle Durand-Zaleski, Editor

Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: a within-trial cost-utility analysis

PONE-D-21-25251R2

Dear Dr. Lee,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Formally Accepted
Acceptance Letter - Isabelle Durand-Zaleski, Editor

PONE-D-21-25251R2

Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: a within-trial cost-utility analysis

Dear Dr. Lee:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.