Abstract
Background
Randomized clinical trials (RCTs) are the gold standard for comparing health care interventions, but they can be limited by early termination, feasibility issues, and prolonged time to trial reporting. Adaptive clinical trials (ACTs), which are defined by pre-planned modifications and analyses that occur after patient recruitment has begun, are gaining popularity because they can streamline trial design and shorten time to reporting. As adaptive methodologies continue to be adopted by researchers, it will be critical to develop a risk of bias tool that evaluates the unique methodological features of ACTs so that their quality can be improved and standardized. In our proposed methodological review, we will develop a list of risk of bias items and concepts from which a risk of bias tool specific to ACTs can be developed.
Methods and analysis
We will perform a systematic database search to capture studies that have proposed or reviewed items pertaining to methodological risk, bias, and/or quality in ACTs. We will comprehensively search citation databases, including Ovid MEDLINE, EMBASE, CENTRAL, the Cochrane Library, and Web of Science, in addition to multiple grey literature sources, to capture published and unpublished literature evaluating the methodological quality of ACTs. We will also search methodological registries for any risk of bias tools for ACTs. All screening and review stages will be performed in duplicate, with a third senior author serving as arbitrator for any discrepancies. For all studies of methodological quality and risk of bias, we will extract all pertinent bias items, concepts, and/or tools. We will combine conceptually similar items in a descriptive manner and classify them as referring to bias or to other aspects of methodological quality, such as reporting. These pertinent risk of bias items will be used to generate a candidate tool that will undergo further refinement, testing, and validation in future development stages.
Ethics and dissemination
This review does not require ethics approval as human subjects are not involved. As mentioned previously, this study is the first step in developing a tool to evaluate the risk of bias and methodological quality of ACTs. The findings of this review will inform a Delphi study and the development of a risk of bias tool for ACTs. We plan on publishing this review in a peer-reviewed journal and to present these findings at international scientific conferences.
Citation: Staibano P, McKechnie T, Thabane A, Olteanu D, Nanji K, Zhang H, et al. (2024) Methodological review to develop a list of bias items for adaptive clinical trials: Protocol and rationale. PLoS ONE 19(12): e0303315. https://doi.org/10.1371/journal.pone.0303315
Editor: Sathish Muthu, Orthopaedic Research Group, INDIA
Received: April 29, 2024; Accepted: September 11, 2024; Published: December 12, 2024
Copyright: © 2024 Staibano et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: No datasets were generated or analysed during the current study. All relevant data from this study will be made available upon study completion.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Evidence-based medicine has revolutionized the development of clinical practice guidelines and decision making in healthcare [1]. Randomized controlled trials (RCTs) are the gold standard for comparing the effectiveness and safety of novel healthcare interventions [2]. Conventional RCTs, however, can be burdened by high costs, early termination due to feasibility issues, and an overly rigid design that does not permit adjustments for unforeseen challenges [3]. These issues are amplified in surgical trials, and as such, the annual number of published surgical trials remains stagnant [4, 5]. In response to these challenges, researchers have begun using adaptive trial designs, which allow for dynamic protocol changes after patient recruitment has begun. Adaptive clinical trials (ACTs) use at least one pre-planned interim analysis to modify the protocol of an ongoing trial while maintaining the integrity and validity of the data collected [6]. Trial adaptations performed following an interim analysis include sample size re-calculation, modifying the number of treatment arms, amending allocation ratios, and/or terminating a trial early for success or lack of efficacy. Adaptive designs can improve the conduct of a trial by optimizing patient recruitment, combining clinical trial stages, minimizing sample size, and accelerating time to trial analysis and reporting [6]. For instance, the TAILoR trial of telmisartan in HIV employed an interim analysis at half of maximal patient recruitment and dropped the least effective dosage group based on a pre-specified efficacy threshold [7]. Moreover, adaptive designs streamlined the clinical trial process during COVID-19 by optimizing the number of therapies evaluated and minimizing the number of patients enrolled in each trial [8]. Adopting ACTs when prolonged RCTs are impractical may also reduce the funding required, thereby overcoming barriers to conducting trials in developing nations [9].
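The interim-analysis logic described above can be pictured with a short simulation. The following Python sketch is purely illustrative: the two-stage design, normally distributed outcomes, arm effects, and `drop_threshold` are all our own invented assumptions, and the sketch deliberately omits the error-rate control and formal stopping boundaries a real adaptive design would require.

```python
import random
import statistics

random.seed(42)

def simulate_adaptive_trial(arm_effects, n_per_stage=50, drop_threshold=0.1):
    """Hypothetical two-stage multi-arm trial: at a single pre-planned
    interim analysis, arms whose estimated mean effect falls below
    drop_threshold are dropped; only surviving arms recruit in stage 2."""
    # Stage 1: recruit n_per_stage patients to every arm.
    data = {arm: [random.gauss(mu, 1.0) for _ in range(n_per_stage)]
            for arm, mu in arm_effects.items()}

    # Pre-planned interim analysis: estimate each arm's mean effect.
    interim = {arm: statistics.mean(obs) for arm, obs in data.items()}
    surviving = [arm for arm, est in interim.items() if est >= drop_threshold]

    # Stage 2: continue recruitment only in the surviving arms,
    # so the dropped arms stop at their stage-1 sample size.
    for arm in surviving:
        data[arm] += [random.gauss(arm_effects[arm], 1.0)
                      for _ in range(n_per_stage)]

    sizes = {arm: len(obs) for arm, obs in data.items()}
    return interim, surviving, sizes

# Illustrative arm effects (standardized units); names are placeholders.
interim, surviving, sizes = simulate_adaptive_trial(
    {"placebo_like": 0.0, "low_dose": 0.2, "high_dose": 0.5})
```

Dropping an arm mid-trial in this way is exactly the kind of data-dependent decision that a risk of bias tool for ACTs would need to interrogate, since the decision rule must be pre-specified to protect the validity of the final analysis.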
Stakeholders, however, report that adaptive trial designs remain nebulous, with practical barriers including high potential for bias, ethical concerns, and a lack of knowledge dissemination amongst trialists [10]. Other challenges include the need for a robust network of researchers and biostatisticians to ensure that the often complex trial protocol is well planned and has undergone rigorous statistical simulation prior to beginning patient recruitment [11]. In addition, the statistical software required for adaptive methods is expensive and of limited accessibility. Clinicians, researchers, and funding agencies are not well versed in adaptive design terminology and practices, nor is there proper standardization in adaptive trial reporting [12].
In 2020, CONSORT published an extension for adaptive trials to guide ACT reporting [13]. These guidelines address ACT-specific methodological components such as pre-planned interim analyses and descriptions of sample size estimation (and re-estimation) [13]. In conjunction with the CONSORT reporting guidelines, a validated risk of bias tool, developed in a similar manner to the Cochrane risk-of-bias 2.0 tool [14], may improve the design of ACTs and the quality of future meta-analyses combining ACTs. Risk of bias tools are designed for specific study designs (e.g., RCTs) and help to promote methodological transparency and reproducibility while minimizing bias, so that results can be accurately interpreted and soundly applied to patient care. For conventional RCTs, several tools and checklists exist to guide reporting or evaluate quality and risk of bias (Table 1) [14–27]. There is no existing risk of bias tool to evaluate the methodological limitations of ACTs, which is of particular importance given the potential for ACTs to be affected by bias if not soundly designed [28]. It is for this reason that we have decided to create a novel risk of bias tool to improve the quality of future ACTs and meta-analyses of ACTs.
Table 1 is adapted from Lunny et al. (2021) [31].
Our proposed methodological review has two main objectives: (1) to identify and describe any current risk of bias items, tools, or checklists specific for ACTs and (2) to compile a list of risk of bias items and concepts that can be used to develop a risk of bias tool for ACTs. We will develop our risk of bias tool for ACTs in accordance with the framework described by Whiting et al. (2017) [29].
Materials and methods
Study design
We present a protocol describing the rationale for performing a methodological review of ACTs and generating a list of risk of bias items and concepts related to ACTs. We will follow the methodological framework proposed by Whiting et al. (2017) and Sanderson et al. (2007) [29, 30]. This protocol was written with guidance from a methodology review protocol published by Lunny and colleagues, who set out to create a novel risk of bias tool for network meta-analyses [31]. As described by Lunny and colleagues, subsequent steps in creating a risk of bias tool will include (1) a Delphi survey and panel to select, refine, and compile bias items into a single candidate tool; (2) a pilot test to further refine the proposed tool; and (3) a knowledge translation strategy to disseminate the final risk of bias tool [31]. With regard to the Delphi study, we plan to first distribute a knowledge survey to methodologists and content experts to gather their opinions on an ACT risk of bias tool and how it should be utilized and disseminated. Next, a pre-selected steering panel will generate a candidate list of risk of bias items that will be distributed in multiple rounds to a Delphi panel, which will rate the utility of including each item and/or concept in the candidate tool. Pilot testing will include the evaluation of tool usability, efficiency, and comprehension among content experts in adaptive methodologies and trial design. Our final knowledge translation strategy will include publication and presentation of the final tool, housing the tool on an accessible website, and providing training sessions and webinars to future tool users [29]. These steps will be addressed in future studies as we progress through this framework in developing the proposed ACT risk of bias tool. We prepared this methodological review protocol in accordance with the PRISMA-P checklist [32].
Eligibility
This methodological review will include two types of studies (Table 2). Study type 1 comprises studies that describe items and/or concepts related to bias, reporting, or methodological quality of ACTs. We will retain all items related to methodological bias and/or reporting, as they may be translatable into a risk of bias tool. Study type 2 comprises studies that assess the methodological quality, or risk of bias, of ACTs using criteria that focus on methodological features specific to ACTs. Both study types will be analyzed with the goal of collating bias items and/or concepts. We will also review and gather related items/concepts from any published risk of bias tools or reporting quality tools used for conventional RCTs (Table 1). We will include articles of any publication status written in any language. Because we are focusing on methodological studies of ACTs, we will not evaluate published ACTs, nor will we select studies based on disease type, clinical population, or tested intervention. In cases where the review authors are not fluent in the language of a study, we will use Google Translate (Mountain View, CA, USA).
Search strategy
All databases to be used in this review were selected with guidance from Lunny et al. (2021) [31]. We will search all databases with no language or publication type limits. We will search the following databases: MEDLINE (Ovid), CINAHL, EMBASE (Ovid), the Cochrane Library, the Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science, BIOSIS, the Derwent Innovations Database, and KCI. We will also search clinical trials registries, including ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP). We will search the following grey literature databases and resources: the EQUATOR network, dissertation abstracts, websites of evidence synthesis organizations (e.g., Campbell Collaboration, Cochrane Multiple Treatments Group, CADTH, NICE-DSU, Health Technology Assessment International (HTAi), Pharmaceutical Benefits Advisory Committee, Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, European Network for Health Technology Assessment, Guidelines International Network, ISPOR, International Network of Agencies for Health Technology Assessment, and JBI), and methods collections (e.g., Cochrane Methodology Register, AHRQ Effective Healthcare Program). We will also search LIGHTS and LATITUDES (https://www.latitudes-network.org/), two methodological registries that capture methods guidance and validity assessment tools, respectively [33]. All online registries will be searched using the terms "adaptive clinical trial", "bias", and/or "risk of bias". Words found within the titles, abstracts, and MeSH terms of relevant articles will be used to develop focused search strategies for each database. The reference lists of included studies will also be searched for additional papers. The MEDLINE search will be validated against 10 studies identified by the senior authors prior to screening; eligibility screening will begin only after these 10 studies are retrieved by the search strategy.
All database search strategies are described in S1 File.
The search strategy will be generated by two authors (P.S. and D.O.) alongside a librarian specialist. It will be generated and reviewed in accordance with PRESS (Peer Review Electronic Search Strategies) guidelines [34]. Any concerns with search strategy generation will be raised with a senior methodologist (M.B.). The database search will be conducted without limitations to publication type, status, language, or date to identify existing tools or articles.
Screening and data extraction
First, we will pilot the eligibility criteria in Microsoft Excel (Redmond, WA, USA) by having two independent reviewers evaluate a sample of 25 citations. If high agreement is achieved (≥70%), we will proceed to abstract screening with two reviewers. If not, the eligibility criteria will be re-examined and additional teaching sessions will be provided to the reviewers. All screening and full-text review will be conducted using the web-based application Covidence (http://www.covidence.org; Melbourne, Australia). Study titles and abstracts will then be assessed for relevance and eligibility. All screening and full-text review will be performed in accordance with PRISMA guidelines (S2 File). Any disagreements identified during the screening and review stages will be resolved via discussion until consensus is reached. A third senior reviewer will arbitrate if screening or full-text review disagreements cannot be resolved.
A data extraction form will be generated using Microsoft Excel and piloted by reviewers on five included studies. Two authors will extract data from all included studies. We will first categorize all sources based on our eligibility criteria and we will extract author details, publication year, and study type. For all studies that identified bias items, tools, or quantified methodological bias in ACTs, we will generate the following list of headings: type of tool (e.g., tool, scale, checklist, or domain-based tool); scope of the tool; number of items within the tool; domains within the tool; whether the item relates to reporting or methodological quality; ratings of items and domains within the tool; methods used to develop the tool and the availability of an “explanation and elaboration”. These fields were all derived from Lunny et al. (2021) and Page et al. (2018) [31, 35]. Data will be extracted on items that are relevant to ACTs and all items will initially be extracted verbatim.
Data analysis and reporting
All studies evaluating methodological quality and/or proposing bias tools, items, concepts, or checklists will be collated. These studies will undergo descriptive analyses based upon the previously extracted fields. All bias items will be mapped to corresponding domains within the CONSORT-ACE guidelines, as this is the only known quality tool specific to ACT features. If no risk of bias tools or relevant items are identified in our review, we plan to hypothesize items and formulate a candidate tool. The candidate tool will undergo refinement by the authors and, in subsequent steps, will be further evaluated and refined using a Delphi consensus method prior to undergoing validation and testing. Currently, our plan is to create a standalone risk of bias tool for ACTs that draws inspiration from the domains of the Cochrane RoB 2.0 tool. However, as we proceed to the knowledge user survey, Delphi process, and candidate tool development phases, we will further evaluate the feasibility and advantages of instead developing an extension to the Cochrane RoB 2.0 tool. All statistical analyses will be performed using R software (version 4.3.2; R Foundation for Statistical Computing, Vienna, Austria).
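The collation step can be pictured as a simple grouping operation: each verbatim item is assigned a domain, then items are grouped and counted per domain. In the Python sketch below, both the extracted items and the domain labels are invented placeholders, not the official CONSORT-ACE domains or real extraction results; it illustrates only the descriptive mapping.

```python
from collections import Counter, defaultdict

# Verbatim extracted items paired with a reviewer-assigned domain.
# All strings here are illustrative placeholders, not actual
# extraction results or the official CONSORT-ACE domain names.
extracted_items = [
    ("interim analysis schedule not pre-specified", "pre-planned adaptations"),
    ("sample size re-estimation method unreported", "sample size"),
    ("stopping boundary chosen post hoc", "pre-planned adaptations"),
    ("allocation ratio change not justified", "randomisation"),
    ("no firewall between interim results and investigators", "trial conduct"),
]

def collate_by_domain(items):
    """Group verbatim bias items under their mapped domain and count
    how often each domain is represented across source studies."""
    by_domain = defaultdict(list)
    for item, domain in items:
        by_domain[domain].append(item)
    counts = Counter(domain for _, domain in items)
    return dict(by_domain), counts

by_domain, counts = collate_by_domain(extracted_items)
```

Domains that accumulate many conceptually similar items would be strong candidates for inclusion in the candidate tool, while sparse domains may prompt the hypothesizing of new items.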
Discussion
A comprehensive risk of bias tool is needed to improve the reproducibility and transparency of future ACTs and ACT meta-analyses. A recent scoping review demonstrated that ACTs adhere poorly to the reporting recommendations published in the CONSORT-ACE statement [12]. We will therefore use the framework developed by Whiting and colleagues to develop a new risk of bias tool to improve the quality of ACTs [29]. A risk of bias tool for ACTs is needed because these designs often demonstrate key methodological differences from conventional RCTs, such as interim analyses and adaptive decisions made after the trial has begun patient recruitment [6]. The current repertoire of risk of bias tools used for conventional RCTs does not possess domains that address these unique adaptive design features. Moreover, adaptive decisions made during the running of a clinical trial can increase methodological bias in the final trial analysis, further emphasizing the need for a risk of bias tool for evaluating ACTs. A potential limitation is that we may miss published methodology studies or risk of bias tools for ACTs, but we will counteract this with a broad search strategy spanning peer-reviewed literature databases, grey literature databases, and methodological tool repositories. As computational technologies continue to improve, their role in generating adaptive trial paradigms and performing statistical simulations may revolutionize the future of trial design and medical innovation [36]. We must, therefore, ensure that methodological tools are developed at a similar pace, so that these novel trial designs are standardized, transparent, reproducible, and interpretable for the future.
Supporting information
S2 File. PRISMA flow diagram for prospective article screening and full-text review.
https://doi.org/10.1371/journal.pone.0303315.s002
(DOCX)
References
- 1. Masic I, Miokovic M, Muhamedagic B. Evidence based medicine—new approaches and challenges. Acta Inform Med. 2008;16(4):219–25. pmid:24109156.
- 2. Sibbald B, Roland M. Understanding controlled trials. Why are randomised controlled trials important? BMJ. 1998;316(7126):201. pmid:9468688.
- 3. Nichol AD, Bailey M, Cooper DJ, POLAR and EPO Investigators. Challenging issues in randomised controlled trials. Injury. 2010;41 Suppl 1:S20–3. Epub 20100422. pmid:20413119.
- 4. Pronk AJM, Roelofs A, Flum DR, Bonjer HJ, Abu Hilal M, Dijkgraaf MGW, et al. Two decades of surgical randomized controlled trials: worldwide trends in volume and methodological quality. Br J Surg. 2023;110(10):1300–8. pmid:37379487.
- 5. Chapman SJ, Shelton B, Mahmood H, Fitzgerald JE, Harrison EM, Bhangu A. Discontinuation and non-publication of surgical randomised controlled trials: observational study. BMJ. 2014;349:g6870. Epub 20141209. pmid:25491195.
- 6. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16(1):29. Epub 20180228. pmid:29490655.
- 7. Pushpakom SP, Taylor C, Kolamunnage-Dona R, Spowart C, Vora J, Garcia-Finana M, et al. Telmisartan and Insulin Resistance in HIV (TAILoR): protocol for a dose-ranging phase II randomised open-labelled trial of telmisartan as a strategy for the reduction of insulin resistance in HIV-positive individuals on combination antiretroviral therapy. BMJ Open. 2015;5(10):e009566. Epub 20151015. pmid:26474943.
- 8. Stallard N, Hampson L, Benda N, Brannath W, Burnett T, Friede T, et al. Efficient Adaptive Designs for Clinical Trials of Interventions for COVID-19. Stat Biopharm Res. 2020;12(4):483–97. Epub 20200729. pmid:34191981.
- 9. Alemayehu C, Mitchell G, Nikles J. Barriers for conducting clinical trials in developing countries- a systematic review. Int J Equity Health. 2018;17(1):37. Epub 20180322. pmid:29566721.
- 10. Madani Kia T, Marshall JC, Murthy S. Stakeholder perspectives on adaptive clinical trials: a scoping review. Trials. 2020;21(1):539. Epub 20200617. pmid:32552852.
- 11. Zhu H, Wong WK. An Overview of Adaptive Designs and Some of Their Challenges, Benefits, and Innovative Applications. J Med Internet Res. 2023;25:e44171. Epub 20231016. pmid:37843888.
- 12. Purja S, Park S, Oh S, Kim M, Kim E. Reporting quality was suboptimal in a systematic review of randomized controlled trials with adaptive designs. J Clin Epidemiol. 2023;154:85–96. Epub 20221215. pmid:36528234.
- 13. Dimairo M, Pallmann P, Wason J, Todd S, Jaki T, Julious SA, et al. The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. Trials. 2020;21(1):528. Epub 20200617. pmid:32546273.
- 14. Sterne JAC, Savovic J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898. Epub 20190828. pmid:31462531.
- 15. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, McQuay HJ. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17(1):1–12. pmid:8721797.
- 16. Haile ZT. Critical Appraisal Tools and Reporting Guidelines. J Hum Lact. 2022;38(1):21–7. Epub 20211118. pmid:34791933.
- 17. Mol BW, Lai S, Rahim A, Bordewijk EM, Wang R, van Eekelen R, et al. Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot. Res Integr Peer Rev. 2023;8(1):6. Epub 20230620. pmid:37337220.
- 18. Ferreira D, Barthoulot M, Pottecher J, Torp KD, Diemunsch P, Meyer N. A consensus checklist to help clinicians interpret clinical trial results analysed by Bayesian methods. Br J Anaesth. 2020;125(2):208–15. Epub 20200620. pmid:32571570.
- 19. Park JJH, Harari O, Dron L, Lester RT, Thorlund K, Mills EJ. An overview of platform trials with a checklist for clinical readers. J Clin Epidemiol. 2020;125:1–8. Epub 20200513. pmid:32416336.
- 20. Jung A, Balzer J, Braun T, Luedtke K. Identification of tools used to assess the external validity of randomized controlled trials in reviews: a systematic review of measurement properties. BMC Med Res Methodol. 2022;22(1):100. Epub 20220406. pmid:35387582.
- 21. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152(11):726–32. Epub 20100324. pmid:20335313.
- 22. Nimavat BD, Zirpe KG, Gurav SK. Critical Analysis of a Randomized Controlled Trial. Indian J Crit Care Med. 2020;24(Suppl 4):S215–S22. pmid:33354045.
- 23. Godin K, Dhillon M, Bhandari M. The three-minute appraisal of a randomized trial. Indian J Orthop. 2011;45(3):194–6. pmid:21559097.
- 24. Critical Appraisal Skills Programme (CASP). Randomized-controlled trial CASP tool [online]. 2023 [accessed January 12, 2024]. https://casp-uk.net/.
- 25. Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, et al. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evid Synth. 2020;18(10):2127–33. pmid:33038125.
- 26. Scottish Intercollegiate Guidelines Network (SIGN). Randomized-controlled trial tool [online] [accessed January 12, 2024]. https://www.sign.ac.uk/.
- 27. Clark E, Burkett K, Stanko-Lopp D. Let Evidence Guide Every New Decision (LEGEND): an evidence evaluation system for point-of-care clinicians and guideline development teams. J Eval Clin Pract. 2009;15(6):1054–60. pmid:20367705.
- 28. Buyse M. Limitations of adaptive clinical trials. Am Soc Clin Oncol Educ Book. 2012:133–7. pmid:24451722.
- 29. Whiting P, Wolff R, Mallett S, Simera I, Savovic J. A proposed framework for developing quality assessment tools. Syst Rev. 2017;6(1):204. Epub 20171017. pmid:29041953.
- 30. Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol. 2007;36(3):666–76. Epub 20070430. pmid:17470488.
- 31. Lunny C, Tricco AC, Veroniki AA, Dias S, Hutton B, Salanti G, et al. Methodological review to develop a list of bias items used to assess reviews incorporating network meta-analysis: protocol and rationale. BMJ Open. 2021;11(6):e045987. Epub 20210624. pmid:34168027.
- 32. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;350:g7647. Epub 20150102. pmid:25555855.
- 33. Hirt J, Schonenberger CM, Ewald H, Lawson DO, Papola D, Rohner R, et al. Introducing the Library of Guidance for Health Scientists (LIGHTS): A Living Database for Methods Guidance. JAMA Netw Open. 2023;6(2):e2253198. Epub 20230201. pmid:36787138.
- 34. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol. 2016;75:40–6. Epub 20160319. pmid:27005575.
- 35. Page MJ, McKenzie JE, Higgins JPT. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review. BMJ Open. 2018;8(3):e019703. Epub 20180314. pmid:29540417.
- 36. Cascini F, Beccia F, Causio FA, Melnyk A, Zaino A, Ricciardi W. Scoping review of the current landscape of AI-based applications in clinical trials. Front Public Health. 2022;10:949377. Epub 20220812. pmid:36033816.