Abstract
Background
Given the growth of collaborative care strategies for people with opioid use disorder and the changing composition of the illicit drug supply, there is a need to identify and analyze clinic-level outcomes for centers prescribing opioid agonist treatment (OAT). We aimed to determine and validate whether prescriber networks, constructed with administrative data, can successfully identify distinct clinical practice facilities in Ontario, Canada.
Methods
We executed a retrospective population-based cohort study using OAT prescription records from the Canadian Addiction Treatment Centres in Ontario, Canada between 01/01/2013 and 12/31/2020. Social network analysis was utilized to create networks with connections between physicians based on their shared OAT clients. We defined connections in two ways: using an absolute threshold on the number of shared clients or a relative threshold on the percentage of shared OAT clients per physician. Clinics were identified using modularity maximization, with sensitivity analyses applying the Louvain, Walktrap, and Label Propagation algorithms. Concordance between network-identified facilities and the (gold standard) de-identified facility-level IDs was assessed using overall, positive and negative agreement, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV).
Results
From 144 physicians at 105 clinics with 32,842 OAT clients, we assessed 350 different versions of the created networks. The three alternative detection algorithms showed wide variation in concordance, with sensitivity ranging from 0.02 to 0.88 and PPV from 0.06 to 0.97. The optimal result, derived from the modularity maximization method, achieved high specificity (0.98, 95% CI: 0.98, 0.98) and NPV (0.98, 95% CI: 0.97, 0.98) and moderate PPV (0.54, 95% CI: 0.52, 0.57) and sensitivity (0.45, 95% CI: 0.43, 0.47). This scenario had an overall agreement of 0.96, a negative agreement of 0.98, and a positive agreement of 0.49.
Citation: Kurz M, Tatangelo M, Morin KA, Zanette M, Krebs E, Marsh DC, et al. (2025) Identifying opioid agonist treatment prescriber networks from health administrative data: A validation study. PLoS One 20(5): e0322064. https://doi.org/10.1371/journal.pone.0322064
Editor: Marianna Mazza, Universita Cattolica del Sacro Cuore Sede di Roma, ITALY
Received: October 22, 2024; Accepted: March 15, 2025; Published: May 16, 2025
Copyright: © 2025 Kurz et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data underlying the results presented in this study cannot be shared in a public repository due to privacy restrictions, but are available from the Canadian Addiction Treatment Centres (CATC; https://canatc.ca/) upon reasonable request. Requests for access to the data can be directed to April Gamache, Chief Executive Officer of CATC (email: agamache@canatc.ca). The CATC is located at 175 Commerce Valley Drive West, Suite 300, Markham, Ontario, L3T 7P6, and can be reached toll-free at 1-877-937-2282. Further details on the data are available in the cohort profile: Morin KA, Tatangelo M, Marsh D. Canadian Addiction Treatment Centre (CATC) opioid agonist treatment cohort in Ontario, Canada. BMJ Open 2024;14:e080790. doi: 10.1136/bmjopen-2023-080790. All the statistical codes used for the analysis are available in the following repository: https://github.com/HERU-modeling/Network-Validation/.
Funding: This work was funded by the Canadian Institutes of Health Research award number no. PJT-190265 and National Institutes of Health/National Institute on Drug Abuse award no. R01-DA050629. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: DM previously held the position of Chief Medical Director at CATC ending in June 2022 and maintains the role of OAT provider. DM had no ownership stake in the CATC as a stipendiary employee. We do not foresee any conflict of interest as findings will be made freely available to the public and the CATC, and neither the Universities nor CATC can prevent publication and dissemination of knowledge. There are no patents or products related to this submission. There are no competing interests that would alter my co-authors' and my adherence to PLOS ONE policies for sharing data and materials.
Introduction
Given the growing interest in the development of collaborative and continuing care strategies for people with opioid use disorder (PWOUD) [1–4], there is considerable clinical and scientific importance associated with validating prescriber networks and subsequently understanding the benefit of physician networks on treatment retention. As defined by the Agency for Healthcare Research and Quality, care coordination is the “deliberate organization of patient care activities between two or more participants involved in a patient’s care to facilitate appropriate delivery of health care services” [5]. Opioid agonist treatment (OAT) is the standard treatment for opioid use disorder and improving treatment retention has been a key component in public health strategies to reduce the risk of overdose and subsequent mortality among PWOUD [6–10]. While structural and individual-level characteristics have an impact on retention in OAT [11], clinic-level characteristics, such as receiving care at a specialized treatment center or hospital [12], are also important determinants of OAT retention [11,13–17] which are directly amenable to systemic intervention.
Within Canada, 80% of all Canadians receiving OAT reside in Ontario or British Columbia (BC) [18]. The largest treatment provider organization in Ontario is the Canadian Addiction Treatment Centre (CATC), which operates outpatient and inpatient treatment, with specialized pharmacies [19]. In BC, OAT is provided through specialized treatment centres, such as rapid access to addiction care clinics, community health centers and physicians’ private offices. Like Ontario, BC boasts comprehensive linked health administrative data; however, clinic-level identifiers are not available within these databases [20], thus limiting the opportunity for clinic-level comparisons in practice patterns and outcomes. Federal and Provincial governments have proposed increasing connections to care and collaborative care practices as interventions to improve outcomes for individuals with substance use disorders [21–23]. The Canadian Centre on Substance Abuse recommends single assessment processes, shared medical records and centralized access points as mechanisms to work towards collaborative care [24].
Evaluating the performance of collaborative care and speciality clinics can be labour-intensive and thus cost-prohibitive when conducted through observations of practices, qualitative interviews and surveys. The use of administrative data may be feasible for routine, ongoing evaluation, but many datasets lack or suppress facility-level identifiers, including only de-identified provider information. Two previous studies conducted in the United States demonstrated that patient-sharing, identified using administrative data, is an informative measure for determining relationships between physicians [25,26]. More recently, a study used Medicare data to identify shared care between primary care providers [27]. Given the case mix complexity and potential need for multidisciplinary care for people presenting with OUD, these methods require validation within this population and care context to ensure accuracy. However, no studies have validated the use of social network analysis to measure relationships among OAT physicians, and further identify clinics, when administrative data lack identifying information on facilities or survey data are unavailable. The inclusion of de-identified clinic identifiers in Ontario’s CATC data presents an opportunity to fill this gap.
Social network analysis can be further applied to identify the impact of these networks on patient outcomes and health care system performance through the evaluation of care coordination [5,28,29]. While care coordination is challenging to measure through administrative claims data, “care density” is a surrogate measure which suggests that indirectly, through having visits with the same patients, physicians develop relationships that result in opportunities for direct communication and information sharing that may lower barriers to care coordination and ultimately lower costs [5]. Other applications of social network analysis include improving the implementation of new policies or clinical guidelines through network interventions that utilize the theory of diffusion of innovation by identifying and targeting individuals or groups for behaviour change [30,31]. Validating the identification of clinics via administrative data provides the groundwork to identify and target groups of clinicians on a clinic-by-clinic basis for a segmentation style network intervention [30] while utilizing the lower evaluation cost associated with administrative data.
We aimed to determine whether prescriber networks, constructed via de-identified physician IDs in daily drug dispensation records, can successfully identify distinct clinical practices. We employ electronic medical record data from the largest network of physicians providing OAT in Canada (a group of 105 OAT clinics in Ontario, Canada) to assess the concordance between constructed prescriber networks and actual relationships between physicians.
Methods
Study design
To validate the physician communities identified through the network analysis, we used whether a physician pair treated any clients at the same clinic as the gold standard and compared it against the community (network-identified clinic) assigned to that pair through our analysis. A pairing was concordant if the physicians prescribed to clients at the same clinic and were assigned to the same community, or if they did not care for clients at the same clinic and were assigned to different communities.
Study population & data sources
Electronic medical records from CATC were used to create a retrospective population-based cohort. CATC is a network of 105 treatment centers in Ontario, Canada that captures OAT prescriptions, clinic identifiers, prescriber identifiers and client identifiers [19]. The cohort comprised all physicians with a prescription from a clinic within the CATC network between January 2013 and December 2020. We defined a physician-clinic connection as any OAT prescriber with a record in a given clinic. A physician- or clinic-client connection was defined as any OAT prescription between the physician and/or clinic and the OAT client.
Data collection and ethics approval
Data were obtained on February 1st, 2021, from CATC. The data were de-identified and all individual-level characteristics were removed prior to analysis. We received approval from the Laurentian University Research and Ethics Board for this study (Approval number 6020852).
Prescriber Network construction
We used de-identified clinic-level identifiers in the CATC dataset, based on prescription records of the prescribing physician, as our gold standard indication of a prescriber’s clinic membership. Social network analysis was utilized to create a network with connections between physicians based on their shared OAT clients. A network is defined as a set of ‘actors’ (prescribers) with ‘relational ties’ (shared clients) [32]. As the network was constructed at the prescriber level, a ‘connection’ between two prescribers entailed prescription to at least one shared OAT client. Within the network, the connections were weighted to represent the total number of shared OAT clients, such that the greater the weighted tie, the more clients were shared between prescribers. Sub-network groups, commonly referred to as ‘communities’ in the network literature, were identified based on statistically non-random connections using modularity maximization [33]. This involved assigning prescribers to disjoint communities by comparing the differences between the true number of shared clients and the number expected by chance, given each prescriber’s client load. We applied the modularity maximization procedure recursively to identify smaller sub-groups more likely to be representative of clinics, as seen in Stein et al (2017) [34].
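As an illustration of this construction, a weighted shared-client network and a single pass of modularity maximization can be sketched with networkx. All physician and client identifiers below are synthetic, and this is not the study's own implementation (which is available in the linked GitHub repository); the paper additionally applies the procedure recursively within each detected community.

```python
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical records: client ID -> set of physicians who prescribed OAT to them.
records = {
    "c1": {"drA", "drB"}, "c2": {"drA", "drB"}, "c3": {"drB"},
    "c4": {"drC", "drD"}, "c5": {"drC", "drD"}, "c6": {"drD"},
}

G = nx.Graph()
for physicians in records.values():
    G.add_nodes_from(physicians)
    for p, q in combinations(sorted(physicians), 2):
        if G.has_edge(p, q):
            G[p][q]["weight"] += 1  # one more shared OAT client for this pair
        else:
            G.add_edge(p, q, weight=1)

# One pass of modularity maximization; recursing on each community
# would yield the smaller clinic-sized sub-groups described above.
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```

Here the two prescriber pairs share no clients across pairs, so modularity maximization separates them into two communities, {drA, drB} and {drC, drD}.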
Statistical analysis
Concordance was assessed through several measures, including percent overall agreement, positive agreement, negative agreement and Gwet’s agreement coefficient (AC1). Gwet’s AC1 was chosen over Cohen’s Kappa as it is more reliable in scenarios with low prevalence and imbalanced marginal probabilities [35]. We also calculated the sensitivity (probability for a physician pair to be grouped in a network-identified community together when they are attached to the same clinic), specificity (probability for a physician pair to be in separate network-identified communities when they do not practice at the same clinics), positive predictive value (PPV; probability of a physician pair practicing at the same clinic when they are grouped in a network-identified community) and negative predictive value (NPV; probability of a physician pair practicing at different clinics when they are in different network-identified communities). Following previous studies that validated network-identified clinics [27,36] we also estimated recall (the percentage of physicians in the network-identified clinic that are in the true clinic), 1-purity (percentage of physicians in the network-identified clinic that are not in the true clinic) and the F-measure (the harmonic mean of purity and recall values of each network-identified clinic, where a maximum value of 1 indicates perfect agreement) for the network-identified clinics.
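Each of the pairwise measures above follows from a 2x2 classification of physician pairs (same clinic vs. same community). A minimal sketch with hypothetical assignments, omitting Gwet's AC1 and the cluster-level recall/purity/F measures:

```python
from itertools import combinations

# Hypothetical gold-standard clinics and network-assigned communities.
true_clinic = {"drA": 1, "drB": 1, "drC": 2, "drD": 2, "drE": 3}
community = {"drA": 10, "drB": 10, "drC": 20, "drD": 30, "drE": 30}

tp = fp = fn = tn = 0
for p, q in combinations(sorted(true_clinic), 2):
    same_clinic = true_clinic[p] == true_clinic[q]  # gold standard
    same_comm = community[p] == community[q]        # network assignment
    if same_clinic and same_comm:
        tp += 1  # concordant: together in both
    elif not same_clinic and same_comm:
        fp += 1  # grouped together but practice at different clinics
    elif same_clinic and not same_comm:
        fn += 1  # split apart despite sharing a clinic
    else:
        tn += 1  # concordant: apart in both

sensitivity = tp / (tp + fn)  # P(same community | same clinic)
specificity = tn / (tn + fp)  # P(different community | different clinic)
ppv = tp / (tp + fp)          # P(same clinic | same community)
npv = tn / (tn + fn)          # P(different clinic | different community)
overall_agreement = (tp + tn) / (tp + fp + fn + tn)
```

In this toy example the pair (drA, drB) is a true positive, (drC, drD) a false negative, (drD, drE) a false positive, and the remaining seven pairs are true negatives.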
Several sensitivity analyses were executed to ensure the results were robust. We applied several alternative definitions of ties in the network, including different minimum numbers of shared clients (absolute thresholds), retaining only the top 20–80 percent of a physician’s weighted ties (based on shared clients; relative threshold), and ties defined on shared OAT clients treated during the same OAT episode (defined as continuous OAT without treatment gaps lasting at least 5 days for methadone or 6 days for buprenorphine/naloxone [37]). We also applied several different definitions of physician-clinic connection, where we defined a physician-clinic connection if the clinic was where the physician prescribed to the majority of their OAT clients, or prescribed to more than 15, 30 or 45 percent of their total OAT clients. In addition to the modularity maximization method to identify disjoint communities, we compared the Walktrap [38] and Louvain [39] methods to test different identification algorithms, as they have been used in other community detection studies [26,27]. We also compared the Label Propagation [40] algorithm, although it has not been used in community detection studies among prescribers. Lastly, we used several different recursive iterations of the modularity maximization procedure to define the smaller communities. To identify which algorithm was best for detection of communities, we prioritized sensitivity and positive predictive value, while expecting that specificity and negative predictive value should be high due to the large number of true physician pairs that do not care for clients at the same clinics.
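The relative tie threshold can be sketched as a pruning step on the weighted network: an edge survives only if it ranks within the top fraction of weighted ties for both physicians (mirroring the "top 80% of both physicians' shared clients" definition reported in the results; the tie-breaking and rounding below are our assumptions, and all weights are synthetic).

```python
import networkx as nx

def top_ties(G: nx.Graph, node, frac: float) -> set:
    """Neighbours within the top `frac` of a node's ties, ranked by weight."""
    nbrs = sorted(G[node], key=lambda n: G[node][n]["weight"], reverse=True)
    keep = max(1, round(frac * len(nbrs)))  # assumption: keep at least one tie
    return set(nbrs[:keep])

def relative_threshold(G: nx.Graph, frac: float = 0.8) -> nx.Graph:
    """Retain an edge only if it is a top tie for BOTH endpoints."""
    H = nx.Graph()
    H.add_nodes_from(G)  # physicians with no surviving ties stay as isolates
    for u, v, data in G.edges(data=True):
        if v in top_ties(G, u, frac) and u in top_ties(G, v, frac):
            H.add_edge(u, v, **data)
    return H

# Synthetic weighted prescriber network (weights = shared OAT clients).
G = nx.Graph()
G.add_weighted_edges_from([
    ("drA", "drB", 12), ("drA", "drC", 10), ("drA", "drD", 1),
    ("drB", "drC", 8),
])
H = relative_threshold(G, frac=0.5)
```

With a 50% threshold, drA's weakest tie (to drD) is dropped, and the drB-drC tie is dropped because drB's only top tie is to drA; community detection would then run on the pruned network H.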
Results
Characteristics of physicians, clinics and identified facilities
We included 144 physicians treating 32,842 OAT clients at 105 clinics over the 7-year study period and assessed 350 different versions of the created networks (S1 Table). Clinics were identified on an annual basis. Between January 1st and December 31st, 2020, there were 59 CATC clinics that treated OAT clients, and we identified 36 communities when physician-clinic connection was defined as at least 15% of the physician’s client load (Table 1, S2 Table). Among these communities, recall (the proportion of physicians identified in a community who were also in the administratively identified clinic) had a median of 50%, and 1-purity (the proportion of physicians identified in a community who were not in the administratively identified clinic) had a median of 0.
Among the 36 communities, we present three clinics and their matching network-identified practices as an example visualization (Fig 1). Clinic #25 was identified in the networks with no additional or missing physicians; however, because its two overlapping physician sub-communities shared no clients, they were identified as two separate, distinct clusters. The network-identified clinic corresponding to Clinic #154 missed three physicians; however, all three (labelled H, G, and F) had small client loads (one to four clients). Finally, Clinic #149 missed two physicians: “H” had a small client load (4 clients, all treated at different clinics), and “I” had a large client load but treated more of it at a different clinic.
The shared patient network (left) is created from prescription records, with connections defined as physicians prescribing to at least one shared patient. The clinic network (right) is identified through physician-clinic connections in the Canadian Addiction Treatment Centre (CATC) administrative records. Physicians are labelled according to their unique practice. Edge thickness is the log of the number of shared clients plus 1. Vertex size is the log of the physician’s client load plus 1. *Practice 25 spans two different network-identified clinics, as its physicians cannot be grouped without shared clients.
Concordance statistics for network-identified clinics
When varying the detection algorithms and physician-clinic connection definitions, with ties defined under the relative threshold at 80%, sensitivity estimates ranged from 0.02 to 0.93 and PPV estimates ranged from 0.04 to 0.98 (Fig 2). Specificity estimates ranged from 0.49 to 1.00 and NPV estimates ranged from 0.44 to 1.00. Results from the Walktrap, Louvain, and Label Propagation algorithms had lower specificity and PPV compared to the modularity maximization algorithm. The Louvain algorithm had slightly higher estimates in all categories than Walktrap, except when physician-clinic connection was defined as any OAT client treated through the clinic. Label Propagation had the highest sensitivity, with the drawbacks of the lowest PPV and specificity. Among the modularity maximization algorithms, the fourth iteration’s 0.03 increase in PPV resulted in a 0.12 reduction in sensitivity. Altering the definition of a true physician-clinic connection from the percentage of a physician’s client load to any clients treated at the clinic reduced sensitivity to near zero, and NPV to near 0.50 (Fig 2, S3 Table).
Abbreviations: MM – Modularity maximization.
We altered the tie definition to assess concordance among all tie classifications, with the community detection algorithm set to modularity maximization with two iterations and physician-clinic connection defined at the 15% level (Table 2). Overall agreement and negative agreement were unchanged at 0.96 and 0.98, respectively, across all definitions. Three different tie definitions featured the highest positive agreement of 0.49. When seeking the best balance among the four validity measures, the solution with high specificity (0.98 (0.98, 0.98)), high NPV (0.98 (0.97, 0.98)), medium PPV (0.54 (0.52, 0.57)) and medium sensitivity (0.45 (0.43, 0.47)) was reached under two different tie definitions with the same result: a relative percentage defined as the top 80% of both physicians’ shared clients, or an absolute threshold of at least one shared client. Gwet’s AC1 estimates for both definitions were 0.96 (0.95, 0.96). Further, these were the highest performing of all 350 versions of the detection algorithms, physician-clinic connections and tie definitions (S1 and S2 Fig and S3 Table). Other algorithms with higher PPV had low or near-zero estimates for sensitivity or only medium NPV.
Discussion
Using a previously validated methodology [26,27,36], we compared 350 different algorithms to validate the identification of OAT clinics from administrative data by creating prescriber networks from shared clients. Using a relative percentage to define network ties with two or three recursive iterations of the modularity maximization algorithm were identified as the best performing strategies for constructing network-defined clinics due to high specificity and NPV and moderate PPV and sensitivity. The misclassification from the algorithm occurred when physicians had low client loads or comparably high client loads across multiple clinics.
Generally, use of administrative databases for evaluation and quality improvement has limitations, particularly missing information and the lack of identifying indicators for clinical sites and referral networks. These limitations are exceptionally relevant for clinical OUD management in the fentanyl era, when rapidly accessible specialty addictions clinics [41] and collaborative care [1,4] have been introduced as one component to combat the declines in treatment retention and persistently high rates of overdose death among people who use drugs [8,42–44]. By validating the identification of clinics, we provide an opportunity to distinguish and evaluate clinical caseloads and clinic-level performance in databases that lack clinic-level identifiers, thus improving the use of administrative health records for ongoing evaluation and quality improvement, particularly for opioid use disorder. Additionally, in other clinical areas, care coordination, measured through indicators such as care density and centrality, has been shown to have an impact on rates of hospitalization and health care costs, as well as on the referral and prescribing practices of physicians [5,28,29,45]. A previous study in the United States analyzed insurance claims to demonstrate that patients receiving care from doctors with higher levels of shared patients (i.e., higher care density) had significantly lower total costs and rates of hospitalization [28]. Similarly, another study in the United States applied network analysis to demonstrate that higher care density is associated with a reduction in repeated co-prescription of interacting drugs by multiple providers [29]. Centrality, which describes the importance or influence of a node within a network, is another indicator of coordination [46]. For example, Barnett et al. compared the centrality of primary care physicians (PCPs) to that of other physicians and found that higher PCP centrality within constructed hospital networks was associated with fewer medical specialist visits, as well as lower spending on imaging and tests [45]. Validation of network-constructed clinics may therefore have broader implications and applicability in programmatic and health system evaluation.
Identifying facilities with administrative data also provides an opportunity for enhanced control of confounding that may otherwise go unmeasured in comparative effectiveness studies when clinic-level identifiers are lacking. A previous study that identified predictors of selection into OAT estimated individual- and prescriber-level impacts with residual variance between prescribers [47]. This unexplained variance could be related to clinic-level characteristics that could not previously be determined. Furthermore, measures of prescribing preference are commonly used in comparative effectiveness studies with instrumental variable analysis to address unmeasured confounding by indication [48–52]. A 2021 systematic review of preference-based instrumental variable designs found that, of 185 articles, 40% had implemented a facility-level measure of prescribing preference [50], highlighting another important use for identifying clinics in administrative data that lack facility-level identifiers. Additionally, as physicians’ peers’ prescribing habits have been shown to impact treatment uptake and selection of medications [53–55], these outcomes could be improved by identifying the impact of clinic-level prescribing practices.
The results from our chosen ‘best-performing’ settings had an estimated PPV of 0.54 and sensitivity of 0.45, comparable to the study by Kuo et al. on US Medicare clients, with similar sensitivity but slightly lower PPV [27]. Specifically, when Kuo et al. defined a tie as at least 14 shared patients, they reported nearly the same sensitivity. However, we had much higher estimates for specificity and NPV. This may be explained by our administrative data containing a higher proportion of physician pairs who do not practice at the same clinics; such pairs are easily identified when they do not share patients. Another previous study had similar results with a shared-patient threshold of 3, yielding a PPV of 0.60, sensitivity of 0.57, and specificity of 0.77; however, their NPV was lower at 0.76, and their goal was to identify referral relationships, not team-based clinics [25]. Overall, our results align with other studies and confirm that administrative databases can be used to identify physician relationships and clinics.
Prior studies building physician networks from United States Medicare data used an absolute shared-patient threshold [25,27,36]. However, we found that when network ties were defined with relative percentage thresholds, the results were concordant across all of the percentage-based definitions. As such, and consistent with the findings and recommendations of Landon et al [26], our results suggest using relative thresholds over absolute shared-patient thresholds for network construction, to accommodate diverse clinical contexts with varying patient loads and case complexities; absolute thresholds will also vary across specialties [26]. While Landon et al did not report their results and comparisons, they concluded that defining a connection based on a relative threshold of the top 20% of ties was the best option [26]. Other studies evaluating prescriber networks defined connections based on episodes of care [56]. We found that episodes of care produced results similar to relative thresholds, suggesting that both definitions create valid prescriber networks.
Overall, our study found that the Walktrap and Louvain algorithms underperformed compared to the modularity maximization algorithm. While there is no precedent in the prescriber network literature for evaluating different community detection algorithms, other complex network studies have compared detection algorithms [57–61]. No single algorithm prevails as optimal for community detection across different types of networks (i.e., small and large networks) and performance measures. For example, the Louvain algorithm provided the best fit in two studies [57,61], while the modularity maximization algorithm was superior in Berardo de Sousa et al [58]. Our modularity maximization method differed in that we followed Stein et al’s approach of recursive application to find smaller communities. After the third iteration, we found that the minimal increase in PPV resulted in a larger decrease in sensitivity, suggesting that higher iterations are unnecessary. This aligns with Stein et al, who found that the best split for minimum community sizes with the minimum number of total communities was 3.3, consistent with our result that the second or third iteration was best [34].
Our study was not without limitations. First, as physicians treated clients at multiple OAT clinics in Ontario, the network-identified communities will necessarily contain a degree of misclassification, because a physician can only be assigned to a single community. To address this concern, we used multiple definitions of physician-clinic connection. In other settings and provinces physicians often work in multiple clinics, and we found the algorithm captures true groupings for clinics accounting for at least 15% of a physician’s client load. The algorithm may not be reliable at detecting clinics that exclusively or primarily employ prescribers with a small number of clients relative to their overall client load. Future analyses should employ other sensitivity analyses, such as different detection algorithms and connection definitions, to ensure clinic identification is consistent. While physicians often work in multiple clinics, our methodology required prescribers to be assigned to only one clinic. Algorithms to detect overlapping communities within weighted networks exist, such as the Speaker-listener Label Propagation Algorithm [62]; however, they have not been used in the prior prescriber network literature and are not widely implemented in software. Second, for the express purpose of defining OAT clinics, we focused only on OAT prescription records. It is possible that some OAT physicians also work at non-OAT clinics and share non-OAT patients, which we would not capture. The methodology can be extended in other applications to capture any prescriptions, or any records of care provision, between shared clients. We note, however, the unique benefits of using OAT prescription records, given requirements for frequent (weekly or biweekly) prescription renewals.
We note that the method was validated to identify primary care offices in Texas using outpatient care records from US Medicare data [27], and communities within 51 different hospital referral regions using inpatient and outpatient care records from US Medicare data [26]. Elsewhere, it has been implemented to identify hospital referral regions in England using hospital admission records to measure infection transfers between hospitals [63].
This analysis examined OAT prescriber network characteristics with the aim of validating the use of administrative data for identifying OAT prescriber networks. Identifying OAT prescriber networks from health administrative data can help determine the impact of physician networks on OAT outcomes and provide recommendations to improve collaborative care.
Supporting Information
S1 Table. Characteristics of the 350 different networks created each calendar year.
https://doi.org/10.1371/journal.pone.0322064.s001
(DOCX)
S2 Table. Summary of characteristics for network-identified clinics and administrative identified clinics between 2013 and 2019.
https://doi.org/10.1371/journal.pone.0322064.s002
(DOCX)
S3 Table. Concordance between physician pairs’ assigned network identified community and the gold-standard administrative identified clinic for different algorithm settings.
https://doi.org/10.1371/journal.pone.0322064.s003
(DOCX)
S1 Fig. Sensitivity and positive predictive value estimates.
https://doi.org/10.1371/journal.pone.0322064.s004
(DOCX)
S2 Fig. Specificity and negative predictive value estimates.
https://doi.org/10.1371/journal.pone.0322064.s005
(DOCX)
References
- 1. Korthuis PT, McCarty D, Weimer M, Bougatsos C, Blazina I, Zakher B, et al. Primary care-based models for the treatment of opioid use disorder: a scoping review. Ann Intern Med. 2017;166(4):268–78. pmid:27919103
- 2. Blanco C, Volkow ND. Management of opioid use disorder in the USA: present status and future directions. Lancet. 2019;393(10182):1760–72. pmid:30878228
- 3. Brooklyn JR, Sigmon SC. Vermont hub-and-spoke model of care for opioid use disorder: development, implementation, and impact. J Addict Med. 2017;11(4):286–92. pmid:28379862
- 4. Humphreys K, Shover CL, Andrews CM, Bohnert ASB, Brandeau ML, Caulkins JP, et al. Responding to the opioid crisis in North America and beyond: recommendations of the stanford-lancet commission. Lancet. 2022;399(10324):555–604. pmid:35122753
- 5. Bynum JPW, Ross JS. A measure of care coordination?. J Gen Intern Med. 2013;28(3):336–8. pmid:23179970
- 6. Sordo L, Barrio G, Bravo MJ, Indave BI, Degenhardt L, Wiessing L, et al. Mortality risk during and after opioid substitution treatment: systematic review and meta-analysis of cohort studies. BMJ. 2017;357:j1550. pmid:28446428
- 7. Buster MCA, van Brussel GHA, van den Brink W. An increase in overdose mortality during the first 2 weeks after entering or re-entering methadone treatment in Amsterdam. Addiction. 2002;97(8):993–1001. pmid:12144602
- 8. Pearce LA, Min JE, Piske M, Zhou H, Homayra F, Slaunwhite A, et al. Opioid agonist treatment and risk of mortality during opioid overdose public health emergency: population based retrospective cohort study. BMJ. 2020;368:m772. pmid:32234712
- 9. Stone AC, Carroll JJ, Rich JD, Green TC. One year of methadone maintenance treatment in a fentanyl endemic area: Safety, repeated exposure, retention, and remission. J Subst Abuse Treat. 2020;115:108031. pmid:32600619
- 10. Peles E, Linzy S, Kreek M, Adelson M. One-year and cumulative retention as predictors of success in methadone maintenance treatment: a comparison of two clinics in the United States and Israel. J Addict Dis. 2008;27(4):11–25. pmid:19042587
- 11. Shields CG, Fuzzell LN, Christ SL, Matthias MS. Patient and provider characteristics associated with communication about opioids: An observational study. Patient Educ Couns. 2019;102(5):888–94. pmid:30552013
- 12. Jones NR, Nielsen S, Farrell M, Ali R, Gill A, Larney S, et al. Retention of opioid agonist treatment prescribers across New South Wales, Australia, 2001-2018: Implications for treatment systems and potential impact on client outcomes. Drug Alcohol Depend. 2021;219:108464. pmid:33360851
- 13. Peles E, Schreiber S, Adelson M. Factors predicting retention in treatment: 10-year experience of a methadone maintenance treatment (MMT) clinic in Israel. Drug Alcohol Depend. 2006;82(3):211–7. pmid:16219428
- 14. Kelly SM, O’Grady KE, Mitchell SG, Brown BS, Schwartz RP. Predictors of methadone treatment retention from a multi-site study: a survival analysis. Drug Alcohol Depend. 2011;117(2–3):170–5. pmid:21310552
- 15. Gauthier G, Eibl JK, Marsh DC. Improved treatment-retention for patients receiving methadone dosing within the clinic providing physician and other health services (onsite) versus dosing at community (offsite) pharmacies. Drug Alcohol Depend. 2018;191:1–5. pmid:30064001
- 16. Darker CD, Ho J, Kelly G, Whiston L, Barry J. Demographic and clinical factors predicting retention in methadone maintenance: results from an Irish cohort. Ir J Med Sci. 2016;185(2):433–41. pmid:26026953
- 17. Caplehorn JR, Lumley TS, Irwig L. Staff attitudes and retention of patients in methadone maintenance programs. Drug Alcohol Depend. 1998;52(1):57–61. pmid:9788007
- 18. Eibl JK, Morin K, Leinonen E, Marsh DC. The state of opioid agonist therapy in Canada 20 years after federal oversight. Can J Psychiatry. 2017;62(7):444–50. pmid:28525291
- 19. Morin KA, Tatangelo M, Marsh D. Canadian Addiction Treatment Centre (CATC) opioid agonist treatment cohort in Ontario, Canada. BMJ Open. 2024;14(2):e080790. pmid:38401902
- 20. Homayra F, Pearce LA, Wang L, Panagiotoglou D, Sambo TF, Smith N, et al. Cohort profile: The provincial substance use disorder cohort in British Columbia, Canada. Int J Epidemiol. 2021;49(6):1776. pmid:33097934
- 21. Public Health Agency of Canada. Statement from the chief public health officer: Pharmacists help address the opioid public health crisis in Canada. 2017. [cited 2018 Sep 26]. https://www.canada.ca/en/public-health/news/2017/03/statement_from_thechiefpublichealthofficerpharmacistshelpaddress.html.
- 22. Public Health Agency of Canada. Statement from the Co-Chairs of the Special Advisory Committee on the Epidemic of Opioid Overdoses on updates to opioid-related mortality data. Ottawa, ON; 2018 [cited 2018]. Available from: https://www.canada.ca/en/public-health/news/2018/03/statement-from-the-co-chairs-of-thespecial-advisory-committee-on-the-epidemic-of-opioid-o.html
- 23. Health Canada. Collaboration for addiction and mental health care: Best advice (report). 2015.
- 24. Addiction and Mental Health Collaborative Project Steering Committee. Collaboration for addiction and mental health care: Best advice. Ottawa, ON: Canadian Centre on Substance Abuse; 2015.
- 25. Barnett ML, Landon BE, O’Malley AJ, Keating NL, Christakis NA. Mapping physician networks with self-reported and administrative data. Health Serv Res. 2011;46(5):1592–609. pmid:21521213
- 26. Landon BE, Onnela J-P, Keating NL, Barnett ML, Paul S, O’Malley AJ, et al. Using administrative data to identify naturally occurring networks of physicians. Med Care. 2013;51(8):715–21. pmid:23807593
- 27. Kuo Y-F, Raji MA, Lin Y-L, Ottenbacher ME, Jupiter D, Goodwin JS. Use of medicare data to identify team-based primary care: is it possible?. Med Care. 2019;57(11):905–12. pmid:31568165
- 28. Pollack CE, Weissman GE, Lemke KW, Hussey PS, Weiner JP. Patient sharing among physicians and costs of care: a network analytic approach to care coordination using claims data. J Gen Intern Med. 2013;28(3):459–65. pmid:22696255
- 29. Ong M-S, Olson KL, Chadwick L, Liu C, Mandl KD. The impact of provider networks on the co-prescriptions of interacting drugs: a claims-based analysis. Drug Saf. 2017;40(3):263–72. pmid:28000151
- 30. Valente TW. Network interventions. Science. 2012;337(6090):49–53. pmid:22767921
- 31. Sabot K, Wickremasinghe D, Blanchet K, Avan B, Schellenberg J. Use of social network analysis methods to study professional advice and performance among healthcare providers: a systematic review. Syst Rev. 2017;6(1):208. pmid:29058638
- 32. Wasserman S, Faust K. Social network analysis: methods and applications. Cambridge: Cambridge University Press; 1994.
- 33. Newman MEJ. Modularity and community structure in networks. Proc Natl Acad Sci U S A. 2006;103(23):8577–82. pmid:16723398
- 34. Stein BD, Mendelsohn J, Gordon AJ, Dick AW, Burns RM, Sorbero M, et al. Opioid analgesic and benzodiazepine prescribing among Medicaid-enrollees with opioid use disorders: The influence of provider communities. J Addict Dis. 2017;36(1):14–22. pmid:27449904
- 35. Honda C, Ohyama T. Homogeneity score test of AC1 statistics and estimation of common AC1 in multiple or stratified inter-rater agreement studies. BMC Med Res Methodol. 2020;20(1):20. pmid:32020851
- 36. Kuo Y-F, Lin Y-L, Jupiter D. How to identify team-based primary care in the United States using medicare data. Med Care. 2021;59(2):118–22. pmid:33273297
- 37. British Columbia Centre on Substance Use, BC Ministry of Health. A guideline for the clinical management of opioid use disorder. British Columbia Centre on Substance Use; 2017.
- 38. Pons P, Latapy M. Computing communities in large networks using random walks. In: Computer and Information Sciences - ISCIS 2005. Berlin, Heidelberg: Springer; 2005.
- 39. Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech. 2008;2008(10):P10008.
- 40. Raghavan UN, Albert R, Kumara S. Near linear time algorithm to detect community structures in large-scale networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2007;76(3 Pt 2):036106. pmid:17930305
- 41. Corace K, Thavorn K, Suschinsky K, Willows M, Leece P, Kahan M, et al. Rapid access addiction medicine clinics for people with problematic opioid use. JAMA Netw Open. 2023;6(11):e2344528. pmid:37991762
- 42. Kurz M, Min JE, Dale LM, Nosyk B. Assessing the determinants of completing OAT induction and long-term retention: A population-based study in British Columbia, Canada. J Subst Abuse Treat. 2022;133:108647. pmid:34740484
- 43. Piske M, Zhou H, Min JE, Hongdilokkul N, Pearce LA, Homayra F, et al. The cascade of care for opioid use disorder: a retrospective study in British Columbia, Canada. Addiction. 2020;115(8):1482–93. pmid:31899565
- 44. BC Coroners Service. Unregulated drug deaths in BC. 2023 [updated 2023 Sep 18; cited 2023 Oct 11]. Available from: https://app.powerbi.com/view?r=eyJrIjoiYTdiOGJlMmYtZTBmMC00N2FlLWI2YmYtMDIzOTY5NzkwODViIiwidCI6IjZmZGI1MjAwLTNkMGQtNGE4YS1iMDM2LWQzNjg1ZTM1OWFkYyJ9
- 45. Barnett ML, Christakis NA, O’Malley J, Onnela J-P, Keating NL, Landon BE. Physician patient-sharing networks and the cost and intensity of care in US hospitals. Med Care. 2012;50(2):152–60. pmid:22249922
- 46. DuGoff EH, Fernandes-Taylor S, Weissman GE, Huntley JH, Pollack CE. A scoping review of patient-sharing network studies using administrative data. Transl Behav Med. 2018;8(4):598–625. pmid:30016521
- 47. Homayra F, Hongdilokkul N, Piske M, Pearce LA, Zhou H, Min JE, et al. Determinants of selection into buprenorphine/naloxone among people initiating opioid agonist treatment in British Columbia. Drug Alcohol Depend. 2020;207:107798. pmid:31927163
- 48. Swanson SA, Hernán MA. Commentary: how to report instrumental variable analyses (suggestions welcome). Epidemiology. 2013;24(3):370–4. pmid:23549180
- 49. Davies NM, Smith GD, Windmeijer F, Martin RM. Issues in the reporting and conduct of instrumental variable studies: a systematic review. Epidemiology. 2013;24(3):363–9. pmid:23532055
- 50. Widding-Havneraas T, Chaulagain A, Lyhmann I, Zachrisson HD, Elwert F, Markussen S, et al. Preference-based instrumental variables in health research rely on important and underreported assumptions: a systematic review. J Clin Epidemiol. 2021;139:269–78. pmid:34126207
- 51. Chen Y, Briesacher BA. Use of instrumental variable in prescription drug research with observational data: a systematic review. J Clin Epidemiol. 2011;64(6):687–700. pmid:21163621
- 52. Homayra F, Enns B, Min JE, Kurz M, Bach P, Bruneau J, et al. Comparative analysis of instrumental variables on the assignment of buprenorphine/naloxone or methadone for the treatment of opioid use disorder. Epidemiology. 2024;35(2):218–31. pmid:38290142
- 53. Kurz M, Guerra-Alejos BC, Min JE, Barker B, Pauly B, Urbanoski K, et al. Influence of physician networks on the implementation of pharmaceutical alternatives to a toxic drug supply in British Columbia. Implement Sci. 2024;19(1):3. pmid:38184548
- 54. Keating NL, O’Malley AJ, Onnela J-P, Gray SW, Landon BE. Association of physician peer influence with subsequent physician adoption and use of bevacizumab. JAMA Netw Open. 2020;3(1):e1918586. pmid:31899533
- 55. Arnold C, Koetsenruijter J, Forstner J, Peters-Klimm F, Wensing M. Influence of physician networks on prescribing a new ingredient combination in heart failure: a longitudinal claim data-based study. Implement Sci. 2021;16(1):84. pmid:34454547
- 56. Onnela J-P, O’Malley AJ, Keating NL, Landon BE. Comparison of physician networks constructed from thresholded ties versus shared clinical episodes. Appl Netw Sci. 2018;3(1):28. pmid:30839809
- 57. Oza PR, Agrawal S, Ravaliya D, Kakkar R. Evaluating the igraph community detection algorithms on different real networks. SCPE. 2023;24(2):173–80.
- 58. Berardo de Sousa F, Zhao L. Evaluating and comparing the IGraph community detection algorithms. In: 2014 Brazilian Conference on Intelligent Systems; 2014 Oct 18-22.
- 59. Harenberg S, Bello G, Gjeltema L, Ranshous S, Harlalka J, Seay R, et al. Community detection in large‐scale networks: a survey and empirical evaluation. WIREs Computational Stats. 2014;6(6):426–39.
- 60. Orman GK, Labatut V, Cherifi H. Comparative evaluation of community detection algorithms: a topological approach. J Stat Mech. 2012;2012(08):P08001.
- 61. Yang Z, Algesheimer R, Tessone CJ. A comparative analysis of community detection algorithms on artificial networks. Sci Rep. 2016;6:30750. pmid:27476470
- 62. Xie J, Szymanski BK. Towards linear time overlapping community detection in social networks. Berlin, Heidelberg: Springer; 2012.
- 63. Donker T, Wallinga J, Slack R, Grundmann H. Hospital networks and the dispersal of hospital-acquired pathogens by patient transfer. PLoS One. 2012;7(4):e35002. pmid:22558106