
Utilization of expert opinion in infectious diseases clinical guidelines—A meta-epidemiological study

  • Blin Nagavci ,

    Roles Conceptualization, Formal analysis, Methodology, Writing – original draft

    dr.bnagavci@gmail.com

    Affiliation Doctoral School of Clinical Medicine, Semmelweis University, Budapest, Hungary

  • Lukas Schwingshackl,

    Roles Writing – review & editing

    Affiliation Faculty of Medicine, Institute for Evidence in Medicine, Medical Center-University of Freiburg, University of Freiburg, Freiburg, Germany

  • Ignacio Martin-Loeches,

    Roles Writing – review & editing

    Affiliation Department of Intensive Care Medicine, Multidisciplinary Intensive Care Research Organization (MICRO), Leinster, Dublin, Ireland

  • Botond Lakatos

    Roles Conceptualization, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliations Division of Infectology, Department of Hematology and Internal Medicine, Semmelweis University, Budapest, Hungary, South Pest Central Hospital, National Institute of Hematology and Infectious Diseases, Budapest, Hungary

Abstract

Introduction

Expert opinion is widely used in clinical guidelines, yet no research has investigated its use in international infectious disease guidelines. This study aimed to create an analytical map describing the prevalence and utilization of expert opinion in infectious disease guidelines and analyzing the methodological aspects of these guidelines.

Methods

In this meta-epidemiological study, systematic searches in PubMed and the Trip Medical Database were performed to identify clinical guidelines on infectious diseases published in English by international organizations between January 2018 and May 2023. Extracted data included guideline characteristics, expert opinion utilization, and methodological details. The prevalence and rationale of expert opinion use were analyzed descriptively. Methodological differences between groups were analyzed with the Chi-square and Mann-Whitney U tests.

Results

The analysis covered 66 guidelines with 2296 recommendations, published or endorsed by 136 organizations. Most guidelines (79%) used systematic literature searches, 42% provided search strategies, and 38% presented screening flow diagrams and conducted risk of bias assessments. Expert opinion was allowed in 48.5% of the guidelines, most of which included it as part of the evidence hierarchy within the grading system. Guidelines allowing expert opinion, compared with those that did not, issued more recommendations per guideline (48.82 vs. 19.13, p<0.001), reported fewer screening flow diagrams (25% vs. 65%, p = 0.002), and conducted fewer risk of bias assessments (19% vs. 78%, p<0.001).

Conclusions

Expert opinion is utilized in half of the assessed guidelines, often integrated into the evidence hierarchy within the grading system. Its utilization varies considerably in methodology, form, and terminology between guidelines. These findings highlight a pressing need for additional research and guidance to improve and advance the standardization of infectious disease guidelines.

Introduction

Clinical practice guidelines are statements with recommendations developed by experts in a particular medical field, guiding healthcare professionals in making informed decisions about patient care [1]. These documents are typically created through a standardized and transparent approach that includes a systematic review of the medical literature, and they usually consider a range of available evidence, including systematic reviews, clinical trials, observational studies such as cohort studies, and, less often, case series or case reports. In some instances, when high-quality evidence is not available, guideline developers may rely on clinical judgement or expert opinion (EO) for issuing recommendations [2–4]. Presently, authors hold diverse definitions and perspectives regarding the concept and use of EO in clinical guidelines. For example, Eibling et al. consider EO not only personal experience gained over the years but also knowledge accumulated from a wide range of sources, which should be considered in clinical guidelines [5]. On the other hand, Schünemann et al. consider EO solely an opinion, which needs to be separated from any type of evidence and should not be used as a source for recommendations [6, 7]. Various organizations and societies have different policies and utilize EO in different ways, some allowing it for making recommendations in certain situations [8, 9], while others do not [7, 10]. Nevertheless, EO continues to be used in clinical guidelines. Up to one-quarter of guidelines on various topics published between 2010 and 2016 issued recommendations based on EO, 91% of which did not provide an explicit rationale for its use [2]. In critical care guidelines, 10% of strong recommendations were based on EO [11], while in cardiology guidelines up to 55% of recommendations were based on EO [12]. There is a lack of clarity and consensus on how EO should be used in clinical guidelines.
Guideline developers employ and present EO in different ways, often without providing a rationale for the recommendations: issuing recommendations based only on EO when no evidence is available, considering low-quality or indirect observational data as expert opinion, or using EO as a level of evidence [2]. This can impede clinicians’ understanding of the evidence underlying clinical recommendations and might affect guideline implementation [2, 3]. The exact implications of this issue for clinical practice remain uncertain and warrant further investigation.

Infectious disease (ID) guidelines are likewise prone to such inconsistencies in evidence interpretation and subjectivity. An extra layer of complexity in synthesizing and interpreting evidence arises from the unique nature of ID, in which epidemiology, diagnosis, and treatment might affect individual populations, or pathogens may exhibit pathogenicity only under certain conditions or in certain geographic locations [13]. Despite the large number of ID guidelines and their importance, no research has investigated the use of EO in this field of medicine. We postulated that international societies and organizations use diverse approaches and methodologies for utilizing EO.

Bearing in mind the impact of such guidelines, this meta-epidemiological study aimed to create an analytical map of international ID guidelines and EO use, by describing the prevalence and utilization of EO and analyzing the methodological aspects of these guidelines.

Methods

This meta-epidemiological study is designed and reported in accordance with guidelines for reporting meta-epidemiological research [14]. An internal protocol was used for methodological consistency of this project (S1 Appendix).

Literature searches

Systematic literature searches were conducted in two main databases, PubMed, and the Trip Medical Database, in May 2023. A combination of Medical Subject Headings (MeSH) terms and keywords was used with a focus on achieving high sensitivity. The searches targeted international infectious disease guidelines published within the last 5.5 years (from January 1, 2018, to May 15, 2023) to ensure the inclusion of most recent guidance. No additional filters were applied. Detailed search strategies can be accessed in the S1 Appendix.

Identification of relevant guidelines

The screening process for relevant guidelines was conducted in two phases: title/abstract screening and full-text screening, using Rayyan (www.rayyan.ai) by a single reviewer (BN). Retrieved guidelines were evaluated and included if they met the following criteria: 1) clinical guidelines pertaining to any infectious disease, 2) published by an international society or organization involving at least two countries, 3) utilized primary studies (e.g. trials, cohort studies etc.) for evidence synthesis, 4) published in the English language, 5) published from 01.01.2018 onwards, and 6) focused on guidelines for humans rather than animals. Conversely, the following documents were excluded: public health guidelines, documents not published by a society/organization, documents authored by individuals from a single country, and guidelines that adopted recommendations from other guidelines. An exception was made for two national societies, namely the National Institute for Health and Care Excellence (NICE) and Infectious Diseases Society of America (IDSA), which were considered relevant for inclusion due to their significant international impact. In cases where the systematic searches yielded multiple versions of a guideline (e.g., living guidelines), only the most recent version was included and assessed. The screening results, along with the reasons for exclusion, are presented using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram [15], in Fig 1.

Data extraction

The data extraction process was conducted using piloted Excel sheets by one reviewer (BN). The data extracted for each guideline included title, authors, publication year, publishing and endorsing societies, methodological details (systematic searches, risk of bias assessment, evidence appraisal systems, screening flowcharts), details on EO use, terminology used for EO recommendations, and number of recommendations (complete list in the S1 Appendix).

To assess the utilization of EO in the included guidelines, both the methods section and the supplementary materials were thoroughly examined. The evidence grading systems were evaluated to determine if they incorporated EO, and all issued recommendations and their corresponding evidence ratings were assessed for each guideline. In cases where guidelines did not provide sufficient information, additional sources such as society websites and guideline manuals were consulted.

Data analysis

Descriptive statistics were used to assess the prevalence of EO use. The definitions and rationale of EO use in the included guidelines were presented narratively. The distribution of the data was assessed with the Shapiro–Wilk test for normality. The Chi-square and Mann-Whitney U tests were used to compare methodological differences between groups, with an α level of 0.05. ChatGPT was used for grammar corrections and readability. Upon utilizing this tool, the authors made the necessary edits to the content, assuming full responsibility for the publication’s accuracy and integrity.
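The analysis pipeline described above (normality check, then a non-parametric group comparison) can be sketched with SciPy. The data below are illustrative placeholders, not the study's dataset; the variable names and group sizes are assumptions chosen only to mirror the two comparison groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative placeholder data (NOT the study's dataset):
# recommendations per guideline in the two comparison groups.
eo_group = rng.poisson(49, size=32)     # guidelines allowing expert opinion
no_eo_group = rng.poisson(19, size=23)  # guidelines not allowing expert opinion

# Step 1: assess distributional shape with the Shapiro-Wilk normality test.
_, p_norm_eo = stats.shapiro(eo_group)
_, p_norm_no = stats.shapiro(no_eo_group)

# Step 2: compare the groups with the non-parametric Mann-Whitney U test,
# which does not assume normally distributed data.
u_stat, p_mwu = stats.mannwhitneyu(eo_group, no_eo_group, alternative="two-sided")

print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_mwu:.4g}")
```

The Mann-Whitney U test is a reasonable default here because counts of recommendations per guideline are typically right-skewed, which the Shapiro-Wilk check would flag.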

Results

Search results

The systematic literature searches yielded a total of 2,912 references, which were initially screened based on their title and abstract, and 178 were selected for full-text assessment. Finally, a total of 66 guidelines were deemed relevant and were included in this study (Fig 1). Included guidelines were published and endorsed by 136 distinct societies or organizations (S1 Appendix). The cumulative number of recommendations provided across all included guidelines was 2296.

Methodological characteristics

Among the 66 included guidelines, 79% reported use of systematic literature searches, 67% explicitly mentioned the electronic databases that were searched, and 42% provided full details of their search strategies. Screening flow diagrams (e.g., PRISMA or similar) depicting the screening process were presented by 38%, and the same percentage conducted a risk of bias assessment. The decision-making process for issuing recommendations was detailed in 68% of guidelines (Table 1).

A grading system for evaluating the evidence and rating the strength of recommendations was used by 94% of the included guidelines. The most frequently used system was the Grading of Recommendations Assessment, Development, and Evaluation (GRADE), at 47% [7], followed by the Infectious Diseases Society of America-US Public Health Service (IDSA-USPHS) system, at 16% [16] (S1 Appendix). In 14% of the guidelines, a system could not be identified due to a lack of detailed information.

Utilization of expert opinion

Among the 66 guidelines included in the study, 32 (48.5%) reported that they allowed or utilized EO in formulating recommendations. In contrast, 23 (34.8%) did not report the use of EO, while 11 (16.6%) did not provide sufficient information to assess whether EO was allowed, mostly due to the lack of a methods section or the absence of Quality of Evidence (QoE) ratings in the recommendations.

In the 32 guidelines that incorporated EO, a total of 1,465 recommendations were issued, of which 448 (30%) were classified as either recommendations based on EO or based on evidence levels that contained EO.

Within this subset of 32 guidelines, 16 used evidence grading systems in which EO is allowed and listed within the same evidence level as low or very low-quality evidence (Fig 2). For example: “Level of evidence C: Consensus of opinion of the experts and/or small studies, retrospective studies, registries” [17], or “Quality of Evidence III: Evidence from opinions of respected authorities, based on clinical experiences, descriptive studies, or reports of expert committees” [18]. The evidence grading systems used in this group were the Infectious Diseases Society of America-US Public Health Service (IDSA-USPHS) system [18], the European Society of Cardiology (ESC) system [17], and systems modified from GRADE (Fig 2). From this group, only one guideline explicitly delineated EO from low/very low-quality evidence by noting this in each recommendation [19].

Fig 2. Visual description of EO utilization and evidence grading systems.

EO: Expert Opinion, IDSA-USPHS: The Infectious Diseases Society of America-US Public Health Service, ESC: European Society of Cardiology, GRADE: Grading of Recommendations, Assessment, Development, and Evaluations, AHRQ: Agency for Healthcare Research and Quality, OCEBM: Oxford Centre for Evidence-Based Medicine, AUA: American Urological Association.

https://doi.org/10.1371/journal.pone.0306098.g002

On the other hand, 14 guidelines used evidence grading systems in which EO is either not incorporated in the hierarchy of evidence or categorized separately from other types of evidence. Nine of these guidelines followed the GRADE approach [7], which does not incorporate EO, though EO was still used in these guidelines. Five guidelines adhered to the Agency for Healthcare Research and Quality (AHRQ) [20], Oxford Centre for Evidence-Based Medicine (OCEBM) [21], or American Urological Association (AUA) systems [22], in which EO is isolated from other types of evidence and has its own distinct level. Two guidelines did not use evidence grading systems. Details are available in Fig 2.

From the 23 guidelines which did not report to have utilized EO, all used the GRADE approach for assessing the quality of evidence [7]. The complete list with assessed guidelines and respective evidence grading systems can be found in the S1 Appendix.

Methodological differences

Significant differences in methodology were observed between guidelines that allowed EO and those that did not. The group that allowed EO issued, on average, a higher number of recommendations per guideline than the group that did not (48.8 vs. 19.1, p<0.001). Additionally, a smaller proportion of guidelines in the EO group provided screening flow diagrams (25% vs. 65%, p = 0.002), and fewer reported conducting risk of bias assessments (19% vs. 78%, p<0.001) (Table 2). No significant differences were observed for literature searches, electronic databases, search strategy reporting, or the presence of systems for evidence appraisal (Table 2).
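As an illustrative sanity check, the flow-diagram comparison can be approximately reproduced from the reported proportions. The 2x2 counts below are back-calculated assumptions (25% of 32 ≈ 8; 65% of 23 ≈ 15), not extracted data, so the resulting p-value will only approximate the reported value.

```python
from scipy.stats import chi2_contingency

# 2x2 table back-calculated from the reported proportions (an assumption,
# not extracted data). Rows: EO allowed vs. not allowed.
# Columns: screening flow diagram reported vs. not reported.
table = [[8, 32 - 8],    # EO group: ~25% of 32 reported a flow diagram
         [15, 23 - 15]]  # non-EO group: ~65% of 23 reported a flow diagram

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```

With these assumed counts, the test yields a p-value on the same order as the reported p = 0.002; rounding in the published percentages explains any small discrepancy.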

Table 2. Methodological comparisons of guidelines allowing EO and guidelines not allowing EO.

https://doi.org/10.1371/journal.pone.0306098.t002

Terminology and phrasing

In the 32 guidelines allowing EO, eight different terms were identified to label EO as a concept. The most prevalent were "Opinions of respected authorities," "Expert opinion," "Expert judgment," and "Consensus of expert opinion," followed by "Consensus recommendation," "Good practice principle," and "In our practice statement" (S1 Appendix).

A diverse range of terminology was also observed in the phrasing of EO-based recommendations. A total of 42 main verbs were identified across the 448 recommendations based on EO (or on an evidence level containing EO). The most frequently used verbs were "should," "recommend," "consider," "may," "is," "indicated," and "suggest." The complete list of verbs can be found in the S1 Appendix.

Discussion

Summary of findings

Our results showed that half of the included guidelines allowed or used EO for formulating recommendations, with one third of their total number of recommendations being based on EO, or within an evidence level where EO is listed. In most cases, EO was incorporated into the evidence hierarchy and placed within the same category as low or very low-quality evidence. The phrasing and presentation of EO recommendations varied across guidelines in structure and form. Guidelines allowing EO issued more recommendations on average and reported less frequent use of screening flow diagrams and risk of bias assessments.

Comparisons with similar studies

In a meta-epidemiological study by Ponce et al. [2], 69 guidelines from various fields, predominantly endocrinology, were examined to identify the rationale behind EO recommendations. Broadly similar to our findings, 37.9% of recommendations in their sample had a level of evidence designated as EO. However, all guidelines in their sample were based on systematic reviews, whereas in our study fewer than 80% utilized literature searches. This disparity may be attributed to methodological differences between the two studies: Ponce et al. focused on guidelines from various fields and excluded those that did not employ EO, while our study focused on ID and included a more representative sample, as EO use was not an inclusion criterion but an outcome.

A study by Mitchell et al. [23] examined 31 infection prevention and control guidelines, revealing that 41.5% of recommendations were based on evidence from descriptive studies, EO, and low-quality evidence. While there are similarities between their findings and ours, our study focuses on clinical guidelines rather than infection prevention and control, and our methodology differs in that we employed a systematic approach.

Implications

This study identified several challenges with current guidelines and their development processes. First, one-fifth of the guidelines did not utilize literature searches, indicating a lack of a systematic approach to identifying and synthesizing evidence. In addition, fewer than half of the guidelines provided search strategies, suggesting a gap in transparency and reproducibility. Furthermore, fewer than half presented a flow diagram or risk of bias assessment, indicating a potential lack of methodological rigor in evidence synthesis. These findings highlight the need for greater adherence to methodological principles for developing trustworthy guidelines. Similar results were also reported by previous studies [24–26]. On the other hand, it was encouraging to see that nearly all guidelines used an evidence-grading system.

Second, half of the included guidelines allowed EO in formulating their recommendations, with 30% of their overall recommendations being based on EO or on an evidence level where EO is listed. Most of these guidelines used evidence grading systems in which EO is part of the evidence hierarchy, placing it in the same category as low or very low-quality evidence. This lack of distinction between EO and other types of evidence might pose a challenge [27]. For readers, it may be unclear whether a recommendation is based solely on clinical experience, on very low-quality evidence such as case reports and case series, or on large observational studies, which might provide more solid grounds for recommendations. Such ambiguity may lead to misunderstandings among clinicians regarding the strength and reliability of recommendations; therefore, efforts should be made to make the evidentiary basis explicit in each recommendation where EO is used.

Third, there were statistically significant differences between guidelines that allowed EO and those that did not. The former issued a higher number of recommendations on average and reported less frequent use of screening flow diagrams and risk of bias assessments. As this study provides only a “map”, further research is necessary to explore this issue for a more detailed understanding of these differences.

Fourth, our findings highlight considerable diversity in the terminology used to express recommendations, especially regarding their strength and the verbs employed, both within and across guidelines. An illustrative instance is the shared use of the term "we recommend," which is applied both to recommendations based on high-quality evidence and to recommendations stemming entirely from EO. While we acknowledge the methodological differences among societies and organizations, it is imperative to consider the perspective of end users responsible for patient care: this array of terminology can lead to confusion in the practical application of recommendations. More research is needed in this field, and more effort should be directed toward standardizing wording among international guidelines.

Strengths and limitations

This study has several strengths. It is the first to provide an analytical map of EO utilization in international clinical guidelines on ID and to analyze methodological aspects and differences, filling a critical gap in the literature by shedding light on the patterns, prevalence, and characteristics of EO use in the ID guideline landscape. By employing systematic search strategies, a comprehensive collection of ID guidelines from 136 distinct organizations was identified, ensuring a representative sample for analysis. The resulting analytical map generates insights into current practices and patterns, which can guide the development of future guidance documents and regulatory frameworks to ensure standardized, transparent, and reliable use of EO or clinical experience in clinical guidelines.

This study has a few limitations. One is the exclusive assessment of guidelines published in the English language, which may limit the generalizability of the findings to guidelines in other languages. However, since the study’s primary focus was on international guidelines, which are typically published in English, we do not consider this a major limitation; this is supported by the absence of identified international guidelines in languages other than English. Another potential limitation is that screening and data extraction were performed by a single reviewer. To minimize possible errors, we utilized a detailed protocol with clear inclusion criteria (S1 Appendix) and piloted data extraction sheets. Additionally, the S1 Appendix provides a comprehensive list of all excluded guidelines and their respective exclusion reasons, ensuring maximal transparency.

Conclusions

Half of international infectious disease guidelines use or allow expert opinion. In most cases, expert opinion is part of evidence hierarchy within the evidence grading systems. Its utilization varies considerably in methodology, form, and terminology between guidelines. These findings highlight a pressing need for more guidance and standardization in infectious disease guidelines.

References

  1. Graham R, Mancher M, Miller Wolman D, Greenfield S, Steinberg E, editors. Clinical Practice Guidelines We Can Trust. Institute of Medicine. Washington (DC): National Academies Press (US); 2011. Available from: https://www.ncbi.nlm.nih.gov/books/NBK209539/.
  2. Ponce OJ, Alvarez-Villalobos N, Shah R, Mohammed K, Morgan RL, Sultan S, et al. What does expert opinion in guidelines mean? A meta-epidemiological study. Evid Based Med. 2017;22(5):164–9. pmid:28924055
  3. Stamm TA, Andrews MR, Mosor E, Ritschl V, Li LC, Ma JK, et al. The methodological quality is insufficient in clinical practice guidelines in the context of COVID-19: systematic review. J Clin Epidemiol. 2021;135:125–35. pmid:33691153
  4. Gattrell WT, Logullo P, van Zuuren EJ, Price A, Hughes EL, Blazey P, et al. ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi. PLoS Med. 2024;21(1):e1004326. pmid:38261576
  5. Eibling D, Fried M, Blitzer A, Postma G. Commentary on the role of expert opinion in developing evidence-based guidelines. Laryngoscope. 2014;124(2):355–7. pmid:24151042
  6. Schünemann HJ, Zhang Y, Oxman AD. Distinguishing opinion from evidence in guidelines. BMJ. 2019;366:l4606. pmid:31324659
  7. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol. 2011;64(4):401–6. pmid:21208779
  8. World Health Organization. WHO handbook for guideline development. Geneva: World Health Organization; 2014 [accessed August 2023].
  9. Shiffman RN, Marcuse EK, Moyer VA, Neuspiel DR, Hodgson ES, Glade G, et al. Toward transparent clinical policies. Pediatrics. 2008;121(3):643–6. pmid:18310217
  10. Nagavci B, Tonia T, Roche N, Genton C, Vaccaro V, Humbert M, et al. European Respiratory Society clinical practice guidelines: methodological guidance. ERJ Open Res. 2022;8(1). pmid:35083323
  11. Sims CR, Warner MA, Stelfox HT, Hyder JA. Above the GRADE: Evaluation of Guidelines in Critical Care Medicine. Crit Care Med. 2019;47(1):109–13. pmid:30303840
  12. Fanaroff AC, Califf RM, Windecker S, Smith SC Jr., Lopes RD. Levels of Evidence Supporting American College of Cardiology/American Heart Association and European Society of Cardiology Guidelines, 2008–2018. JAMA. 2019;321(11):1069–80. pmid:30874755
  13. Miles KE, Rodriguez R, Gross AE, Kalil AC. Strength of Recommendation and Quality of Evidence for Recommendations in Current Infectious Diseases Society of America Guidelines. Open Forum Infect Dis. 2021;8(2):ofab033. pmid:33614818
  14. Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evid Based Med. 2017;22(4):139–42. pmid:28701372
  15. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. pmid:33782057
  16. Kish MA. Guide to development of practice guidelines. Clin Infect Dis. 2001;32(6):851–4. pmid:11247707
  17. European Society of Cardiology. Recommendations for Guidelines Production—A document for Task Force Members Responsible for the Production and Updating of ESC Guidelines. 2010. Available from: https://www.escardio.org/static-file/Escardio/Guidelines/ESC%20Guidelines%20for%20Guidelines%20Update%202010.pdf.
  18. Khan AR, Khan S, Zimmerman V, Baddour LM, Tleyjeh IM. Quality and strength of evidence of the Infectious Diseases Society of America clinical practice guidelines. Clin Infect Dis. 2010;51(10):1147–56. pmid:20946067
  19. Chakfé N, Diener H, Lejay A, Assadian O, Berard X, Caillon J, et al. Editor’s Choice—European Society for Vascular Surgery (ESVS) 2020 Clinical Practice Guidelines on the Management of Vascular Graft and Endograft Infections. Eur J Vasc Endovasc Surg. 2020;59(3):339–84. pmid:32035742
  20. Jacox A, Carr D, Payne R, Berde C. Management of cancer pain. AHCPR publication; 1994.
  21. OCEBM Levels of Evidence Working Group. The Oxford 2011 Levels of Evidence. Oxford Centre for Evidence-Based Medicine. Available from: http://www.cebm.net/index.aspx?o=5653.
  22. Anger JT, Bixler BR, Holmes RS, Lee UJ, Santiago-Lastra Y, Selph SS. Updates to Recurrent Uncomplicated Urinary Tract Infections in Women: AUA/CUA/SUFU Guideline. J Urol. 2022;208(3):536–41. pmid:35942788
  23. Mitchell BG, Fasugba O, Russo PL. Where is the strength of evidence? A review of infection prevention and control guidelines. J Hosp Infect. 2020;105(2):242–51. pmid:31978417
  24. Luo X, Liu Y, Ren M, Zhang X, Janne E, Lv M, et al. Consistency of recommendations and methodological quality of guidelines for the diagnosis and treatment of COVID-19. J Evid Based Med. 2021;14(1):40–55. pmid:33565225
  25. Dersch R, Toews I, Sommer H, Rauer S, Meerpohl JJ. Methodological quality of guidelines for management of Lyme neuroborreliosis. BMC Neurol. 2015;15:242. pmid:26607686
  26. Henig O, Yahav D, Leibovici L, Paul M. Guidelines for the treatment of pneumonia and urinary tract infections: evaluation of methodological quality using the Appraisal of Guidelines, Research and Evaluation II instrument. Clin Microbiol Infect. 2013;19(12):1106–14. pmid:24033764
  27. Spellberg B, Wright WF, Shaneyfelt T, Centor RM. The Future of Medical Guidelines: Standardizing Clinical Care With the Humility of Uncertainty. Ann Intern Med. 2021;174(12):1740–2. pmid:34781711