
AI-assisted radiologists vs. standard double reading for rib fracture detection on CT images: A real-world clinical study

  • Li Sun ,

    Contributed equally to this work with: Li Sun, Yangyang Fan

    Roles Data curation, Formal analysis, Funding acquisition, Investigation, Visualization, Writing – original draft

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Yangyang Fan ,

    Contributed equally to this work with: Li Sun, Yangyang Fan

    Roles Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Shan Shi,

    Roles Data curation, Investigation

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Minghong Sun,

    Roles Data curation, Investigation

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Yunyao Ma,

    Roles Investigation

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Kuo Zhang,

    Roles Investigation

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Feng Zhang,

    Roles Investigation

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Huan Liu,

    Roles Investigation

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Tong Yu,

    Roles Investigation, Supervision

    Affiliation Department of Orthopedics, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Haibin Tong,

    Roles Conceptualization, Supervision

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

  • Xuedong Yang

    Roles Conceptualization, Methodology, Supervision, Writing – review & editing

    yangxuedong1@163.com

    Affiliation Department of Radiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China

Abstract

To evaluate the diagnostic accuracy of artificial intelligence (AI)-assisted radiologists and standard double-reading for rib fracture (RF) detection on CT images in real-world clinical settings. This study included 243 consecutive chest trauma patients (mean age, 58.1 years; female, 166) with rib CT scans. All CT scans were interpreted by two radiologists. The CT images were then re-evaluated by the primary readers with AI assistance in a blinded manner. Reference standards were established by two musculoskeletal radiologists. The re-evaluation results were compared with those of the initial double-reading. The primary analysis aimed to demonstrate the superiority of AI-assisted sensitivity and the noninferiority of AI-assisted specificity at the patient level, compared with standard double-reading. Secondary endpoints were assessed at the rib and lesion levels. Stand-alone AI performance was also evaluated, and the influence of patient characteristics, report time, and RF features on the performance of AI and radiologists was investigated. At the patient level, AI-assisted radiologists significantly improved sensitivity over double-reading by 25.0% (95% CI: 10.5, 39.5; P < 0.001 for superiority), from 69.2% to 94.2%. The specificity of AI-assisted diagnosis (100%) was noninferior to that of double-reading (98.2%), with a difference of 1.8% (95% CI: -3.8, 7.4; P = 0.999 for noninferiority). The diagnostic accuracy of both radiologists and AI was influenced by patient gender, rib number, fracture location, and fracture type. Radiologist performance was affected by report time, whereas AI’s diagnostic accuracy was influenced by patient age and the side of the rib involved. An AI-assisted additional-reader workflow may be a feasible alternative to traditional double-reading, potentially offering higher sensitivity and noninferior specificity in real-world clinical practice.

Introduction

Rib fractures (RFs) are commonly encountered in clinical practice. Given that traumatic RFs are associated with higher mortality and morbidity, a quick and accurate diagnosis is of clinical importance [1]. Multidetector computed tomography (MDCT), owing to its superior in-plane resolution and reduced slice thickness and spacing, has become the preferred modality for radiologic evaluation of trauma patients [2]. MDCT is advantageous for identifying RFs and associated complications, offering higher sensitivity and specificity compared to plain radiography [3]. However, despite the high-quality and multiplanar reformatted images provided by MDCT, missed RFs remain a common issue in daily clinical practice [4].

Double-reading is one way to increase the quality of radiology reports. It involves two radiology specialists interpreting the same study, either as peers with similar experience levels or through a secondary reading at a higher level of sub-specialization [5]. Trauma CT is one of the areas where double-reading appears to be important [6]. Nevertheless, practical constraints such as radiologist availability and workload often limit the widespread adoption of this practice. Furthermore, the potential for systematic errors due to fatigue and high workload remains an inherent challenge in human diagnosis [7, 8].

Deep learning shows potential in reducing the workload of image interpretation and minimizing diagnostic oversight [9–12]. Studies have demonstrated that AI can achieve diagnostic efficiency comparable to that of experienced radiologists in controlled settings, with shorter reading times [13–18]. However, the effectiveness of AI in real-world clinical scenarios, especially in conjunction with double-reading, and its performance consistency across diverse cases require further investigation.

This study aimed to evaluate the diagnostic accuracy of AI-assisted radiologists (radiologist plus AI) and standard double-reading by dual radiologists in real-world clinical settings for RF detection on CT scans. The influence of patient characteristics, report timing, and RF features on the performance of both AI and radiologists was also assessed, to provide a more nuanced understanding of AI’s potential in augmenting radiological diagnostics.

Materials and methods

The study was approved by our Ethics Committee (No. 2024-058-KY; 03/04/2024), with informed consent waived in line with guidelines.

1. Dataset collection and access

Our dataset included rib CT scans of 243 consecutive chest trauma patients (mean age, 58.1 years; female, 166) admitted to our emergency department from January 1, 2019, to December 31, 2020, with 188 patients (mean age, 61.1 years; female, 127) diagnosed with RFs and 55 patients (mean age, 46.6 years; female, 39) serving as non-RF controls (Table 1). Intrathoracic injuries associated with RFs included pneumothorax (5.3%, 10 of 188), pulmonary contusions (2.1%, 4 of 188), and hemothorax (1.1%, 2 of 188). Single-reader interpretations and duplicate images were excluded (Fig 1). The patient data were accessed for research purposes between April 4 and May 31, 2024. All images were reviewed through the picture archiving and communication system (PACS) (3.0.30500.213; Hinacom Software and Technology, Beijing, China) on a Windows 7 system (Microsoft Corporation, Redmond, Washington, United States) using professional radiographic monitors (JUSHA-M52C, JUSHA-M53AA, JUSHA-M33B and JUSHA-M33D; Nanjing Jusha Display Technology, Nanjing, China). Authors had access to information that could identify individual participants during and after data collection.

Scans were performed on two Siemens CT scanners (Emotion 16 and SOMATOM Definition Flash) with a 120 kV tube voltage and automated dose modulation, with reconstruction layer thicknesses of 1.5 mm for Emotion 16 and 0.75 mm for SOMATOM Definition Flash, using a bone algorithm.

2. AI system

The AI analysis utilized a commercially available AI system (uAI-BoneCare; version 20220730SP1; Shanghai United Imaging Intelligence, Co., Ltd.), a medical device intended to detect RFs on chest CT scans. The pipeline of this AI system consists of five steps: rib segmentation (using a modified V-Net model), vertebra detection (using VRB-Net), rib labelling, RF detection (using the DL-based detection model VRB-Net), and RF classification (using a DL-based classification model, BasicNet) [17]. The dataset used in the present study was not used for training, validation, or testing of this AI software.

3. Reference standard and lesion definition

Two musculoskeletal radiologists, each with over a decade of experience, reviewed all CT examinations in consensus in April 2024 to establish the "ground truth"; the presence or absence of fractures was determined based on the imaging features of RFs, clinical history, and findings from previous or follow-up CT scans. Additionally, the number of involved ribs from the 1st to the 13th (where the 13th rib corresponds to lumbar ribs), side (left or right), fracture location, and fracture type were documented. Fracture location was categorized into five anatomic regions: near the junction of the costal cartilage (Nj), anterior (A), lateral (L), posterior (P), and near the vertebral bodies (Nv), as depicted in Fig 2. Each lesion was classified into four main types based on morphology and age: (I) fresh fracture with no or minimal displacement, (II) fresh fracture with distinct displacement, (III) healing fracture with periosteal reaction or callus formation, and (IV) old fracture lacking periosteal reaction or callus formation. These types are illustrated in Fig 3.

Fig 2. Schematic diagram of five anatomic regions of rib arc.

Nj: within 1 cm of the junction of the costal cartilage (green). Nv: within 1 cm of the costovertebral junction. A: anterior 1/3 of the length between Nj and Nv (blue). L: middle 1/3 of the length between Nj and Nv (yellow). P: posterior 1/3 of the length between Nj and Nv (orange). The 11th and 12th ribs do not have an Nj region. (Nj = near the junction of costal cartilage, A = anterior, L = lateral, P = posterior, Nv = near vertebral bodies).

https://doi.org/10.1371/journal.pone.0316732.g002

Fig 3. Demonstration of fracture types.

(I) Fresh fracture with no or minimal displacement (A and B); (II) fresh fracture with distinct displacement (C); (III) healing fracture with periosteal reaction or callus formation (D); (IV) old fracture lacking periosteal reaction or callus formation (E and F).

https://doi.org/10.1371/journal.pone.0316732.g003

4. Image interpretation

This study compared the diagnostic efficiency of double-reading by two radiologists, AI-alone diagnosis, and AI-assisted diagnosis. Double-reading diagnoses were extracted directly from our hospital’s PACS for the period January 1, 2019, to December 31, 2020. All primary readings were performed by 2 residents (2nd and 3rd year of residency; female, 1) and 5 attending radiologists (4 to 8 years of experience; mean, 6.6 ± 1.7 years; female, 5), with secondary readings performed by senior radiologists with over 10 years of experience (see S1 Table for details). For AI-assisted diagnosis, to simulate real-world clinical scenarios, the CT images of each patient were re-integrated, from April to May 2024, into the daily workload of the original junior radiologists who had initially interpreted the cases via PACS, ordered by the original scan timestamps. This time, the radiologists read the CT images with AI assistance. AI-alone diagnoses were drawn from the structured reports generated automatically by the AI system. Diagnostic outcomes were compared against the reference standard (Fig 1).

The diagnostic outcomes of the double-reading, AI-assisted, and AI-alone groups were compared against the reference standard at the patient, rib, and lesion levels and categorized as true positives, false negatives, true negatives, or false positives, based on the classification rules outlined in the study by Liu et al. [19].

5. Statistical analyses

Principles of statistical analysis.

Statistical analysis was performed using IBM SPSS Statistics (version 27.0; IBM Corp., Armonk, NY, USA). The continuous variable, age, was reported as mean ± standard deviation (SD), and group comparisons were performed using independent-samples t-tests. Categorical variables were summarized as frequencies and percentages and compared using chi-square tests.

The DeLong test was used to compare the areas under the receiver operating characteristic (ROC) curves (AUCs) and provided a statistical measure of the difference. The McNemar test was used to analyze diagnostic performance by comparing sensitivity and specificity.
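The McNemar test used above operates on paired readings of the same cases, so only the discordant pairs matter. A minimal sketch of its exact (binomial) form, using hypothetical discordant counts rather than any numbers from this study:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar test on discordant pairs.

    b: cases called positive by one reading strategy but not the other;
    c: the reverse. Under H0 each discordant pair is equally likely to
    fall either way, so the p-value is a two-sided binomial test at 0.5.
    """
    n = b + c
    if n == 0:
        return 1.0
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical discordant counts (not from the paper): 12 patients
# detected only with AI assistance, 1 detected only by double-reading.
p = mcnemar_exact(12, 1)
print(round(p, 4))  # 0.0034
```

For larger discordant counts, the chi-square approximation (b − c)²/(b + c) is commonly used in place of the exact binomial form.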

All statistical tests were two-tailed, with the significance level set at P < 0.05; 95% confidence intervals (CIs) were reported.

Primary endpoint.

The primary endpoints of this study focused on patient-level sensitivity and specificity. The trial aimed to demonstrate that AI-assisted diagnosis had superior sensitivity and non-inferior specificity compared to double-reading. Superior sensitivity was claimed if the lower bound of the 95% CI for the sensitivity difference was >0. Non-inferiority of specificity was established if the lower bound of the 95% CI for the specificity difference was greater than −10%.
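The two decision rules can be expressed directly as checks on CI lower bounds. The sketch below uses an independent-samples Wald interval and patient counts reconstructed from the reported percentages (roughly 177/188 vs. 130/188 sensitivity; 55/55 vs. 54/55 specificity); these counts are our inference, not published raw data, and the study's own paired CIs would differ somewhat.

```python
from math import sqrt

def wald_ci(p1, n1, p2, n2, z=1.96):
    """Approximate 95% Wald CI for the difference p1 - p2.

    The study compares paired readings of the same patients, so its
    published CIs likely use a paired method; this independent-samples
    form is only an illustrative approximation.
    """
    d = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

def primary_decision(sens_lower, spec_lower, ni_margin=-0.10):
    """Superiority: sensitivity-difference CI lower bound above 0.
    Noninferiority: specificity-difference CI lower bound above -10%."""
    return sens_lower > 0 and spec_lower > ni_margin

# Counts reconstructed from the reported rates (not published raw data).
_, sens_lo, _ = wald_ci(177 / 188, 188, 130 / 188, 188)
_, spec_lo, _ = wald_ci(55 / 55, 55, 54 / 55, 55)
print(primary_decision(sens_lo, spec_lo))  # True
```

The −10% noninferiority margin means a small loss of specificity would still count as acceptable, provided the CI excludes losses of 10 percentage points or more.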

Secondary endpoint.

Secondary endpoints included statistical analysis of sensitivity and specificity at the rib level, along with AUC comparisons at both the patient and rib levels. Lesion-level assessments included sensitivity and precision comparisons between AI-assisted and double-reading groups. Furthermore, the performance of the AI-alone was evaluated against the double-reading.

Results

1. RF characteristics

A total of 826 RF lesions were identified across 721 ribs, while the remaining 5083 ribs were free of fractures (Table 1). Of these lesions, 328 RFs were classified as acute, comprising 245 Type I and 83 Type II fractures. Old fractures, constituting 498 RFs, included 123 Type III and 375 Type IV fractures. The spatial distribution of RFs within the rib arcs is detailed in S1 Fig.

2. Primary performance metrics

At the patient level, sensitivity was 94.2% in the AI-assisted group versus 69.2% for double-reading, an increase of 25.0% (95% CI: 10.5, 39.5; P < 0.001), indicating that AI-assisted RF detection was superior to double-reading in sensitivity. Specificity was 100% for AI-assisted diagnosis versus 98.2% for double-reading, a non-significant increase of 1.8% (95% CI: -3.8, 7.4; P = 0.999), indicating that the specificity of AI-assisted diagnosis was noninferior to that of double-reading. Both primary endpoints were therefore met (Table 2).
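As a sanity check, the reported increases follow from patient counts implied by the percentages (these counts are reconstructed for illustration, not taken from Table 2, and the implied individual rates round slightly differently from those published):

```python
# Patient counts implied by the reported percentages (188 RF patients,
# 55 controls); reconstructed, not published raw data.
tp_ai, tp_dr, pos = 177, 130, 188   # implied sensitivities ~94.1% and ~69.1%
tn_ai, tn_dr, neg = 55, 54, 55      # implied specificities 100% and ~98.2%

sens_gain = tp_ai / pos - tp_dr / pos
spec_gain = tn_ai / neg - tn_dr / neg
print(f"{sens_gain:.1%}")  # 25.0%
print(f"{spec_gain:.1%}")  # 1.8%
```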

Table 2. Performance comparison of AI-assisted and AI-alone diagnosis versus double-reading at patient, rib, and lesion levels.

https://doi.org/10.1371/journal.pone.0316732.t002

3. Secondary performance metrics

Rib-level analysis revealed that the sensitivity of AI-assisted diagnosis was significantly higher than that of double-reading (P < 0.001), with an increase of 23.2% (95% CI: 0, 46.4), while specificity showed a non-significant increase of 0.1% (95% CI: -0.3, 0.5; P = 0.998). The AI-assisted group also demonstrated a significantly higher AUC than the double-reading group at the patient level (P < 0.001) (Table 2).

For lesion-level analysis, 25 lesions were misdiagnosed as fractures by radiologists or AI, resulting in false positives. The sensitivity of AI-assisted diagnosis was significantly higher than that of double-reading (P < 0.001), and no significant difference was found between the precision of the two groups (P = 0.999) (Table 2).

AI-alone demonstrated significantly higher sensitivity at the patient, rib, and lesion levels and a significantly higher AUC at the patient level than double-reading (all P < 0.001). AUC at the rib level (P = 0.20), specificity at the patient level (P = 0.999) and rib level (P = 0.998), and precision at the lesion level (P = 0.880) were similar between AI-alone and double-reading (Table 2).

4. Influences on diagnostic accuracy at lesion level: Patient characteristics, report time, and RF features

Both radiologists and AI demonstrated a higher diagnostic accuracy for male patients than for females (p < 0.001). Age did not impact the radiologists’ diagnostic accuracy (p = 0.802); however, AI showed a decrease in accuracy with increasing patient age (p < 0.001) (Fig 4 and Table 3).

Fig 4. Bar graph of the impact of report time, rib number, fracture location, and fracture type on the diagnostic performance of double-reading and AI-alone.

A: Radiologists showed the lowest accuracy in the early morning, aligning with the highest lesion volume and suggesting a workload effect; a gradual afternoon decline in accuracy was also observed. AI demonstrated consistent diagnostic performance throughout all reporting times. B: Rib number significantly influenced the diagnostic accuracy of both radiologists and AI, with the upper ribs (1st-3rd) being more challenging to diagnose correctly. C: Fracture location significantly affected diagnostic accuracy, with the least accurate diagnoses in the Nv and Nj regions for both AI and radiologists. D: Fracture type significantly affected diagnostic accuracy, with the least accurate diagnoses for fractures with subtle morphological changes (Types I and IV). Notably, AI outperformed dual radiologists in diagnosing Type I fractures (85.3% vs. 46.9%). (Nj = near the junction of costal cartilage, A = anterior, L = lateral, P = posterior, Nv = near vertebral bodies).

https://doi.org/10.1371/journal.pone.0316732.g004

Table 3. The impact of patient characteristics and report timing on performance of AI and radiologists.

https://doi.org/10.1371/journal.pone.0316732.t003

Radiologists’ diagnostic accuracy varied significantly by reporting time (p < 0.001). Notably, AI demonstrated consistent diagnostic performance throughout all reporting times, with no significant variation (p = 0.421) (Fig 4 and Table 3).

Rib number significantly influenced the diagnostic accuracy of both radiologists (p = 0.001) and AI (p = 0.002). Fracture location and type also significantly affected the diagnostic accuracy of both groups (p < 0.001). Radiologists showed no significant difference in diagnostic accuracy for left versus right RFs (p = 0.197). In contrast, AI exhibited significantly higher accuracy for left-sided than for right-sided fractures (p = 0.029) (Fig 4 and Table 4).

Table 4. The impact of features of RF on performance of AI and radiologists.

https://doi.org/10.1371/journal.pone.0316732.t004

Discussion

Previous studies provide evidence of AI’s potential to enhance the diagnostic ability of both radiologists and non-radiologists [11, 20]. In time-consuming and labor-intensive tasks such as RF detection, AI-supported diagnosis is presumed to achieve similar accuracy with shorter reading times [11–18, 21]; however, evidence for the safe implementation of AI in real clinical practice is limited. The stand-alone performance of our AI system (AUC, 0.97; sensitivity, 95.7%) is consistent with the high benchmarks set by earlier studies (AUC > 0.90 and sensitivity > 90% in most studies) [11–14, 16, 18]. Notably, our study did not pre-screen images to eliminate factors that would reduce diagnostic accuracy, such as significant artifacts, osseous neoplasms, congenital rib abnormalities, or a history of rib surgery [13–18]. Despite this more challenging, unfiltered dataset, our AI software demonstrated comparable diagnostic stability, indicating its robustness and adaptability to the complexities of clinical practice.

The AI-assisted performance of junior radiologists in clinical scenarios was impressive, with an AUC of 0.97 and a sensitivity of 95.7% at the patient level, in keeping with the high standards established by prior research [12–15]. Importantly, this represents a significant 25.0% (95% CI: 10.5, 39.5) increase in sensitivity over the double-reading method supervised by senior radiologists (69.2%; P < 0.001), with noninferior specificity. The enhanced sensitivity of AI-assisted radiologists was also evident at the rib and lesion levels, highlighting AI’s potential to augment radiological diagnostics for RF detection in real clinical settings. These findings provide compelling evidence for implementing AI in double-reading to reduce the workload of RF evaluation for blunt chest trauma without sacrificing diagnostic accuracy.

The excellent diagnostic capability achieved by AI in conjunction with radiologists may be related to the complementarity of AI and human diagnostic abilities, which are influenced by different factors. For example, as shown in the present study, radiologists’ performance fluctuated across the day, reflecting cognitive errors due to human fatigue [7, 8]. In contrast, AI maintained a consistent accuracy rate, not subject to the same fatigue-related limitations. On the other hand, older patient age was one factor that decreased AI’s diagnostic accuracy, possibly because the algorithm has difficulty distinguishing age-related bone changes from actual fractures. Radiologists, consistent with previous studies, were not influenced by patient age [22], indicating the role of human expertise in mitigating such biases. Our results also demonstrated that fresh fractures, particularly Type I, were more accurately diagnosed by AI, suggesting that it can detect subtle lesions that may be missed by the human eye [9–11].

Certain factors can influence the diagnostic accuracy of both AI and radiologists, but for different reasons. Radiologists’ lower accuracy for female patients and for upper RFs may be due to anatomical features that affect fracture visibility, such as the smaller rib cross-sectional area and cortical thickness in females than in males [23]. AI’s decreased accuracy, by contrast, may be attributed to relative insufficiency of the training dataset. For example, its relatively inferior diagnostic capability for the upper ribs may reflect the lower incidence of fractures in these regions, which are shielded by surrounding structures, resulting in fewer training examples [18]. Indeed, our study shows that RFs are approximately normally distributed across the rib cage, with relatively lower incidences in the upper and lower portions, which also showed the lowest diagnostic efficiency in the present study. Moreover, challenges were noted in diagnosing fractures near the anterior and posterior junctions of the rib. The anatomical complexity of these areas makes RF detection difficult for both humans and AI [4, 18, 22], emphasizing the need for caution when integrating AI into clinical practice, especially in junction areas.

This preliminary study had several limitations. First, data were collected from a single center, and only one commercial AI system was analyzed. The results need to be validated with data from multiple centers and with other commercially available systems. Second, during the 4–5 year interval between the first and second evaluations, the average workload in our department increased by 20–40%; the AI-assisted radiologists therefore achieved their better diagnostic performance under greater work pressure. Third, because this is a retrospective study, the time spent on diagnosis could not be calculated. Previous studies have consistently demonstrated the ability of AI to shorten radiologists’ reading time when diagnosing RFs on chest CT [12–15], with reported report-time savings ranging from approximately 73.9 seconds to 771 seconds depending on study conditions [12, 15]. Meanwhile, our results show AI’s potential to replace one of the radiologists in traditional double-reading, which would reduce workload.

In conclusion, our results indicate that an AI-assisted additional-reader workflow may be a feasible strategy to reduce the workload of RF evaluation in real clinical practice. The present study highlights AI’s potential to offer consistent diagnostic performance across varying conditions and its superiority in detecting certain types of fractures. Further efforts are needed to refine AI algorithms to better accommodate patient-specific factors and to further enhance diagnostic capabilities in collaboration with radiologists.

Supporting information

S1 Fig. Distribution of the number of RFs (n = 826).

The most common fracture location was region A (40.2%), followed by Nj (23.4%). The prevalence of RFs approximated a Gaussian distribution, with the peak at the fifth rib. More than 56% of all RFs involved the 4th to 7th ribs. (Nj = near the junction of costal cartilage, A = anterior, L = lateral, P = posterior, Nv = near vertebral bodies).

https://doi.org/10.1371/journal.pone.0316732.s002

(TIF)

Acknowledgments

We are grateful for the guidance and support provided by Jingjing Cui and Yaoling Shen in the field of statistics.

References

  1. Sirmali M, Türüt H, Topçu S, Gülhan E, Yazici U, Kaya S, et al. A comprehensive analysis of traumatic rib fractures: morbidity, mortality and management. Eur J Cardiothorac Surg. 2003;24(1):133–138. pmid:12853057
  2. Kaewlai R, Avery LL, Asrani AV, Novelline RA. Multidetector CT of blunt thoracic trauma. Radiographics. 2008;28(6):1555–1570. pmid:18936021
  3. Chapman BC, Overbey DM, Tesfalidet F, Schramm K, Stovall RT, French A, et al. Clinical utility of chest computed tomography in patients with rib fractures. Arch Trauma Res. 2016;5(4):e37070. pmid:28144607
  4. Cho SH, Sung YM, Kim MS. Missed rib fractures on evaluation of initial chest CT for trauma patients: pattern analysis and diagnostic value of coronal multiplanar reconstruction images with multidetector row CT. Br J Radiol. 2012;85(1018):e845–e850. pmid:22514102
  5. Geijer H, Geijer M. Added value of double reading in diagnostic radiology, a systematic review. Insights Imaging. 2018;9(3):287–301. pmid:29594850
  6. Banaste N, Caurier B, Bratan F, Bergerot JF, Thomson V, Millet I. Whole-body CT in patients with multiple traumas: factors leading to missed injury. Radiology. 2018;289(2):374–383. pmid:30084754
  7. Lee CS, Nagy PG, Weaver SJ, Newman-Toker DE. Cognitive and system factors contributing to diagnostic errors in radiology. AJR Am J Roentgenol. 2013;201(3):611–617. pmid:23971454
  8. Kasalak Ö, Alnahwi H, Toxopeus R, Pennings JP, Yakar D, Kwee TC. Work overload and diagnostic errors in radiology. Eur J Radiol. 2023;167:111032. pmid:37579563
  9. Salinas MP, Sepúlveda J, Hidalgo L, Peirano D, Morel M, Uribe P, et al. A systematic review and meta-analysis of artificial intelligence versus clinicians for skin cancer diagnosis. NPJ Digit Med. 2024;7(1):125. pmid:38744955
  10. Brugnara G, Baumgartner M, Scholze ED, Deike-Hofmann K, Kades K, Scherer J, et al. Deep-learning based detection of vessel occlusions on CT-angiography in patients with suspected acute ischemic stroke. Nat Commun. 2023;14(1):4938. pmid:37582829
  11. Guermazi A, Tannoury C, Kompel AJ, Murakami AM, Ducarouge A, Gillibert A, et al. Improving radiographic fracture recognition performance and efficiency using artificial intelligence. Radiology. 2022;302(3):627–636. pmid:34931859
  12. Jin L, Yang J, Kuang K, Ni B, Gao Y, Sun Y, et al. Deep-learning-assisted detection and segmentation of rib fractures from CT scans: development and validation of FracNet. EBioMedicine. 2020;62:103106. pmid:33186809
  13. Wu M, Chai Z, Qian G, Lin H, Wang Q, Wang L, et al. Development and evaluation of a deep learning algorithm for rib segmentation and fracture detection from multicenter chest CT images. Radiol Artif Intell. 2021;3(5):e200248. pmid:34617026
  14. Castro-Zunti R, Chae KJ, Choi Y, Jin GY, Ko SB. Assessing the speed-accuracy trade-offs of popular convolutional neural networks for single-crop rib fracture classification. Comput Med Imaging Graph. 2021;91:101937. pmid:34087611
  15. Zhou QQ, Wang J, Tang W, Hu ZC, Xia ZY, Li XS, et al. Automatic detection and classification of rib fractures on thoracic CT using convolutional neural network: accuracy and feasibility. Korean J Radiol. 2020;21(7):869–879. pmid:32524787
  16. Yang C, Wang J, Xu J, Huang C, Liu F, Sun W, et al. Development and assessment of deep learning system for the location and classification of rib fractures via computed tomography. Eur J Radiol. 2022;154:110434. pmid:35797792
  17. Meng XH, Wu DJ, Wang Z, Ma XL, Dong XM, Liu AE, et al. A fully automated rib fracture detection system on chest CT images and its impact on radiologist performance. Skeletal Radiol. 2021;50(9):1821–1828. pmid:33599801
  18. Niiya A, Murakami K, Kobayashi R, Sekimoto A, Saeki M, Toyofuku K, et al. Development of an artificial intelligence-assisted computed tomography diagnosis technology for rib fracture and evaluation of its clinical usefulness. Sci Rep. 2022;12(1):8363. pmid:35589847
  19. Liu X, Wu D, Xie H, Xu Y, Liu L, Tao X, et al. Clinical evaluation of AI software for rib fracture detection and its impact on junior radiologist performance. Acta Radiol. 2022;63(11):1535–1545. pmid:34617809
  20. Rajpurkar P, Lungren MP. The current and future state of AI interpretation of medical images. N Engl J Med. 2023;388(21):1981–1990. pmid:37224199
  21. Shin HJ, Han K, Ryu L, Kim EK. The impact of artificial intelligence on the reading times of radiologists for chest radiographs. NPJ Digit Med. 2023;6(1):82. pmid:37120423
  22. Liu C, Chen Z, Xu J, Wu G. Diagnostic value and limitations of CT in detecting rib fractures and analysis of missed rib fractures: a study based on early CT and follow-up CT as the reference standard. Clin Radiol. 2022;77(4):283–290. pmid:35164929
  23. Holcombe SA, Huang Y, Derstine BA. Population trends in human rib cross-sectional shapes. J Anat. 2024;244(5):792–802. pmid:38200705