Abstract
The Pericapsular Nerve Group (PENG) block is a novel regional anesthesia technique that provides adequate analgesia while preserving motor function. This cross-sectional study evaluated the quality, reliability, and educational value of YouTube videos on the PENG block. Thirty-six videos were analyzed using validated scoring systems (GQS, JAMA, DISCERN, and modified DISCERN). Overall video quality was moderate, with higher scores observed in procedural and institutional videos. The findings highlight both the educational potential and the need for quality control in online medical content.
Citation: Aladağ E, Zora ME (2026) Assessing the accuracy and educational value of YouTube videos on a novel regional anesthesia technique (PENG block). PLoS One 21(2): e0341799. https://doi.org/10.1371/journal.pone.0341799
Editor: Cheong Kim, Dong-A University College of Business Administration, KOREA, REPUBLIC OF
Received: November 6, 2025; Accepted: January 12, 2026; Published: February 2, 2026
Copyright: © 2026 Aladağ, Zora. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data underlying the findings of this study were obtained from publicly available YouTube videos. Due to YouTube’s Terms of Service and copyright restrictions, the authors are not permitted to publicly redistribute the collected dataset. The dataset consists of video URLs, video metadata (including titles, upload dates, view counts, likes, and comments), and quality assessment scores extracted at the time of analysis. No special privileges were required to access the data. Other researchers can obtain the same data by independently searching the YouTube platform (https://www.youtube.com) using the search strategy, keywords, and inclusion criteria described in the Methods section.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Abbreviations: PENG, Pericapsular Nerve Group; GQS, Global Quality Score; JAMA, Journal of the American Medical Association benchmark; DISCERN, DISCERN instrument; mDISCERN, Modified DISCERN; ICC, Intraclass Correlation Coefficient; VPI, Video Power Index
Introduction
Effective postoperative pain management is essential for accelerating recovery, reducing opioid use and its side effects, and increasing patient satisfaction [1]. In recent years, ultrasound-guided fascial plane blocks have grown in popularity owing to their technical ease of application and their potential opioid-sparing effects.
The Pericapsular Nerve Group (PENG) block, which targets the extensive anterior capsular innervation of the hip joint, was described in 2018 by Girón-Arango et al. as a novel technique based on the administration of local anesthetic into the fascial plane between the psoas muscle and the superior pubic ramus [2]. The PENG block has gained attention for providing adequate analgesia in surgeries such as total hip arthroplasty and hip fracture, while preserving motor function [3,4]. Randomized controlled trials have demonstrated that the PENG block reduces postoperative pain, decreases analgesic requirements, and facilitates functional recovery [5,6].
In the context of education, the use of video-sharing platforms, such as YouTube, has become increasingly prevalent, particularly among medical students and professionals [7,8]. Due to its wide accessibility, low cost, and capacity to facilitate visual learning, YouTube has become an essential tool in modern medical education [9,10]. However, the uncontrolled nature of the platform may result in the dissemination of misleading or inadequate information through videos [11].
To date, no study has systematically examined the quality, reliability, and educational utility of YouTube videos related to the PENG block. This study aims to evaluate the content of PENG block videos available on YouTube and assess their quality, reliability, and usefulness. We hypothesize that videos related to the PENG block generally possess moderate quality and reliability, but these levels significantly differ according to the source of the video.
Materials and methods
Study design
This study is a cross-sectional content analysis examining the quality, reliability, and usefulness of PENG block videos published on the YouTube platform, owned by Alphabet Inc. Since the study was conducted solely on publicly available YouTube videos, no institutional ethics committee approval was required.
Video selection
Between February 26, 2025, and March 26, 2025, two researchers independently searched YouTube using the keywords “PENG block” and “pericapsular nerve group block.” Of the 450 videos identified, 122 were deemed relevant to the PENG block. After excluding duplicates and non-English videos, 36 videos meeting the inclusion criteria were included in the study. The inclusion criteria were as follows:
- (1). Demonstrating the technique and/or application of the PENG block.
- (2). Being in English.
The exclusion criteria included animations, conference recordings, advertisements, and videos with poor audio or visual quality. The titles, URLs, and fundamental metrics of the videos (views, likes, comments, upload date) were recorded.
Evaluation process
The videos were independently assessed by two anesthesiology specialists, blinded to each other’s evaluations. Four different scales were used in the assessments:
- Global Quality Score (GQS): Rates the overall quality and flow of the video on a scale of 1–5; 1–2 = low, 3 = moderate, 4–5 = high quality [12].
- Journal of the American Medical Association (JAMA) Benchmark: Scores authorship, attribution, disclosure, and currency on a scale of 0–4; 0–1 = low, 2–3 = moderate, 4 = high accuracy [13].
- DISCERN: Rates content reliability on a scale of 0–75; 0–39 = poor, 40–59 = moderate, ≥ 60 = good quality [14].
- Modified DISCERN (mDISCERN): Assesses content reliability on a scale of 0–5; < 3 = poor, 3 = moderate, > 3 = good reliability [12].
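The banding rules listed above can be expressed as a small helper function. This is a minimal sketch; the thresholds are taken directly from the scale descriptions above (for JAMA, scores of 0 and 1 are both treated as low accuracy), and the function name is illustrative, not part of any published instrument.

```python
def categorize(scale: str, score: float) -> str:
    """Map a raw score to the quality band described for each scale."""
    if scale == "GQS":        # 1-5: 1-2 low, 3 moderate, 4-5 high
        return "low" if score <= 2 else ("moderate" if score == 3 else "high")
    if scale == "JAMA":       # 0-4: <=1 low, 2-3 moderate, 4 high
        return "low" if score <= 1 else ("moderate" if score <= 3 else "high")
    if scale == "DISCERN":    # 0-75: 0-39 poor, 40-59 moderate, >=60 good
        return "poor" if score <= 39 else ("moderate" if score <= 59 else "good")
    if scale == "mDISCERN":   # 0-5: <3 poor, 3 moderate, >3 good
        return "poor" if score < 3 else ("moderate" if score == 3 else "good")
    raise ValueError(f"unknown scale: {scale}")
```

For example, `categorize("DISCERN", 60)` falls in the "good" band, while `categorize("GQS", 3)` is "moderate".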
Inter-rater agreement was calculated using the Intraclass Correlation Coefficient (ICC) and was found to be greater than 0.96 for all scales, indicating excellent agreement.
Video metrics analysis
The number of views, likes, comments, video duration, time since upload, and Video Power Index (VPI) were recorded for each video. Videos were also categorized by content type (presentation or demonstration) and uploader type (individual, academic, manufacturer, or educator).
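The paper does not state the VPI formula here. A formulation commonly used in the YouTube-quality literature (an assumption on our part, not necessarily the authors' exact calculation) multiplies the like ratio by the view ratio (views per day) and divides by 100:

```python
def video_power_index(views: int, likes: int, days_online: int,
                      dislikes: int = 0) -> float:
    """Compute VPI under an assumed, commonly cited definition:
    like ratio = likes / (likes + dislikes) * 100
    view ratio = views / days since upload
    VPI        = like ratio * view ratio / 100
    Since YouTube hid public dislike counts in 2021, many studies
    effectively use dislikes = 0, making the like ratio 100%.
    """
    total_votes = likes + dislikes
    like_ratio = 100.0 * likes / total_votes if total_votes else 0.0
    view_ratio = views / days_online
    return like_ratio * view_ratio / 100.0
```

For instance, a video with 1,000 views, 50 likes, and 50 dislikes after 10 days online has a like ratio of 50%, a view ratio of 100 views/day, and thus a VPI of 50.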
Statistical analysis
Data analysis was performed using Jamovi version 2.6.22 (Windows). Data distribution was assessed with the Shapiro–Wilk test and was found to be non-normal; therefore, non-parametric tests were used. Comparisons between groups were made using the Mann–Whitney U test, and relationships between scales were evaluated with Spearman’s correlation coefficient. Statistical significance was set at p < 0.05.
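The analysis pipeline described above (normality check, non-parametric group comparison, rank correlation) can be reproduced in Python with SciPy. The sketch below uses synthetic, skewed data as a stand-in for the study's score distributions; the group labels and sample sizes are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for two groups' DISCERN scores
# (illustrative data, not the study's dataset).
group_a = rng.exponential(scale=20, size=18)   # skewed, non-normal
group_b = rng.exponential(scale=30, size=18)

# 1. Normality check (Shapiro-Wilk); p < 0.05 suggests non-normality,
#    motivating non-parametric tests.
w_stat, p_normality = stats.shapiro(np.concatenate([group_a, group_b]))

# 2. Between-group comparison (Mann-Whitney U, two-sided).
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# 3. Monotonic association between two score sets (Spearman's rho).
rho, p_rho = stats.spearmanr(group_a, group_b)

print(f"Shapiro p={p_normality:.3f}, MWU p={p_mw:.3f}, rho={rho:.2f}")
```

Inter-rater agreement (ICC) is not in SciPy itself; dedicated packages such as `pingouin` provide it, or it can be computed from a two-way ANOVA decomposition.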
Results
Inter-rater agreement
The accuracy, quality, and reliability of the videos were assessed by two independent reviewers using the GQS, JAMA, DISCERN, and Modified DISCERN scales. Inter-rater consistency was excellent across all scales, with an ICC greater than 0.96 (Table 1).
Overall quality, accuracy, and reliability
According to the GQS, 13.9% of the videos were of high quality (scores 4–5), 44.4% were of moderate quality (score 3), and 41.7% were of low quality (scores 1–2). According to the JAMA benchmark, 19.4% exhibited high accuracy, 75.0% moderate accuracy, and 5.6% low accuracy. According to the Modified DISCERN, 25.0% demonstrated good reliability, 36.1% moderate, and 38.9% poor reliability. According to the DISCERN scale, 13.9% of the videos were of good quality, 55.6% were moderate, and 30.6% were of low quality (Table 2).
Video metrics
The mean video duration was 10.9 ± 9.16 minutes (median: 7.39). Videos had a mean of 23,766 views, 230 likes, and 8.92 comments, with a mean Video Power Index (VPI) of 101. The detailed distribution of the video metrics is presented in Table 3.
Distribution by content type and source
Of the videos, 97.2% featured presentation content and 30.6% included application (demonstration) content. Because the source categories were not mutually exclusive, percentages sum to more than 100%: 91.7% of videos were associated with individual uploaders, 100% with academic sources, 97.2% were educational, and 22.2% were uploaded by manufacturers.
Comparison of application content
Videos with application content were shorter in duration compared to those without (8.14 min vs. 12.14 min; p = 0.276); however, interaction metrics and some quality scores were significantly higher (Table 4). In particular, VPI (p = 0.008), number of views (p = 0.002), likes (p = 0.009), comments (p = 0.022), and DISCERN score (p = 0.025) were higher in application videos. There was also a significant difference in GQS (p = 0.002), whereas no significant differences were observed for JAMA and Modified DISCERN scores.
Comparison between individual and other sources
Videos from non-individual sources did not differ significantly from individually uploaded videos in metrics such as VPI, likes, comments, or quality scores.
Comparison between manufacturer and other sources
Videos uploaded by manufacturers had significantly higher numbers of views (p = 0.021), likes (p = 0.007), comments (p = 0.001), as well as significantly higher DISCERN (p = 0.008), GQS (p = 0.001), and Modified DISCERN (p = 0.014) scores (Table 5). No significant difference in video duration was observed (p = 0.837).
Correlation analysis
Video duration was positively correlated with JAMA (r = 0.434, p < 0.01), Modified DISCERN (r = 0.422, p < 0.05), and DISCERN (r = 0.340, p < 0.05) scores. Engagement metrics (views, likes, comments) showed strong positive correlations with each other (r > 0.8, p < 0.001). There was also a robust correlation between DISCERN and GQS (r = 0.882, p < 0.01; Table 6).
Discussion
In our study, the overall information quality and reliability of the evaluated YouTube videos were found to be moderate. This finding is consistent with previous studies reporting that health-related content on YouTube is generally low-to-moderate or suboptimal in quality [15,16]. For example, a comprehensive analysis of chemotherapy-related YouTube videos reported an average quality of only “moderate-to-low” [15]. Similarly, more than half of the videos on testicular cancer were found to be of poor quality, and only a few met high scientific standards [16]. These findings suggest that while many health videos on YouTube meet basic accuracy criteria, they often contain superficial or incomplete information. Indeed, the study by Loeb et al. revealed that 77% of popular prostate cancer videos included misleading or biased information [17]. This highlights that content with high views and likes on YouTube is not necessarily reliable. Therefore, health professionals should remain vigilant regarding the information patients obtain from YouTube and intervene to correct potential misunderstandings when necessary.
The type of content and presentation format play an essential role in both engagement and quality. In our study, videos with application (demonstration) content received significantly more views, likes, and comments compared to presentation-only videos, and also exhibited higher average quality and reliability scores. This suggests that visual content demonstrating practical applications is more engaging and educational for viewers. The literature also supports that patient education videos with visual and demonstrative elements achieve higher engagement than other categories. For example, patient information videos—comprising the majority of chemotherapy-related YouTube content—received significantly more views and likes (p < 0.05) than technical videos intended for healthcare professionals, and their quality scores were generally moderate to high [15]. This suggests that well-structured, application-based content can both reach a broader audience and achieve satisfactory content quality.
Nevertheless, popularity alone should not be considered a guarantee of quality. Some studies have shown that longer and more detailed videos are associated with significantly higher quality scores [18]. This suggests that comprehensive presentations add more value to viewers, but they must still be designed to remain engaging and understandable, even when lengthy.
The source of the video (i.e., the uploader type) is another key determinant of content reliability. In our study, videos uploaded by manufacturers or institutional sources demonstrated higher average quality and reliability scores than those uploaded by individual users. This indicates that videos produced by official organizations or industry may be prepared with greater rigor. Indeed, previous studies have shown that videos uploaded by healthcare professionals, universities, or hospitals score significantly higher in quality compared to those prepared by independent individuals. For example, Duran et al. reported that testicular cancer videos uploaded by urologists and academic institutions had significantly higher DISCERN, JAMA, and GQS scores than those from other sources [16]. Similarly, a study of medical videos on YouTube found that content produced by academic institutions achieved the highest reliability scores. However, some commercial company channels also scored above average on specific measures, such as HONcode criteria [17]. Notably, in our study, no statistically significant difference in quality was observed between individual and institutional uploaders. This suggests that while institutional videos tend to have higher scores on average, expert individual content creators can also deliver high-quality information comparable to institutional sources [19]. In other words, individual physicians or specialists can also produce reliable, widely viewed content on platforms like YouTube. Nevertheless, it should not be overlooked that source reliability can be inconsistent; as Loeb et al. observed, even among the most viewed videos, an inverse relationship may exist between view count and scientific quality [17]. Therefore, viewers should pay attention to the expertise of the video source and verify the information presented through independent references.
Another noteworthy finding in our evaluation is the consistency between the assessment scales used. Positive and strong correlations were observed among DISCERN, GQS, and JAMA scores, particularly a very strong correlation between DISCERN and GQS (e.g., r ≈ 0.88). This indicates that different assessment tools yield consistent results regarding the overall quality of a video, meaning that high-quality content scores well across all measures regardless of the specific scale used. In the literature, DISCERN and GQS scores have been reported to correlate closely in medical YouTube videos; for instance, one study reported a significant positive correlation between these two scores [20]. In our research, the fact that the scales produced mutually confirming results suggests that our evaluation method was reliable and presented a consistent picture of video quality classification. Although a single scale may suffice to indicate overall quality, future research would benefit from applying multiple scales, as their agreement provides cross-validation and enhances reliability.
In light of these findings, several important implications for the future of health communication on YouTube can be drawn. First, YouTube has great potential in the health domain; when used appropriately, the platform can serve as a powerful tool for educating and informing the public. Indeed, a recent study emphasized that videos on myocardial infarction on YouTube provided consistent, high-quality information, suggesting that YouTube can contribute to raising awareness and facilitating early intervention in this critical health issue [21]. However, current fluctuations in content quality and the risk of misinformation may limit this potential. Therefore, both platform administrators and health authorities have essential responsibilities in improving content oversight and quality. To ensure the delivery of reliable health information on YouTube and similar platforms, several strategies can be proposed:
- (1). Healthcare professionals and academic institutions should be encouraged to take a more active role in producing evidence-based content on digital platforms. Educational videos prepared by experts in an understandable language, incorporating demonstrative elements, can combine high quality with audience engagement.
- (2). The publication of videos according to established standards should be promoted. For example, applying internationally recognized HONcode principles or checklists developed for medical content production can improve quality. For surgical videos, the use of guidelines such as LAP-VEGaS has been recommended to enhance educational value [22].
- (3). At the platform level, steps should be taken to make high-reliability content more visible and to identify and flag misleading content. YouTube’s collaboration with healthcare organizations to grant verified badges to trustworthy channels and to improve its search algorithm to prioritize reliable sources is valuable in this regard.
- (4). Finally, efforts to improve patients’ digital health literacy should be intensified. Teaching viewers how to critically evaluate online video content (e.g., verifying sources, recency, and level of evidence) can help mitigate the impact of misinformation.
The findings of this study highlight the importance of striking a balance between optimism and caution in online health communication. While the current moderate quality leaves room for improvement, the success of demonstrative and expert-sourced content suggests that the educational potential of the platform can be enhanced with the right strategies. Future research should examine the direct effects of YouTube health videos on patients’ clinical decision-making, treatment adherence, and health outcomes [23]. In doing so, it will become clearer which types of content truly provide benefit, and content creators can be guided accordingly. In conclusion, in the digital age, platforms like YouTube play a central role in disseminating health information globally; ensuring this role develops positively requires guaranteeing access to accurate information. Strengthened collaboration between health authorities, content creators, and platform providers will likely facilitate the future development of higher-quality, correct, and reliable health communication on YouTube.
Conclusion
This study revealed that health-related videos on YouTube generally offer moderate quality and reliability but differ significantly by content type and source. Videos with application-based, visually rich content attracted more views and likes and achieved better quality scores. Videos uploaded by manufacturers and institutional sources likewise provided more reliable content than those uploaded by individual users, although the difference between individual and non-individual uploaders did not reach statistical significance. Moreover, the strong agreement among quality assessment tools such as DISCERN, GQS, and JAMA supports their reliability as evaluation measures.
These findings highlight both the potential of YouTube for health education and awareness, as well as the risks associated with variability and misinformation. Future research should assess the direct impact of YouTube health videos on patients’ clinical decision-making, treatment adherence, and health outcomes, as well as scientifically determine which content types are genuinely beneficial. This would enable both content creators and health authorities to adopt more strategic approaches to addressing health issues.
References
- 1. Steinhaus ME, Rosneck J, Ahmad CS, Lynch TS. Outcomes After Peripheral Nerve Block in Hip Arthroscopy. Am J Orthop (Belle Mead NJ). 2018;47(6). pmid:29979805
- 2. Girón-Arango L, Peng PWH, Chin KJ, Brull R, Perlas A. Pericapsular Nerve Group (PENG) Block for Hip Fracture. Reg Anesth Pain Med. 2018;43(8):859–63. pmid:30063657
- 3. Morrison C, Brown B, Lin D-Y, Jaarsma R, Kroon H. Analgesia and anesthesia using the pericapsular nerve group block in hip surgery and hip fracture: a scoping review. Reg Anesth Pain Med. 2021;46(2):169–75. pmid:33109730
- 4. Karaoğlan M, Küçükçay Karaoğlan B. PENG for chronic pain: the clinical effectiveness of pericapsular nerve group block in chronic hip pain. Hip Int. 2024;34(4):524–36. pmid:38380579
- 5. Et T, Korkusuz M. Comparison of the pericapsular nerve group block with the intra-articular and quadratus lumborum blocks in primary total hip arthroplasty: a randomized controlled trial. Korean J Anesthesiol. 2023;76(6):575–85. pmid:37013389
- 6. Kukreja P, Uppal V, Kofskey AM, Feinstein J, Northern T, Davis C, et al. Quality of recovery after pericapsular nerve group (PENG) block for primary total hip arthroplasty under spinal anaesthesia: a randomised controlled observer-blinded trial. Br J Anaesth. 2023;130(6):773–9. pmid:36964012
- 7. Burke S, Snyder S, Rager R. An Assessment of Faculty Usage of YouTube as a Teaching Resource. IJAHSP. 2009.
- 8. Duncan I, Yarwood-Ross L, Haigh C. YouTube as a source of clinical skills education. Nurse Educ Today. 2013;33(12):1576–80. pmid:23332710
- 9. Tulgar S, Selvi O, Serifsoy TE, Senturk O, Ozer Z. YouTube as an information source of spinal anesthesia, epidural anesthesia and combined spinal and epidural anesthesia. Rev Bras Anestesiol. 2017;67(5):493–9. pmid:28527780
- 10. Curran V, Simmons K, Matthews L, Fleet L, Gustafson DL, Fairbridge NA, et al. YouTube as an Educational Resource in Medical Education: a Scoping Review. Med Sci Educ. 2020;30(4):1775–82. pmid:34457845
- 11. Evran T, İlhan S. Evaluation of anesthesia-related videos in patients with obesity on the youtube platform in terms of quality, reliability, and usefulness. Medicine (Baltimore). 2025;104(20):e42445. pmid:40388777
- 12. Kaptı HB, Erdem B. Evaluation of the Reliability and Quality of YouTube Videos on Congenital Nasolacrimal Duct Obstruction. Cureus. 2023;15(3):e36365. pmid:36945232
- 13. Holge S, Gogikar A, Sultana R, Rathod U, Chetarajupalli C, Laxmi Supriya Y. Quality and Reliability of YouTube Videos on Myocardial Infarction: A Cross-Sectional Study. Cureus. 2023;15(8):e43268. pmid:37692661
- 14. Erdogan G. Female genital cosmetic surgery (FGCS): Evaluation of YouTube videos. J Gynecol Obstet Hum Reprod. 2021;50(4):102102. pmid:33631405
- 15. Sahin E, Seyyar M. Assessing the scientific quality and reliability of YouTube videos about chemotherapy. Medicine (Baltimore). 2023;102(45):e35916. pmid:37960752
- 16. Duran MB, Kizilkan Y. Quality analysis of testicular cancer videos on YouTube. Andrologia. 2021;53(8):e14118. pmid:34009641
- 17. Loeb S, Sengupta S, Butaney M, Macaluso JN Jr, Czarniecki SW, Robbins R, et al. Dissemination of Misinformative and Biased Information about Prostate Cancer on YouTube. Eur Urol. 2019;75(4):564–7. pmid:30502104
- 18. Zengin O, Onder ME. Educational quality of YouTube videos on musculoskeletal ultrasound. Clin Rheumatol. 2021;40(10):4243–51. pmid:34059985
- 19. Bolac R, Ozturk Y, Yildiz E. Assessment of the Quality and Reliability of YouTube Videos on Fuchs Endothelial Corneal Dystrophy. Beyoglu Eye J. 2022;7(2):134–9. pmid:35692267
- 20. Batar S, Söylemez MS, Kemah B, Cepni SK. A cross-sectional study on reliability and quality of YouTube® videos related to hallux valgus and evaluation of newly developed hallux valgus-specific survey tool. Digit Health. 2023;9. pmid:37113253
- 21. Holge S, Gogikar A, Sultana R, Rathod U, Chetarajupalli C, Laxmi Supriya Y. Quality and Reliability of YouTube Videos on Myocardial Infarction: A Cross-Sectional Study. Cureus. 2023;15(8):e43268. pmid:37692661
- 22. de’Angelis N, Gavriilidis P, Martínez-Pérez A, Genova P, Notarnicola M, Reitano E, et al. Educational value of surgical videos on YouTube: quality assessment of laparoscopic appendectomy videos by senior surgeons vs. novice trainees. World J Emerg Surg. 2019;14:22. pmid:31086560
- 23. Abdelmohsen SM, Eldesouky R, Celine T, Nguyen T, Patel P. YouTube videos as health decision aids for the public: an integrative review. BMJ Open. 2020;10(11):e037320.