
Can deep learning identify humans by automatically constructing a database with dental panoramic radiographs?

  • Hye-Ran Choi,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Advanced General Dentistry, Inje University Sanggye Paik Hospital, Seoul, Korea

  • Thomhert Suprapto Siadari ,

    Roles Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    ‡ TSS and D-YK contributed equally to this work as second authors.

    Affiliation Artificial Intelligence Research Center, Digital Dental Hub Incorporation, Seoul, Korea

  • Dong-Yub Ko ,

    Roles Formal analysis, Software, Validation, Writing – original draft, Writing – review & editing

    ‡ TSS and D-YK contributed equally to this work as second authors.

    Affiliation Artificial Intelligence Research Center, Digital Dental Hub Incorporation, Seoul, Korea

  • Jo-Eun Kim,

    Roles Investigation, Writing – review & editing

    Affiliation Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea

  • Kyung-Hoe Huh,

    Roles Resources, Writing – review & editing

    Affiliation Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea

  • Won-Jin Yi,

    Roles Resources, Writing – review & editing

    Affiliation Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea

  • Sam-Sun Lee,

    Roles Resources, Writing – review & editing

    Affiliation Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea

  • Min-Suk Heo

    Roles Conceptualization, Formal analysis, Project administration, Supervision, Writing – review & editing

    hmslsh@snu.ac.kr

    Affiliation Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea

Abstract

The aim of this study was to propose a novel method for identifying individuals by recognizing dentition changes, together with a human identification process using deep learning. Recent and past images of adults aged 20–49 years with at least two dental panoramic radiographs (DPRs) were treated as postmortem (PM) and antemortem (AM) images, respectively. The dataset contained 1,029 paired PM-AM DPRs taken from 2000 to 2020. After constructing a database of AM dentitions, the degree of similarity was calculated and sorted in descending order. The rank of the AM image matching an unknown PM image was measured by extracting candidate groups (CGs). The percentage of ranks falling within the CG was calculated as the success rate, and similarity scores were compared between groups based on imaging time intervals. The matched AM images were ranked in the CG with success rates of 83.2%, 72.1%, and 59.4% when extracting the top 20.0%, 10.0%, and 5.0%, respectively. The success rates depended on sex and were higher for women than for men: the success rates for extracting the top 20.0%, 10.0%, and 5.0% were 97.2%, 81.1%, and 66.5%, respectively, for women and 71.3%, 64.0%, and 52.0%, respectively, for men. The similarity score differed significantly between groups divided at an imaging time interval of 17.7 years. This study showed the outstanding performance of a convolutional neural network using dental panoramic radiographs in effectively reducing the size of the AM CG when identifying humans.

Introduction

Human identification is crucial when catastrophes cause large-scale deaths. After the 2011 East Japan earthquake and tsunami, approximately 10% of the victims whose identities were confirmed were identified by dental examination. A similar pattern was observed in the 2003 Daegu subway fire disaster in Korea. Disaster victim identification (DVI) by dentists has therefore garnered growing attention. Teeth and dental treatment types are useful in human identification because they are resistant to destruction by fire [1–3]. The use of teeth, among the hardest tissues in the human body, helps reduce the number of potential candidates and accurately confirm their identity.

With an increasingly aging society, identifying individuals by their dental restorations becomes more feasible because the number and complexity of dental restorations increase with age [4]. A previous study [5] reported that individuals who had received several complicated dental treatments were more easily identified than those who had received few or none. A dental panoramic radiograph (DPR) is a reliable tool for human identification because it is regularly taken and updated. DPRs contain most of the major dental information and are widely used in clinical practice, and therefore have high potential for identification purposes.

In recent years, artificial intelligence (AI) has assumed an important role in the types of image processing that are cumbersome or time-consuming for human observers [6]. Object detection, image classification, and pattern recognition using deep learning have developed rapidly and are applied to medical diagnoses using dental imaging [7,8]. The convolutional neural network (CNN) is prominent in the image domain and is particularly useful for finding patterns in image recognition [9], including radiological applications in general medicine [9,10]. Object detection in particular has recently undergone remarkable development, leading to efficient models through approaches such as end-to-end learning [11–13], transfer learning [14,15], active learning [16,17], and explainable deep learning [18,19]. Recently, explainable deep learning models have been increasingly introduced into biomedical image analysis to overcome the inherent black-box problem.

AI has been introduced most actively in diagnostic medical imaging. Research on human identification using deep neural networks began with detecting teeth and dental prostheses in DPRs [20–32]. Building on these strong detection results, the current trend in deep learning for human identification is the comprehensive establishment and automation of the identification process. To date, however, studies that simply compare DPR images predominate. The mainstream themes of existing studies have been to find possible candidates based on image similarity, or to measure similarity based on various identifiers by extracting certain features or patterns from DPRs. Extracting features or comparing images directly can be intuitive and fast, but it may also be affected by several endogenous variables such as the distortion, contrast, and resolution of DPRs [33].

To date, few studies have conducted comprehensive simulations of the human identification process using deep neural networks. To achieve truly comprehensive human identification, an approach that employs database construction is essential; this takes a step beyond existing studies, which are primarily based on traditional comparisons of postmortem (PM) and antemortem (AM) images. Once the database is constructed, various parameters can be extracted from the images, and developing a human identification methodology from diverse viewpoints ultimately becomes possible. This study aimed to develop a novel automated system that can identify humans as a basis for DVI in the field of forensic science.

Material and methods

DPR-pair dataset for constructing a database for the human identification model

The Institutional Review Board of Seoul National University Dental Hospital (SNUDH) approved this study (ERI20032) and waived the need for informed consent. All experiments, including data acquisition, were conducted under the relevant research regulations and guidelines. DPRs of 1,029 individuals were used to simulate the postulated human identification process. The dataset contained 2,058 DPRs comprising paired recent and past DPRs for each individual. These images were randomly selected after removing identifiable patient information and were retrospectively retrieved from the picture archiving and communication system at SNUDH. None of the authors had access to information that could identify individual participants during or after data collection. The radiographic images were obtained using the panoramic radiographic machines OP-100 (Instrumentarium Imaging, Tuusula, Finland) and Rayscan Alpha P (Ray, Gyeonggi, Korea).

From January 2000 to November 2020, DPRs accompanied by age and sex information were collected from examinations for dental treatment or diagnosis. The recent and past DPRs of adults aged 20–49 years (465 men and 564 women) who had at least two DPRs were assumed to be the PM and AM images, respectively. The timepoints at which the PM and AM DPRs were recorded were also collected as six digits in the format of two-digit year, two-digit month, and two-digit day. The age range was limited to 20–49 years to minimize the impact of dentition changes and image distortions over time, which could lead to false identifications. This range excluded younger individuals with deciduous teeth and older individuals with higher rates of tooth loss, ensuring a more consistent and reliable analysis. Additionally, the age criteria used in the previous study [21] were maintained to ensure the validity of the model.
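The imaging time interval in days between two such six-digit timestamps can be computed with standard date arithmetic. A minimal sketch in Python (function names hypothetical; the fixed 2000 century reflects the 2000–2020 collection window):

```python
from datetime import date

def parse_stamp(stamp: str) -> date:
    """Parse a six-digit YYMMDD stamp. Two-digit years in this dataset
    fall within 2000-2020, so the century is fixed at 2000."""
    return date(2000 + int(stamp[:2]), int(stamp[2:4]), int(stamp[4:6]))

def interval_days(am_stamp: str, pm_stamp: str) -> int:
    """Imaging time interval: recent (PM) date minus past (AM) date."""
    return (parse_stamp(pm_stamp) - parse_stamp(am_stamp)).days

# e.g. an AM image from 2003-05-01 and a PM image from 2020-11-30
print(interval_days("030501", "201130"))  # → 6423 days
```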

Detection of natural teeth and dental treatment types using the pre-trained model

Natural teeth and dental treatment types were detected, and tooth numbers were identified, using a pre-trained object detection network (Fig 1), a CNN modified from EfficientDet-D3 [21]. DPRs (n = 1,638) were collected in advance to construct this model; these were mutually exclusive of the existing 2,058 DPRs of 1,029 individuals. A fully web browser-based labeling system developed by Digital Dental Hub (DDH, Inc., Seoul, Korea) was used for the annotation task.

Fig 1. Deep neural network architecture for the detection of natural teeth and dental treatment types.

https://doi.org/10.1371/journal.pone.0312537.g001

Table 1 presents the overall performance of the model, the composition of the dataset, and a description of the subnetwork for each task in the network architecture. Performance was evaluated using average precision and average recall at an intersection over union (IoU) of 0.5. A detection was considered a true positive only when its prediction score exceeded 50% and its IoU exceeded the 0.5 threshold.
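The true-positive criterion above can be made concrete with a short sketch; the box representation and function names here are illustrative, not the authors' implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred_box, pred_score, gt_box):
    """A detection counts as a true positive only when its prediction
    score exceeds 0.5 and its IoU with the ground-truth box meets the
    0.5 threshold, as in the evaluation described above."""
    return pred_score > 0.5 and iou(pred_box, gt_box) >= 0.5
```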

Table 1. The performance of the pre-trained network for object detection in dental panoramic radiographs.

https://doi.org/10.1371/journal.pone.0312537.t001

Data post-processing

A database was developed by generating dental information with the pre-trained model from each PM-AM DPR pair of the 1,029 individuals. The detectable information that could be extracted from each DPR using deep neural networks varied substantially; it was therefore simplified to focus on the main dental components. This dental information consisted of natural teeth, including tooth numbers, prostheses, treated root canals, and implants. Detection of dental pathologic conditions such as dental caries or periodontal lesions was not considered.

Based on the state of each tooth and the direction in which it can change, the data on detected natural teeth and dental treatment types were post-processed into objective values. For each tooth condition, the possible past statuses were determined according to how the condition can change (Fig 2). A summary of the detailed past statuses is provided as supplementary data.

Fig 2. Index system based on the degree of dental treatment.

The tooth status can change only in the direction of the arrow. Missing teeth, pontics, and implants can be converted into one another.

https://doi.org/10.1371/journal.pone.0312537.g002
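Under this index system, checking whether an AM tooth state could plausibly have preceded a PM state reduces to following the arrows of Fig 2. A sketch with a hypothetical six-state set (the actual states and distances are defined in the supplementary data; "missing" here stands in for both missing teeth and pontics):

```python
# Hypothetical six-state index modeled after Fig 2; the real state set
# is given in the paper's supplementary data.
STATES = ["natural", "restored", "root_canal", "crown", "missing", "implant"]

# Allowed forward transitions (arrow direction in Fig 2). "missing" and
# "implant" are mutually convertible, mirroring the interconvertibility
# of missing teeth, pontics, and implants.
FORWARD = {
    "natural":    {"restored", "root_canal", "crown", "missing", "implant"},
    "restored":   {"root_canal", "crown", "missing", "implant"},
    "root_canal": {"crown", "missing", "implant"},
    "crown":      {"missing", "implant"},
    "missing":    {"implant"},
    "implant":    {"missing"},
}

def plausible_past(pm_state: str, am_state: str) -> bool:
    """Could a tooth observed as am_state have become pm_state?
    True when the states match or the change follows an arrow."""
    return pm_state == am_state or pm_state in FORWARD.get(am_state, set())
```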

Evaluation of the process of human identification

To simulate the automatic process of human identification, the algorithm for human identification was implemented by deriving possible candidates from the AM database, assuming the situation of an unknown DPR in the PM database. Three phases were established (Fig 3).

Fig 3. Scheme of the simulation of human identification using the automated human identification process, based on the dentition of an unknown postmortem (PM) individual.

In Phase 1, a database of antemortem (AM) dentition was constructed. In Phase 2, similarity scores were calculated for each pair of PM-AM dental panoramic radiographs. In Phase 3, the scored similarities were sorted in descending order and extracted for the top 20.0%, 10.0%, and 5.0%. The matched rank was calculated as the success rate for the percentage.

https://doi.org/10.1371/journal.pone.0312537.g003

In Phase 1, using the pre-trained model, all tooth numbers and therapeutic identifiers of the 1,029 AM individuals were detected and post-processed. In Phase 2, a difference score was calculated to estimate the similarity score of each PM-AM pair. This difference score was evaluated by comparing the dental condition at each tooth position. A score of 0 points indicated that the position and state were the same, and a higher score indicated a greater distance between the tooth conditions. During this process, a relatively large score (maximum, 10 points) was assigned to tooth positions whose AM state could not have preceded the PM state, to increase the difference. A penalty of 10 points was imposed when an unexplainable discrepancy occurred. S1 Table summarizes the methods used to calculate the difference and similarity scores. The difference score was subtracted from 1 to obtain the final similarity score.
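The Phase 2 scoring might be sketched as follows. The DISTANCE values are hypothetical stand-ins for S1 Table, and the normalization inside similarity() is our assumption about how the summed difference is scaled before being subtracted from 1:

```python
PENALTY = 10  # points imposed for an unexplainable discrepancy

# Hypothetical per-transition distances; the actual values are given
# in S1 Table of the paper.
DISTANCE = {("natural", "restored"): 1, ("natural", "crown"): 3}

def position_difference(pm_state, am_state, plausible):
    """Per-tooth difference: 0 when the states match, a distance-based
    score when the change is explainable, and the full penalty when
    the AM state could not have preceded the PM state."""
    if pm_state == am_state:
        return 0
    if not plausible(pm_state, am_state):
        return PENALTY
    return DISTANCE[(am_state, pm_state)]

def similarity(diff_scores):
    """Similarity = 1 minus the difference score, here with the summed
    per-tooth differences normalized to [0, 1] by the worst case."""
    max_total = PENALTY * len(diff_scores)
    return 1 - sum(diff_scores) / max_total if max_total else 1.0
```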

Table 2. The Student’s t-test analysis comparing the groups with shorter and longer imaging time intervals.

https://doi.org/10.1371/journal.pone.0312537.t002

In Phase 3, similarity scores were calculated by 1:1 matching of the dentition information of one anonymous DPR against all AM dentitions. After sorting the derived similarity scores in descending order, the top 20.0%, 10.0%, and 5.0% candidate groups (CGs) were extracted. The success rate was calculated after checking whether the matched AM existed within this group, as follows:

Success rate (%) = (number of PM cases whose matched AM was ranked within the extracted CG / total number of PM cases) × 100

The same procedure as described above was conducted within each sex group. All aforementioned processes were fully automated based on a deep neural network.
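The Phase 3 ranking and success-rate computation can be sketched as below; the data layout (a dict mapping each PM case to its true AM identity and per-AM similarity scores) is illustrative:

```python
def success_rate(cases, top_fraction):
    """Percentage of PM cases whose true AM match falls inside the
    candidate group (CG) formed by the top `top_fraction` of AM
    records, sorted by similarity in descending order. `cases` maps
    each PM id to (true_am_id, {am_id: similarity})."""
    hits = 0
    for true_am, scores in cases.values():
        ranked = sorted(scores, key=scores.get, reverse=True)
        cg_size = max(1, round(len(ranked) * top_fraction))
        hits += true_am in ranked[:cg_size]
    return 100.0 * hits / len(cases)
```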

Comparison of similarity scores among the groups, based on the imaging time interval

The imaging time interval was calculated in days by subtracting the date on which the first (AM) DPR was taken from the date on which the most recent (PM) DPR was taken. A cut-off value was determined by regression analysis of whether the similarity scores differed statistically according to the interval between the PM and AM imaging times. The imaging time intervals, a sequence of consecutive numbers, were first divided into units of 100 days to define discrete sections, and dummy variables were generated. The dummy variable was set to 0 or 1 when the imaging time interval was below or above the discrete value, respectively. An individual regression model was used to establish the relationship between each discrete value of the imaging time interval and the dependent variable, the similarity score. The minimum imaging time interval was 1 day and the maximum was 7,732 days; therefore, 77 regression equations were derived from the 100-day divisions, as follows:

Yi = β0j + βjXij + εi

where i and j are natural numbers ranging from 1 to 1,029 (i = 1…1029) and from 1 to 77 (j = 1…77), respectively; Yi is the similarity score of the i-th of the 1,029 paired PM-AM DPRs, and Xij is the dummy variable obtained by dividing the imaging time intervals into units of 100 days.
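With a binary dummy regressor, each of the 77 simple regressions reduces to a difference of group means, so the coefficient sweep might be sketched as (names illustrative):

```python
def dummy_coefficients(intervals, similarities, unit=100, n_bins=77):
    """For each threshold j*unit days, build a 0/1 dummy (1 when the
    imaging time interval exceeds the threshold) and fit the simple
    regression Y = b0 + bj*X. With a binary regressor, bj equals
    mean(Y | X=1) - mean(Y | X=0)."""
    coeffs = []
    for j in range(1, n_bins + 1):
        threshold = j * unit
        above = [y for x, y in zip(intervals, similarities) if x > threshold]
        below = [y for x, y in zip(intervals, similarities) if x <= threshold]
        if above and below:
            coeffs.append(sum(above) / len(above) - sum(below) / len(below))
        else:
            coeffs.append(float("nan"))  # dummy has no variation here
    return coeffs
```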

The 77 derived regression coefficients were plotted to analyze their trend. The section in which the trend changed most rapidly was set as the cut-off value. Finally, the groups were divided based on this cut-off value, and Student's t-test was conducted. SPSS software version 23.0 for Windows (IBM, Chicago, IL, USA) was used for the statistical analyses.

Results

Descriptive statistics

The mean age of the individuals was 35.5±15.3 years; the youngest and oldest individuals were 20 and 49 years old, respectively. The average imaging time interval between the most recent and past DPRs was 2,197.5±1,934.7 days; the minimum was 1 day and the maximum was 7,732 days.

Analysis of the similarity scores between the imaging time interval groups

The plot of each derived regression coefficient is shown in S1 Fig. The section with an imaging time interval between 6,400 and 6,500 days, in which the trend changed most rapidly, was chosen for the cut-off: here the regression coefficient showed the most rapid change, from -0.22 to 0.15. The cut-off value was therefore set at 6,450 days, and the imaging time intervals were divided into two groups accordingly. As shown in Table 2, the similarity score differed significantly between the two groups divided at an imaging time interval of 17.7 years (p = 0.006).

Performance of the human identification process

The matched AM was ranked in the CG with a success rate of 83.2% for the extraction of the top 20.0% CG; that is, the target individual was, on average, within the top 20.0% CG with 83.2% probability. The success rates were 72.1% and 59.4% for the extraction of the top 10.0% and 5.0%, respectively. The success rate differed based on sex (S2 Table): the success rates for the extraction of the top 20.0%, 10.0%, and 5.0% were 71.3%, 64.0%, and 52.0%, respectively, for men and 97.2%, 81.1%, and 66.5%, respectively, for women. S3 Table presents the performance of the human identification process in the group with an imaging time interval of <6,450 days.

Discussion

The aims of this study were to construct a database of individuals’ dentition with DPRs, to use a novel method based on deep neural networks to identify individuals by recognizing their dentition changes using a pre-trained object detection network, and to evaluate the feasibility of this novel method. We found that the developed method was useful in detecting individuals by their dentition.

To date, various models for human identification have been proposed. A recent study [34] developed an identification system using DPRs in which the Visual Geometry Group 16-layer model showed the highest accuracy, and other similar studies [35,36] developed more sophisticated models to better extract features and improve matching performance. A previous study confirmed the decisive role of the sorting method in similarity-based human identification [37]: a database search based on basic assumptions about dentition changes, sorted by a similarity calculation, was useful for human identification. In that study, however, all processes, including database construction and sorting by similarity calculation, were conducted manually.

The significance of the present study is that, rather than comparing images directly, the dental information in the DPR images was converted into objective information in advance, preventing the errors that can arise from simple image comparison. An excellent model was developed for this information conversion; its superiority in automatically detecting natural teeth and dental treatment types has already been demonstrated in a preliminary study [21]. This study contributes considerably to automating, through deep learning, the whole process of human identification based on PM-AM similarity, including the automatic identification of natural teeth and dental treatment types and the construction of an individual database from this information. During the identification process, the 32 teeth were classified into six states to capture the diversity of tooth arrangements, and a step was added to compress the possible candidates into a high-probability group. We believe that assigning directionality to tooth changes increased the probability of identification.

To summarize, the probabilities of identifying the target individual were 83.2%, 72.1%, and 59.4%, obtained by extracting the top 20.0%, 10.0%, and 5.0% of observations, respectively, from the calculated similarity scores. The scoring was iterated 1,029 times, and the similarities were sorted in descending order (Fig 3); thus, every anonymous PM was paired with all AM DPRs for similarity scoring. The success rate differed between men and women: for the top 20.0%, 10.0%, and 5.0% CGs it was 71.3%, 64.0%, and 52.0%, respectively, among men and 97.2%, 81.1%, and 66.5%, respectively, among women. These favorable success rates, especially when extracting the top 20.0%, underscore the potential of DPRs for practically identifying victims even when other identifying features are unavailable or have deteriorated.

A possible reason the overall success rate was higher among women is that the dental information was extracted more accurately from women than from men by deep learning. The more precise extraction of dental information from women could potentially be influenced by diverse factors such as sexual dimorphism in teeth, the higher demand for orthodontic treatment among women, gender differences in oral health maintenance, and variations in periodontal disease progression between sexes [38,39]. The findings of our study could lead to more tailored approaches in practical forensic investigations, where gender-specific characteristics are considered during the identification process, thereby improving overall accuracy.

The difference in similarity scores between the groups was statistically significant when the imaging time interval was 17.7 years; if the interval between imaging times was <18 years, the similarity between the two DPRs can be interpreted as high. In determining the cut-off value, a key aspect of our approach was to segment the imaging time intervals into data-driven 100-day units using dummy variables, followed by regression analysis to examine differences between the two groups. For a more refined determination of the cut-off value, alternative statistical methods, such as nonlinear regression analysis, could be considered.

As dental treatments become more common, it is very unlikely that a person would visit a dental clinic and take a DPR for the first time in 18 years. This observation thus serves as an objective and pragmatic basis supporting the effectiveness of the similarity-based human identification algorithm in the present study. In conclusion, an interval of <18 years between imaging times is imperative for human identification using the individual database. This insight is crucial for forensic experts: it suggests that the likelihood of successful identification decreases as the imaging time interval increases, emphasizing the need for timely collection and comparison of dental records in DVI scenarios.

Calculating the ranks revealed that performance was lower when the top 5.0% CG was extracted than when the top 10.0% was extracted. In this study, equally scored individuals were assigned the same rank, and the next individual's rank was obtained by adding the number of people in the preceding tied rank. To improve the performance of this model, further consideration of how equal scores are handled is necessary.
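The tie-handling described here corresponds to standard competition ("1224") ranking, which can be sketched as:

```python
def competition_ranks(scores):
    """Standard competition ('1224') ranking as described above:
    equal scores share a rank, and the next distinct score is ranked
    by adding the count of the preceding tied group."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    prev_score, prev_rank = None, 0
    for pos, i in enumerate(order, start=1):
        if scores[i] == prev_score:
            ranks[i] = prev_rank          # tie: share the earlier rank
        else:
            ranks[i] = pos                # rank skips past the tied group
            prev_score, prev_rank = scores[i], pos
    return ranks

print(competition_ranks([0.9, 0.7, 0.7, 0.5]))  # → [1, 2, 2, 4]
```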

In this model, the similarity decreases when it is applied to AM images showing no restorations. For example, the condition with the worst outcome was an AM image of an individual with 32 natural teeth (including four third molars) and no restorations; subsequent extraction of a third molar and treatment of several teeth may have caused poor results by lowering the DPR similarity. In instances in which the similarity score is too low or a discrepancy occurs, removing the candidate from the CG entirely could also be considered. In this study, however, candidates were not removed; instead, a penalty of 10 points was imposed to greatly increase the difference for such tooth changes during the step of deriving the possible CG.

Several unclear cases occurred at the dental treatment type detection stage when the pre-trained model was applied to the DPRs. For instance, an indirect restoration such as a gold inlay could be damaged or lost over time, and if the individual then receives a direct restoration, the restoration appears different on the DPR, often causing errors when read by the network. Even for specialists, ambiguous cases are often difficult to judge as to whether a tooth is natural or has received a resin restoration.

A follow-up study is warranted to subdivide the prosthetic classes into indirect and direct restorations and to use a large number of samples for training so that the neural network can distinguish between these two situations. Furthermore, human identification was conducted using only the dental information detected via deep learning, without considering estimated age or pathologic conditions such as dental caries and periodontal lesions. Adding these pathologic conditions and an individual's estimated date of birth would further improve the ability to reduce the CG. Adding features beyond tooth arrangement, such as invariant anatomical structures including the mental foramen, maxillary sinus, and anterior nasal spine, would further increase the success rate of human identification.

Another critical consideration when establishing a more elaborate human identification system is addressing the inherent issues in AI model development. To mitigate the risks of data bias and imbalance, it is crucial to implement data augmentation and balanced sampling techniques, while also incorporating bias detection algorithms to ensure fairness across the model. Overfitting can be prevented by utilizing cross-validation methodologies. Incorporating interpretable algorithms or tools enhances the transparency of the decision-making process. Ethical considerations should be continuously evaluated and refined, with transparency maintained throughout the modeling and data collection stages. Ongoing updates and iterative retraining of the model are vital for sustaining real-world performance, and it is crucial to rigorously monitor the model’s efficacy on real-world data. Finally, regular monitoring for performance degradation due to environmental changes, coupled with routine updates, is essential for ensuring the model’s long-term sustainability.

The establishment of an individual database using DPRs has various advantages beyond its application in the dental and medical fields. AI can integrate heterogeneous data domains such as demographic and clinical data, image data, biomolecular data, and social network data [40,41]. When a large database is ultimately established using DPRs, the DPR database could become a highly reliable identifier representing individuals, as fingerprints do.

Conclusion

This study demonstrated the feasibility of human identification regarding dentition change on DPRs by not only automatically detecting dental objects but also constructing a comprehensive database through an indexing system. The developed method enhances the accuracy of the identification process by surpassing traditional image comparison trends. This approach could effectively reduce the size of an AM CG to be reviewed, especially in DVI scenarios, while serving as a powerful tool for dental professionals and other stakeholders.

Supporting information

S1 Fig. Determination of the cut-off value of the imaging time intervals from the trend analysis of the regression coefficients.

https://doi.org/10.1371/journal.pone.0312537.s001

(PDF)

S1 Table. Method used to calculate the similarity score.

https://doi.org/10.1371/journal.pone.0312537.s002

(PDF)

S2 Table. Success rates of human identification in the entire imaging time interval.

https://doi.org/10.1371/journal.pone.0312537.s003

(PDF)

S3 Table. Success rates of human identification for the imaging time interval of <6,450 days.

https://doi.org/10.1371/journal.pone.0312537.s004

(PDF)

References

  1. 1. Fischman SL. The use of medical and dental radiographs in identification. Int Dent J. 1985;35: 301–306. pmid:3867636
  2. 2. Dumanèiæ J, Kaiæ Z, Njemirovskij V, Brkiæ H, Zeèeviæ D. Dental identification after two mass disasters in Croatia. Croat Med J. 2001;42: 657–662. pmid:11740850
  3. 3. Dutta SR, Singh P, Passi D, Varghese D, Sharma S. The role of dentistry in disaster management and victim identification: an overview of challenges in Indo-Nepal scenario. J Maxillofac Oral Surg. 2016;15: 442–448. pmid:27833335
  4. 4. Andersen L, Juhl M, Solheim T, Borrman H. Odontological identification of fire victims—Potentialities and limitations. Int J Legal Med. 1995;107: 229–234. pmid:7632598
  5. 5. Pretty IA, Sweet D. A look at forensic dentistry—Part 1: The role of teeth in the determination of human identity. Br Dent J. 2001;190: 359–366. pmid:11338039
6. Sur J, Bose S, Khan F, Dewangan D, Sawriya E, Roul A. Knowledge, attitudes, and perceptions regarding the future of artificial intelligence in oral radiology in India: A survey. Imaging Sci Dent. 2020;50: 193–198. pmid:33005576
7. Hwang JJ, Jung YH, Cho BH, Heo MS. An overview of deep learning in the field of dentistry. Imaging Sci Dent. 2019;49: 1–7. pmid:30941282
8. Kim HS, Ha EG, Kim YH, Jeon KJ, Lee C, Han SS. Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study. Imaging Sci Dent. 2022;52: 219–224. pmid:35799970
9. Nichols JA, Herbert Chan HW, Baker MAB. Machine learning: Applications of artificial intelligence to imaging and diagnosis. Biophys Rev. 2019;11: 111–118. pmid:30182201
10. Kavuluru R, Rios A, Lu Y. An empirical evaluation of supervised learning approaches in assigning diagnosis codes to electronic medical records. Artif Intell Med. 2015;65: 155–166. pmid:26054428
11. Sun P, Jiang Y, Xie E, Shao W, Yuan Z, Wang C, et al. What makes for end-to-end object detection? In: Proceedings of the 38th international conference on machine learning. PMLR; 2021. vol. 139, pp. 9934–9944. Available from: arXiv:2012.05780.
12. Wang J, Song L, Li Z, Sun H, Sun J, Zheng N. End-to-end object detection with fully convolutional network. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; June 20–25, 2021; Nashville, TN. IEEE Publications; 2021. pp. 15844–15853. https://doi.org/10.1109/CVPR46437.2021.01559
13. Xu M, Zhang Z, Hu H, Wang J, Wang L, Wei F, et al. End-to-end semi-supervised object detection with soft teacher. In: Proceedings of the IEEE/CVF international conference on computer vision; 2021. Available from: arXiv:2106.09018.
14. Shaha M, Pawar M. Transfer learning for image classification. In: Proceedings of the second international conference on electronics, communication and aerospace technology; March 29–31, 2018; Coimbatore, India. IEEE Publications; 2018. pp. 656–660. https://doi.org/10.1109/ICECA.2018.8474802
15. Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: A literature review. BMC Med Imaging. 2022;22: 69. pmid:35418051
16. Brust CA, Käding C, Denzler J. Active learning for deep object detection. arXiv preprint arXiv:1809.09875; 2018. https://doi.org/10.48550/arXiv.1809.09875
17. Budd S, Robinson EC, Kainz B. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med Image Anal. 2021;71: 102062. pmid:33901992
18. Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med. 2023;156: 106668. pmid:36863192
19. Dhar T, Dey N, Borra S, Sherratt RS. Challenges of deep learning in medical image analysis—improving explainability and trust. IEEE Trans Technol Soc. 2023;4: 68–75.
20. Chen H, Zhang K, Lyu P, Li H, Zhang L, Wu J, et al. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep. 2019;9: 3840. pmid:30846758
21. Choi HR, Siadari TS, Kim JE, Huh KH, Yi WJ, Lee SS, et al. Automatic detection of teeth and dental treatment patterns on dental panoramic radiographs using deep neural networks. Forensic Sci Res. 2022;7: 456–466. pmid:36353329
22. Görürgöz C, Orhan K, Bayrakdar IS, Çelik Ö, Bilgir E, Odabaş A, et al. Performance of a convolutional neural network algorithm for tooth detection and numbering on periapical radiographs. Dentomaxillofac Radiol. 2022;51: 20210246. pmid:34623893
23. Kim CG, Kim DH, Jeong HG, Yoon SJ, Youm SK. Automatic tooth detection and numbering using a combination of a CNN and heuristic algorithm. Appl Sci. 2020;10: 5624.
24. Kim YH, Ha EG, Jeon KJ, Lee C, Han SS. A fully automated method of human identification based on dental panoramic radiographs using a convolutional neural network. Dentomaxillofac Radiol. 2022;51: 20210383. pmid:34826252
25. Lee JH, Lee C, Battulga B, Na JY, Hwang JJ, Kim YH, et al. Morphological analysis of the lower second premolar for age estimation of Korean adults. Forensic Sci Int. 2017;281: 186.e1–186.e6. pmid:29103902
26. Mahdi FP, Motoki K, Kobashi S. Optimization technique combined with deep learning method for teeth recognition in dental panoramic radiographs. Sci Rep. 2020;10: 19261. pmid:33159125
27. Matsuda S, Miyamoto T, Yoshimura H, Hasegawa T. Personal identification with orthopantomography using simple convolutional neural networks: A preliminary study. Sci Rep. 2020;10: 13559. pmid:32782269
28. Takahashi T, Nozaki K, Gonda T, Mameno T, Ikebe K. Deep learning-based detection of dental prostheses and restorations. Sci Rep. 2021;11: 1960. pmid:33479303
29. Thanathornwong B, Suebnukarn S. Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks. Imaging Sci Dent. 2020;50: 169–174. pmid:32601592
30. Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, Nikolenko SI, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol. 2019;48: 20180051. pmid:30835551
31. Xu M, Wu Y, Xu Z, Ding P, Bai H, Deng X. Robust automated teeth identification from dental radiographs using deep learning. J Dent. 2023;136: 104607. pmid:37422206
32. Chandrashekar G, AlQarni S, Bumann EE, Lee Y. Collaborative deep learning model for tooth segmentation and identification using panoramic radiographs. Comput Biol Med. 2022;148: 105829. pmid:35868047
33. Du H, Li M, Li G, Lyu T, Tian XM. Specific oral and maxillofacial identifiers in panoramic radiographs used for human identification. J Forensic Sci. 2021;66: 910–918. pmid:33506528
34. Enomoto A, Lee A-D, Sukedai M, Shimoide T, Katada R, Sugimoto K, et al. Automatic identification of individuals using deep learning method on panoramic radiographs. J Dent Sci. 2023;18: 696–701. pmid:37021248
35. Wu Q, Fan F, Liao P, Lai Y, Ke W, Du W, et al. Human identification with dental panoramic images based on deep learning. Sens Imaging. 2021;22: 4.
36. Lai Y, Fan F, Wu Q, Ke W, Liao P, Deng Z, et al. LCAnet: Learnable connected attention network for human identification using dental images. IEEE Trans Med Imaging. 2021;40: 905–915. pmid:33259294
37. Lee C, Lim SH, Huh KH, Han SS, Kim JE, Heo MS, et al. Performance of dental pattern analysis system with treatment chronology on panoramic radiography. Forensic Sci Int. 2019;299: 229–234. pmid:31078124
38. Lee HJ. Sex/gender differences in dental diseases. In: Sex/gender-specific medicine in clinical areas. Springer; 2024. pp. 511–518.
39. Jain A, Bhavsar NV, Baweja A, Bhagat A, Ohri A, Grover V. Gender-associated oral and periodontal health based on retrospective panoramic radiographic analysis of alveolar bone loss. In: Clinical concepts and practical management techniques in dentistry. 2020.
40. Pethani F. Promises and perils of artificial intelligence in dentistry. Aust Dent J. 2021;66: 124–135. pmid:33340123
41. Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: Chances and challenges. J Dent Res. 2020;99: 769–774. pmid:32315260