Minireviews Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Med Imaging. Dec 28, 2021; 2(6): 104-114
Published online Dec 28, 2021. doi: 10.35711/aimi.v2.i6.104
Application of machine learning in oral and maxillofacial surgery
Kai-Xin Yan, Lei Liu, Hui Li, State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, Sichuan Province, China
ORCID number: Kai-Xin Yan (0000-0002-1041-494X); Lei Liu (0000-0001-5309-1979); Hui Li (0000-0001-6841-2229).
Author contributions: Yan KX, Liu L, and Li H contributed to drafting the paper; Yan KX and Liu L contributed to the literature review; Yan KX wrote this paper as the first author; Li H contributed to critical revision and editing of the manuscript, and gave approval to the final version as the corresponding author.
Supported by National Natural Science Foundation of China, No. 82100961.
Conflict-of-interest statement: Neither the senior author nor any of the coauthors who contributed to this manuscript has any conflict of interest.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Hui Li, MD, PhD, Assistant Professor, State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, No. 14 Section 3 Renminnan Road, Chengdu 610041, Sichuan Province, China. 475393040@qq.com
Received: December 7, 2021
Peer-review started: December 7, 2021
First decision: December 13, 2021
Revised: December 20, 2021
Accepted: December 28, 2021
Article in press: December 28, 2021
Published online: December 28, 2021
Processing time: 21 Days and 4.5 Hours

Abstract

Oral and maxillofacial anatomy is extremely complex, and medical imaging is critical in the diagnosis and treatment of soft and bone tissue lesions. Hence, imaging data have accumulated over the last decades without being properly utilized, raising the question of how to integrate and interpret such a large amount of medical data and alleviate clinicians’ workload. Recently, artificial intelligence has been developing rapidly to analyze complex medical data, and machine learning, which builds on a set of algorithms and previous results, is one of the specific methods of achieving this goal. Machine learning can assist early diagnosis, treatment planning, and prognostic estimation by extracting key features and building mathematical models with computers. Over the past decade, machine learning techniques have been applied to the field of oral and maxillofacial surgery and have increasingly achieved expert-level performance. Thus, we hold a positive attitude towards developing machine learning to reduce medical errors, improve the quality of patient care, and optimize clinical decision-making in oral and maxillofacial surgery. In this review, we explore the clinical application of machine learning in maxillofacial cysts and tumors, maxillofacial defect reconstruction, orthognathic surgery, and dental implants, and discuss its current problems and solutions.

Key Words: Radiography; Artificial intelligence; Machine learning; Deep learning; Oral surgery; Maxillofacial surgery

Core Tip: The dramatic increase in medical imaging data has exceeded clinicians’ ability to process and analyze it, calling for higher-level analytic tools. Machine learning-based image analysis is useful for extracting key information to improve diagnostic accuracy and treatment efficacy. In this review, we summarize the applications of machine learning in oral and maxillofacial surgery as well as its current problems and solutions.



INTRODUCTION

The oral and maxillofacial region is extremely complex, containing many critical anatomical structures such as the maxillofacial bones, parotid gland, facial nerve, and major vessels. Computed tomography (CT), magnetic resonance imaging (MRI; an imaging technique mainly used for the examination of soft tissue), and other radiological examinations are commonly applied to improve the understanding of the three-dimensional spatial relationships among these anatomical structures. As a result, clinicians inevitably face rapid growth in the amount and complexity of medical imaging data, leading to an increased workload[1-2].

In recent years, artificial intelligence (AI) has been implemented in medicine to explore these enormous datasets and extract key information[1,3]. AI is a field focused on completing intellectual tasks normally performed by humans, and machine learning (ML) is one of the specific methods of achieving this goal[4]. AI models based on ML algorithms have demonstrated excellent performance in imaging data extraction and analysis and have increasingly matched specialist performance in medical imaging applications[5]. The integration of ML in oral and maxillofacial surgery has been shown to improve diagnostic accuracy, treatment efficacy, and prognostic estimation and to reduce health care costs[6,7]. The purpose of this review is to explore the clinical application of ML in maxillofacial cysts and tumors, maxillofacial defect reconstruction, orthognathic surgery, and dental implants and to discuss the current problems and solutions.

Arthur Samuel[6-8] coined the term ML in 1959. ML is a technique in which statistical algorithms learn from experience to build models that predict outcomes. According to the training type of the algorithms, ML can be divided into three categories: Supervised, unsupervised, and reinforcement learning[9]. Currently, supervised learning is the most commonly used training style in medical image analysis[10].

In supervised learning, labels are inputted together with the training data, and the algorithm learns to predict the known outcome[10]. Examples of supervised learning methods include Naive Bayes, decision tree (DT), support vector machine (SVM), random forest (RF), logistic regression, artificial neural network (ANN), and deep learning (DL). Specifically, an SVM classifies data by mapping labeled samples into a high-dimensional space and separating them with a hyperplane[4,11]. RF is an extension of the DT, in which many DTs are trained independently and their outputs are subsequently combined[4,12]. An ANN has at least one hidden layer in addition to the input and output layers. Each layer is composed of neurons, and the layers are stacked sequentially via weighted connections so that signals are transformed from the neurons of one layer to those of the next; DL comprises ANNs with many such layers[13].
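As an illustration (ours, not from the cited studies), the layered, weighted-connection structure of an ANN can be sketched in a few lines of Python. The weights below are arbitrary toy values; a real network would learn them from labeled training data:

```python
import math

def forward(x, w_hidden, w_out):
    """Forward pass of a minimal one-hidden-layer ANN.

    Each hidden neuron computes a weighted sum of the inputs and applies
    a sigmoid activation; the output neuron does the same over the hidden
    activations, yielding a value in (0, 1) that can be read as a class
    probability.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    return sigmoid(sum(wi * hi for wi, hi in zip(w_out, hidden)))

# Toy configuration: 2 inputs -> 2 hidden neurons -> 1 output.
w_hidden = [[0.5, -0.2], [0.3, 0.8]]   # one weight vector per hidden neuron
w_out = [1.0, -1.0]                    # weights from hidden layer to output
p = forward([0.7, 0.1], w_hidden, w_out)
print(round(p, 3))                     # a probability-like score in (0, 1)
```

Training adjusts the weights to minimize prediction error on labeled examples; stacking many such hidden layers gives the deep networks (DL) discussed throughout this review.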

In unsupervised learning[10], the algorithm is not provided with labels but instead detects hidden patterns in the data on its own. Examples of unsupervised learning algorithms include K-means, affinity propagation, and fuzzy C-means systems. Reinforcement learning[14], in turn, comprises unlabeled data, an agent, and an environment. It aims to repeatedly optimize parameters based on environmental feedback through reward and punishment mechanisms. By accumulating rewards, the model keeps adapting to a changing environment to obtain the best return. Examples of reinforcement learning algorithms include the Maja and Teaching-Box systems.
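To make the unsupervised setting concrete, the following toy sketch (our illustration, not part of the cited studies) clusters unlabeled two-dimensional points with a minimal K-means loop; no labels are ever supplied, yet the two underlying groups are recovered:

```python
def kmeans(points, k, iters=20):
    """Minimal K-means clustering: alternately assign each point to its
    nearest centroid and move each centroid to the mean of the points
    assigned to it."""
    centroids = points[:k]  # simple deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(col) / len(c) for col in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious groups of unlabeled 2D points.
pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents = sorted(kmeans(pts, k=2))
print(cents)  # one centroid near (0.1, 0.1), the other near (5.0, 5.0)
```

Production systems would of course use optimized implementations, but the alternating assign/update structure is the same.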

The ML protocol comprises data processing and model construction, and the workflow of model construction can be further divided into a training phase and a validating/testing phase. Because data volume and quality affect the performance of ML models, raw data should be standardized in advance in the following respects: (1) Reducing noise without losing important features[15]; (2) Splitting the image into parts and delineating the region of interest; and (3) Accumulating enough data[16]. Effective methods have been proposed for achieving these tasks, including image denoising, segmentation, and augmentation[15,17-20].
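As a minimal illustration of the augmentation step and the training/validation split described above (a toy sketch of ours; real pipelines operate on CT/CBCT volumes with dedicated libraries):

```python
import random

def augment(image):
    """Enlarge a small training set by generating simple variants of a
    2D image (a list of rows): the original, a horizontal flip, and a
    vertical flip."""
    hflip = [row[::-1] for row in image]
    vflip = image[::-1]
    return [image, hflip, vflip]

def split(dataset, val_fraction=0.2, seed=42):
    """Shuffle a dataset and split it into training and validation sets."""
    data = dataset[:]
    random.Random(seed).shuffle(data)
    n_val = max(1, int(len(data) * val_fraction))
    return data[n_val:], data[:n_val]

# Toy 2x2 "image" standing in for a radiographic slice.
img = [[1, 2],
       [3, 4]]
samples = augment(img) * 4      # 12 augmented samples
train, val = split(samples)
print(len(train), len(val))     # 10 2
```

The model is fitted only on the training portion; the held-out validation portion then gives an honest estimate of how the model will behave on unseen patients.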

APPLICATION IN ORAL AND MAXILLOFACIAL SURGERY
Maxillofacial cystic lesions and benign tumors

Maxillofacial cysts and benign tumors are common lesions in the oral and maxillofacial region. They are usually asymptomatic at the early stage, so by the time of diagnosis they often cause facial swelling, tooth displacement, a large bone cavity, and even pathological fracture. Surgery is the only treatment option, including enucleation, decompression, and resection, and the choice of treatment modality is based on the final diagnosis, the lesion size, and the patient’s age. Consequently, early detection and diagnosis of maxillofacial cysts and benign tumors are crucial for avoiding extensive surgery and achieving satisfactory treatment outcomes[21,22]. Numerous studies have demonstrated the usefulness of ML in early screening, accurate diagnosis, proper treatment, and morbidity prevention for maxillofacial cysts and benign tumors.

Frydenlund et al[23] applied two ML classifiers (an SVM and bagging with logistic regression) to distinguish among lateral periodontal cysts, odontogenic keratocysts, and glandular odontogenic cysts in hematoxylin and eosin-stained digital micrographs. The results proved the effectiveness of the ML-based classifiers in predicting these three types of odontogenic cysts (96.2% correct classification for both classifiers). Moreover, Okada et al[24] demonstrated the usefulness of a semiautomatic computer-aided diagnosis framework to differentiate between periapical cysts and granulomas in cone-beam CT (CBCT) data; the best accuracy of 94.1% was achieved by integrating graph-based random walks segmentation with ML-based boosted classification algorithms. Similarly, Endres et al[25] compared the performance of a DL algorithm with that of 24 oral and maxillofacial surgeons in detecting periapical radiolucencies in panoramic radiographs, demonstrating the reliability of ML-based diagnoses in dentistry. In addition, Kwon et al[26] developed a deep convolutional neural network (DCNN) to automatically diagnose odontogenic cysts and tumors of the jaw in panoramic images, showing higher diagnostic sensitivity, specificity, and accuracy with augmented datasets. Liu et al[27] applied deep transfer learning to classify ameloblastoma and odontogenic keratocyst in panoramic radiographs and achieved an accuracy of 90.36%. Yang et al[28] also showed that the diagnostic performance of the CNN You Only Look Once v2 was similar to that of experienced dentists in detecting odontogenic cysts and tumors on panoramic radiographs.

Maxillofacial malignant tumors

Oral cancer is the most common malignancy in the oral and maxillofacial region and can severely affect patients’ survival and quality of life[29]. The most effective method for reducing mortality rates is early detection, but the optimal strategy for early screening remains debated. The advent of high-quality ML provides the potential to improve early diagnosis, prognostic evaluation, and accurate prediction of treatment-associated toxicity in oral cancer patients.

Aubreville et al[30] presented a novel automatic identification of oral squamous cell carcinoma (OSCC) in confocal laser endomicroscopy images using a deep ANN. The accuracy of this deep ANN-based method was 88.3%, with a sensitivity of 86.6% and a specificity of 90%, outperforming textural feature-based classification. DL algorithms, including DenseNet121 and the faster R-CNN algorithm, have also been applied to automatically classify and detect oral cancer in photographic images, achieving acceptable precision[31]. Furthermore, Kar et al[29] and Jeyaraj and Samuel Nadar[32] developed regression-based partitioned CNNs using hyperspectral image datasets for automated detection of oral cancer, obtaining a higher quality of diagnosis than traditional image classifiers such as the SVM and the deep belief network.

In addition, ML has also been applied to predict cancer outcomes using the following prognostic variables: (1) Histological grade; (2) Five-year survival; (3) Cervical lymph node metastases; and (4) Distant metastasis. Ren et al[33] included 80 patients with a final diagnosis of OSCC and performed ML-based MRI texture analysis using a minimum-redundancy maximum-relevance algorithm, achieving a best accuracy of 86.3%. Others also concluded that the predictive performance of DL-based survival prediction algorithms exceeded that of conventional statistical methods[34-38]. Chu et al[17] and Ariji et al[39] achieved a DL accuracy of 84% in detecting extranodal extension on 703 CT images, a diagnostic performance that outranked that of radiologists. Others also proved the effectiveness of ML in predicting lymph node metastasis in patients with early-stage oral cancer and thus guiding proper treatment plans[32,40,41]. Keek et al[42] found that, compared with peritumoral radiomics-based prediction models, a clinical model was useful for predicting distant metastasis in oropharyngeal cancer patients.

ML also contributes to the evaluation of treatment complications. Chu et al[17] and Men et al[43] introduced a 3D residual CNN for the prediction of xerostomia in patients with head and neck cancer and achieved satisfying performance, with an area under the curve (AUC) value of 0.84 (0.74-0.91); the AUC reflects the discriminative ability of a prediction method (the closer the value is to 1.0, the better the method separates positive from negative cases).
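For readers unfamiliar with the metric, the AUC can be computed directly from predicted scores and true labels as the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The sketch below uses made-up numbers, not data from the cited studies:

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the fraction of
    positive/negative pairs in which the positive case receives the
    higher score (ties count as half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = developed the complication, 0 = did not.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 8/9 ≈ 0.889
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why a value of 0.84 with a confidence interval of 0.74-0.91 indicates clinically useful discrimination.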

Nasopharyngeal carcinoma is a malignancy of the head and neck, and radiotherapy is the primary treatment option for affected patients[44]. To avoid unnecessary toxicities derived from radiotherapy, radiation oncologists have proposed the concepts of precise radiotherapy and adaptive radiotherapy. Recently, advanced ML techniques have mainly been applied to auto-recognition, early diagnosis, target contouring, and complication prediction in patients with nasopharyngeal carcinoma[45].

Li et al[46] developed an endoscopic image-based model to detect nasopharyngeal malignancies, and this DL model outperformed experts. Du et al[47] investigated the diagnostic performance of seven ML classifiers cross-combined with six feature selection methods for distinguishing inflammation from recurrence based on post-treatment nasopharyngeal positron emission tomography/CT images (a high-level imaging method that enables early diagnosis of tumors) and identified the optimal methods for the diagnosis of nasopharyngeal carcinoma.

Lin et al[48] constructed a 3D CNN on MRI datasets and validated the performance of automated contouring of the primary gross tumor volume (GTV) in patients with nasopharyngeal carcinoma, demonstrating improved contouring accuracy and efficiency with the assistance of a DL-based contouring tool. Men et al[49] proposed an end-to-end deep deconvolutional neural network for segmentation of nasopharyngeal carcinoma in planning CT images, showing higher performance than the VGG-16 model in the segmentation of the nasopharynx GTV, the metastatic lymph node GTV, and the clinical target volume. In addition, Liang et al[44] developed a fully automated DL-based method for the accurate detection and segmentation of organs at risk in nasopharyngeal carcinoma CT images and achieved excellent performance, with a sensitivity of 0.997 to 1 and a specificity of 0.983 to 0.999. To detect radiotherapy complications early in nasopharyngeal carcinoma patients, Zhang et al[50] applied the RF method to predict radiation-induced temporal lobe injury (RTLI) based on MRI examinations. The results demonstrated that the RF models can predict RTLI in advance, allowing clinicians to take measures to stop or slow its deterioration.

Altogether, ML techniques have performed well in early screening and prognosis evaluation of maxillofacial malignant tumors.

Maxillofacial bone defect reconstruction

Maxillofacial bone defects after congenital deformities, trauma, and oncological resection greatly decrease patients’ quality of life. The goal of reconstructing maxillofacial bone defects is to restore optimal function and facial appearance using free tissue, vascularized autogenous bone flap transplantation, or prostheses. Maxillofacial reconstructive surgery remains challenging, especially in cases of massive maxillofacial bone defects across the midline. Most recently, ML algorithms have achieved major success in virtual surgical planning and thus show great potential in the reconstruction of facial defects.

Jie et al[51] proposed an iterative closest point (ICP) algorithm based on a database of normal, healthy adults to predict the reference data of the missing bone and performed a symmetry evaluation between the postoperative skull and its mirrored model. The results showed that the ICP model achieved accuracy similar to that of navigation-guided surgery. Dalvit Carvalho da Silva et al[52] combined a CNN with geometric moments to identify the midline symmetry plane of the facial skeleton from CT scans, aiding surgeons in maxillofacial reconstructive surgery.

With the development of imaging databases, ML is a promising tool to assist maxillofacial bone defect reconstruction.

Orthognathic surgery

Orthognathic surgery is used to treat dental malocclusion, facial deformities, and obstructive sleep apnea and to improve facial aesthetics and function. Traditionally, surgical planning is based on clinical examination, two-dimensional cephalometric analysis, and manually made splints. However, these procedures require considerable labor and lack precision[53-56]. With the rapid development of technologies and materials, 3D printers, digital software, and ML are increasingly used in orthognathic surgery and greatly improve surgical outcomes. Hence, the applications of ML are promising in orthognathic surgery.

Shin et al[57] extracted features from posteroanterior and lateral cephalograms and evaluated the necessity of orthognathic surgery using DL networks. The accuracy, sensitivity, and specificity were 0.954, 0.844, and 0.993, respectively, demonstrating excellent performance. Lin et al[58] used a CNN with a transfer learning approach on 3D CBCT images to assess facial symmetry before and after orthognathic surgery. In a retrospective cohort study, Lo et al[59] first applied an ML model based on 3D contour images to automatically assess facial symmetry before and after orthognathic surgery. Knoops et al[60] trained a 3D morphable model, an ML-based framework involving supervised learning, with 4216 3D scans of healthy volunteers and orthognathic surgery patients. The model showed high diagnostic accuracy, with a sensitivity of 95.5% and a specificity of 95.2%, and satisfying treatment simulation. In addition, Patcas et al[61] used a CNN model to demonstrate that patients’ facial appearance and attractiveness improved after orthognathic surgery.

To sum up, ML has been considered a useful tool in orthognathic surgery for establishing a precise diagnosis, evaluating surgical necessity, and predicting treatment outcomes.

Dental implant

Dental implants have been considered a reliable treatment option for the replacement of missing teeth. Undoubtedly, an excellent bone environment and careful implant planning are key to the success rate of dental implants, and it is crucial to have a basic understanding of the quality and quantity of bone at the planned placement site[62]. In recent years, ML has been increasingly applied in the field of dental implants to improve implant success rates and to identify dental implants.

Kurt et al[63] applied a DL approach to three-dimensional CBCT images to perform implant planning and compared its performance with manual assessment, achieving similarly acceptable measurements in the maxillary molar/premolar region as well as in the mandibular premolar region. A pilot study by Ha et al[64] using ML methods demonstrated that the mesiodistal position of the inserted implant is the most significant factor predicting implant prognosis.

Besides, Lee et al[65] evaluated the performance of three different DCNN architectures for the detection and classification of fractured dental implants using panoramic and periapical radiographic images. The best performance was achieved by the automated DCNN architecture based on periapical images only. Mameno et al[66] applied three ML methods to predict peri-implantitis and analyzed the risk indicators; the RF model achieved the highest predictive performance, and implant functional time was the most influential predictor.

In addition, several investigations proved the effectiveness of ML methods for implant type recognition using radiographic images[67-69]. Regarding ML models for implant design optimization, Roy et al[70] used an ANN combined with genetic algorithms to predict the optimum implant dimensions.

ML models have demonstrated great potential in the field of dental implants for assisting implant planning, evaluating implant performance, improving implant designs, and identifying dental implants.

PROBLEMS AND SOLUTIONS

ML has shown great potential in the field of oral and maxillofacial surgery for improving detection accuracy, optimizing treatment plans, and providing reliable prognostic prediction. Despite this potential, some limitations remain.

First, the performance of ML depends mainly on the volume and quality of the data and on superior algorithms. The scattered distribution of dental databases across healthcare settings often results in relatively small datasets, which affects real clinical decision-making. Efforts should be made to develop cloud-based image databases and large open-access databases from diverse settings and populations[71].

Second, it is quite difficult for ML to analyze a large number of disparate and heterogeneous datasets. A set of well-standardized, segmented, and enhanced training data will enhance the performance of an ML model. Thus, the data involved should be properly preprocessed to maximize the homogenization of the datasets and reduce errors[15,17,72].

Third, the performance of ML algorithms in common clinical tasks is similar to or exceeds that of experts. However, when dealing with rare and complicated diseases, existing algorithms may perform worse[73,74]. Consequently, further improvement of ML algorithms is required for computing enormous and complex medical data.

Lastly, there exist many ethical challenges, including privacy protection, data security, and legal and regulatory issues. Patients’ informed consent has to be obtained before using their clinical data for ML. Moreover, relevant guidelines should be developed for data acquisition and sharing. Meanwhile, data should be transparent and traceable without disclosure of personal information, and strict legal requirements should be established regarding health data privacy.

CONCLUSION

ML will have an immense impact on the field of oral and maxillofacial surgery in the following respects. First, ML is useful in early screening, accurate diagnosis, proper treatment, morbidity prevention, and accurate prediction of treatment-associated toxicity in the management of maxillofacial cysts, benign tumors, and malignant tumors. Second, ML algorithms have achieved major success in virtual surgical planning and thus show great potential in the reconstruction of facial defects. Third, ML has been considered a useful tool in orthognathic surgery for establishing a precise diagnosis, evaluating surgical necessity, and predicting treatment outcomes. Lastly, ML models have demonstrated great potential in the field of dental implants for assisting implant planning, evaluating implant performance, improving implant designs, and identifying dental implants (Table 1).

Table 1 Machine learning applications in oral and maxillofacial surgery.

| Ref. | Application | Purpose | Method |
| --- | --- | --- | --- |
| [23] | Maxillofacial cystic lesions and benign tumors | Accurate diagnosis | Support vector machine and bagging with logistic regression |
| [24] | | | Integration of graph-based random walks segmentation and machine learning-based boosted classification algorithms |
| [26] | | | Deep convolutional neural network |
| [27] | | | Deep transfer learning |
| [28] | | | Convolutional neural network You Only Look Once v2 |
| [25] | | Early detection | Deep learning |
| [30] | Maxillofacial malignant tumors | Early diagnosis | Deep artificial neural network |
| [31] | | | Deep learning (DenseNet121 and faster R-CNN) |
| [29,32] | | | Regression-based partitioned convolutional neural network |
| [46] | | | Deep learning |
| [47] | | | Machine learning |
| [48] | | Early detection | Convolutional neural network |
| [49] | | | End-to-end deep deconvolutional neural network |
| [44] | | | Deep learning |
| [33] | | Prognosis estimation | Minimum-redundancy maximum-relevance algorithm |
| [34-39] | | | Deep learning |
| [40-42] | | | Machine learning |
| [43] | | Treatment complication evaluation | Convolutional neural network |
| [50] | | | Random forest |
| [51] | Maxillofacial bone defect reconstruction | Missing bone prediction and facial symmetry evaluation | Iterative closest point |
| [52] | | Midline symmetry plane identification | Convolutional neural network |
| [57] | Orthognathic surgery | Surgery necessity evaluation | Deep learning |
| [58] | | Facial symmetry assessment | Convolutional neural network |
| [59] | | | Machine learning |
| [60] | | Diagnosis | Machine learning |
| [61] | | Facial appearance and attractiveness evaluation | Convolutional neural network |
| [63] | Dental implant | Implant planning design | Deep learning |
| [70] | | Implant planning optimization | Artificial neural network |
| [64] | | Prognosis estimation | Machine learning |
| [65] | | Detection and classification of fractured dental implants | Deep convolutional neural network |
| [66] | | Complication prediction | Machine learning |
| [67-69] | | Implant type recognition | Machine learning |

Nonetheless, it remains vital to evaluate the reliability, accuracy, and repeatability of ML in medicine. Further studies should continue to focus on improving the usability of algorithms for different diseases. Moreover, there is an urgent need to develop guidelines addressing the many ethical challenges, including privacy protection, data security, and legal and regulatory issues. Despite these issues, ML is still considered a powerful tool for clinicians. We believe that this review provides detailed information regarding ML applications in oral and maxillofacial surgery and may help clinicians facilitate clinical practice.

ACKNOWLEDGEMENTS

We are grateful to Professor Ji-Xiang Guo, an IT specialist, for her assistance with the editing of this article.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Dentistry, oral surgery and medicine

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): 0

Grade C (Good): C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Anysz H S-Editor: Liu M L-Editor: Wang TQ P-Editor: Liu M

References
1.  Fujima N, Andreu-Arasa VC, Meibom SK, Mercier GA, Salama AR, Truong MT, Sakai O. Prediction of the treatment outcome using machine learning with FDG-PET image-based multiparametric approach in patients with oral cavity squamous cell carcinoma. Clin Radiol. 2021;76:711.e1-711.e7.
2.  Creff G, Devillers A, Depeursinge A, Palard-Novello X, Acosta O, Jegoux F, Castelli J. Evaluation of the prognostic value of FDG PET/CT parameters for patients with surgically treated head and neck cancer: A systematic review. JAMA Otolaryngol Head Neck Surg. 2020;146:471-479.
3.  Heo MS, Kim JE, Hwang JJ, Han SS, Kim JS, Yi WJ, Park IW. Artificial intelligence in oral and maxillofacial radiology: What is currently possible? Dentomaxillofac Radiol. 2021;50:20200375.
4.  Choi RY, Coyner AS, Kalpathy-Cramer J, Chiang MF, Campbell JP. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl Vis Sci Technol. 2020;9:14.
5.  Seyyed-Kalantari L, Zhang H, McDermott MBA, Chen IY, Ghassemi M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat Med. 2021;27:2176-2182.
6.  Shan T, Tay FR, Gu L. Application of artificial intelligence in dentistry. J Dent Res. 2021;100:232-244.
7.  Bichu YM, Hansa I, Bichu AY, Premjani P, Flores-Mir C, Vaid NR. Applications of artificial intelligence and machine learning in orthodontics: a scoping review. Prog Orthod. 2021;22:18.
8.  Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: Chances and challenges. J Dent Res. 2020;99:769-774.
9.  Mak KK, Lee K, Park C. Applications of machine learning in addiction studies: A systematic review. Psychiatry Res. 2019;275:53-60.
10.  Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics. 2017;37:505-515.
11.  Amasya H, Yildirim D, Aydogan T, Kemaloglu N, Orhan K. Cervical vertebral maturation assessment on lateral cephalometric radiographs using artificial intelligence: comparison of machine learning classifier models. Dentomaxillofac Radiol. 2020;49:20190441.
12.  Krittanawong C, Zhang H, Wang Z, Aydar M, Kitai T. Artificial intelligence in precision cardiovascular medicine. J Am Coll Cardiol. 2017;69:2657-2664.
13.  Alhazmi A, Alhazmi Y, Makrami A, Masmali A, Salawi N, Masmali K, Patil S. Application of artificial intelligence and machine learning for prediction of oral cancer risk. J Oral Pathol Med. 2021;50:444-450.
14.  Saha A, Tso S, Rabski J, Sadeghian A, Cusimano MD. Machine learning applications in imaging analysis for patients with pituitary tumors: a review of the current literature and future directions. Pituitary. 2020;23:273-293.
15.  Diwakar M, Kumar M. A review on CT image noise and its denoising. Biomed Signal Process Control. 2018;42:73-88.
16.  Hussain Z, Gimenez F, Yi D, Rubin D. Differential Data Augmentation Techniques for Medical Imaging Classification Tasks. AMIA Annu Symp Proc. 2017;2017:979-984.
17.  Chu CS, Lee NP, Ho JWK, Choi SW, Thomson PJ. Deep learning for clinical image analyses in oral squamous cell carcinoma: A review. JAMA Otolaryngol Head Neck Surg. 2021;147:893-900.
18.  Mobadersany P, Yousefi S, Amgad M, Gutman DA, Barnholtz-Sloan JS, Velázquez Vega JE, Brat DJ, Cooper LAD. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A. 2018;115:E2970-E2979.
18.  Mobadersany P, Yousefi S, Amgad M, Gutman DA, Barnholtz-Sloan JS, Velázquez Vega JE, Brat DJ, Cooper LAD. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A. 2018;115:E2970-E2979.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 426]  [Cited by in F6Publishing: 514]  [Article Influence: 85.7]  [Reference Citation Analysis (0)]
19.  Shi JY, Wang X, Ding GY, Dong Z, Han J, Guan Z, Ma LJ, Zheng Y, Zhang L, Yu GZ, Wang XY, Ding ZB, Ke AW, Yang H, Wang L, Ai L, Cao Y, Zhou J, Fan J, Liu X, Gao Q. Exploring prognostic indicators in the pathological images of hepatocellular carcinoma based on deep learning. Gut. 2021;70:951-961.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 42]  [Cited by in F6Publishing: 87]  [Article Influence: 29.0]  [Reference Citation Analysis (0)]
20.  Akkus Z, Galimzianova A, Hoogi A, Rubin DL, Erickson BJ. Deep learning for brain MRI segmentation: State of the art and future directions. J Digit Imaging. 2017;30:449-459.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 774]  [Cited by in F6Publishing: 451]  [Article Influence: 64.4]  [Reference Citation Analysis (0)]
21.  Huang S, Yang J, Fong S, Zhao Q. Artificial intelligence in cancer diagnosis and prognosis: Opportunities and challenges. Cancer Lett. 2020;471:61-71.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 133]  [Cited by in F6Publishing: 230]  [Article Influence: 46.0]  [Reference Citation Analysis (1)]
22.  Simmons CPL, McMillan DC, McWilliams K, Sande TA, Fearon KC, Tuck S, Fallon MT, Laird BJ. Prognostic Tools in Patients With Advanced Cancer: A Systematic Review. J Pain Symptom Manage. 2017;53:962-970.e10.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 121]  [Cited by in F6Publishing: 116]  [Article Influence: 16.6]  [Reference Citation Analysis (0)]
23.  Frydenlund A, Eramian M, Daley T. Automated classification of four types of developmental odontogenic cysts. Comput Med Imaging Graph. 2014;38:151-162.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 6]  [Cited by in F6Publishing: 8]  [Article Influence: 0.7]  [Reference Citation Analysis (0)]
24.  Okada K, Rysavy S, Flores A, Linguraru MG. Noninvasive differential diagnosis of dental periapical lesions in cone-beam CT scans. Med Phys. 2015;42:1653-1665.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 29]  [Cited by in F6Publishing: 30]  [Article Influence: 3.3]  [Reference Citation Analysis (0)]
25.  Endres MG, Hillen F, Salloumis M, Sedaghat AR, Niehues SM, Quatela O, Hanken H, Smeets R, Beck-Broichsitter B, Rendenbach C, Lakhani K, Heiland M, Gaudin RA. Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs. Diagnostics (Basel). 2020;10:430.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 62]  [Cited by in F6Publishing: 41]  [Article Influence: 10.3]  [Reference Citation Analysis (0)]
26.  Kwon O, Yong TH, Kang SR, Kim JE, Huh KH, Heo MS, Lee SS, Choi SC, Yi WJ. Automatic diagnosis for cysts and tumors of both jaws on panoramic radiographs using a deep convolution neural network. Dentomaxillofac Radiol. 2020;49:20200185.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 83]  [Cited by in F6Publishing: 73]  [Article Influence: 18.3]  [Reference Citation Analysis (1)]
27.  Liu Z, Liu J, Zhou Z, Zhang Q, Wu H, Zhai G, Han J. Differential diagnosis of ameloblastoma and odontogenic keratocyst by machine learning of panoramic radiographs. Int J Comput Assist Radiol Surg. 2021;16:415-422.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 12]  [Cited by in F6Publishing: 20]  [Article Influence: 6.7]  [Reference Citation Analysis (0)]
28.  Yang H, Jo E, Kim HJ, Cha IH, Jung YS, Nam W, Kim JY, Kim JK, Kim YH, Oh TG, Han SS, Kim H, Kim D. Deep Learning for Automated Detection of Cyst and Tumors of the Jaw in Panoramic Radiographs. J Clin Med. 2020;9:1839.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 33]  [Cited by in F6Publishing: 68]  [Article Influence: 17.0]  [Reference Citation Analysis (0)]
29.  Kar A, Wreesmann VB, Shwetha V, Thakur S, Rao VUS, Arakeri G, Brennan PA. Improvement of oral cancer screening quality and reach: The promise of artificial intelligence. J Oral Pathol Med. 2020;49:727-730.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 18]  [Cited by in F6Publishing: 21]  [Article Influence: 5.3]  [Reference Citation Analysis (0)]
30.  Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J, Bohr C, Neumann H, Stelzle F, Maier A. Automatic Classification of Cancerous Tissue in Laserendomicroscopy Images of the Oral Cavity using Deep Learning. Sci Rep. 2017;7:11979.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 130]  [Cited by in F6Publishing: 113]  [Article Influence: 16.1]  [Reference Citation Analysis (0)]
31.  Warin K, Limprasert W, Suebnukarn S, Jinaporntham S, Jantana P. Automatic classification and detection of oral cancer in photographic images using deep learning algorithms. J Oral Pathol Med. 2021;50:911-918.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 25]  [Article Influence: 8.3]  [Reference Citation Analysis (0)]
32.  Jeyaraj PR, Samuel Nadar ER. Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm. J Cancer Res Clin Oncol. 2019;145:829-837.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 109]  [Cited by in F6Publishing: 75]  [Article Influence: 15.0]  [Reference Citation Analysis (0)]
33.  Ren J, Qi M, Yuan Y, Duan S, Tao X. Machine Learning-Based MRI Texture Analysis to Predict the Histologic Grade of Oral Squamous Cell Carcinoma. AJR Am J Roentgenol. 2020;215:1184-1190.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 8]  [Cited by in F6Publishing: 14]  [Article Influence: 3.5]  [Reference Citation Analysis (0)]
34.  Alkhadar H, Macluskey M, White S, Ellis I, Gardner A. Comparison of machine learning algorithms for the prediction of five-year survival in oral squamous cell carcinoma. J Oral Pathol Med. 2021;50:378-384.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 14]  [Cited by in F6Publishing: 16]  [Article Influence: 4.0]  [Reference Citation Analysis (0)]
35.  Karadaghy OA, Shew M, New J, Bur AM. Development and Assessment of a Machine Learning Model to Help Predict Survival Among Patients With Oral Squamous Cell Carcinoma. JAMA Otolaryngol Head Neck Surg. 2019;145:1115-1120.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 36]  [Cited by in F6Publishing: 61]  [Article Influence: 20.3]  [Reference Citation Analysis (0)]
36.  Fujima N, Andreu-Arasa VC, Meibom SK, Mercier GA, Salama AR, Truong MT, Sakai O. Deep learning analysis using FDG-PET to predict treatment outcome in patients with oral cavity squamous cell carcinoma. Eur Radiol. 2020;30:6322-6330.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 13]  [Cited by in F6Publishing: 10]  [Article Influence: 2.5]  [Reference Citation Analysis (0)]
37.  Kim DW, Lee S, Kwon S, Nam W, Cha IH, Kim HJ. Deep learning-based survival prediction of oral cancer patients. Sci Rep. 2019;9:6994.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 93]  [Cited by in F6Publishing: 137]  [Article Influence: 27.4]  [Reference Citation Analysis (0)]
38.  Pan X, Zhang T, Yang Q, Yang D, Rwigema JC, Qi XS. Survival prediction for oral tongue cancer patients via probabilistic genetic algorithm optimized neural network models. Br J Radiol. 2020;93:20190825.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 7]  [Cited by in F6Publishing: 13]  [Article Influence: 3.3]  [Reference Citation Analysis (0)]
39.  Ariji Y, Sugita Y, Nagao T, Nakayama A, Fukuda M, Kise Y, Nozawa M, Nishiyama M, Katumata A, Ariji E. CT evaluation of extranodal extension of cervical lymph node metastases in patients with oral squamous cell carcinoma using deep learning classification. Oral Radiol. 2020;36:148-155.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 27]  [Cited by in F6Publishing: 20]  [Article Influence: 4.0]  [Reference Citation Analysis (0)]
40.  Bur AM, Holcomb A, Goodwin S, Woodroof J, Karadaghy O, Shnayder Y, Kakarala K, Brant J, Shew M. Machine learning to predict occult nodal metastasis in early oral squamous cell carcinoma. Oral Oncol. 2019;92:20-25.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 60]  [Cited by in F6Publishing: 78]  [Article Influence: 15.6]  [Reference Citation Analysis (1)]
41.  Yuan Y, Ren J, Tao X. Machine learning-based MRI texture analysis to predict occult lymph node metastasis in early-stage oral tongue squamous cell carcinoma. Eur Radiol. 2021;31:6429-6437.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 7]  [Cited by in F6Publishing: 27]  [Article Influence: 9.0]  [Reference Citation Analysis (0)]
42.  Keek S, Sanduleanu S, Wesseling F, de Roest R, van den Brekel M, van der Heijden M, Vens C, Giuseppina C, Licitra L, Scheckenbach K, Vergeer M, Leemans CR, Brakenhoff RH, Nauta I, Cavalieri S, Woodruff HC, Poli T, Leijenaar R, Hoebers F, Lambin P. Computed tomography-derived radiomic signature of head and neck squamous cell carcinoma (peri)tumoral tissue for the prediction of locoregional recurrence and distant metastasis after concurrent chemo-radiotherapy. PLoS One. 2020;15:e0232639.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 18]  [Cited by in F6Publishing: 28]  [Article Influence: 7.0]  [Reference Citation Analysis (0)]
43.  Men K, Geng H, Zhong H, Fan Y, Lin A, Xiao Y. A Deep Learning Model for Predicting Xerostomia Due to Radiation Therapy for Head and Neck Squamous Cell Carcinoma in the RTOG 0522 Clinical Trial. Int J Radiat Oncol Biol Phys. 2019;105:440-447.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 29]  [Cited by in F6Publishing: 47]  [Article Influence: 9.4]  [Reference Citation Analysis (0)]
44.  Liang S, Tang F, Huang X, Yang K, Zhong T, Hu R, Liu S, Yuan X, Zhang Y. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur Radiol. 2019;29:1961-1967.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 69]  [Cited by in F6Publishing: 76]  [Article Influence: 12.7]  [Reference Citation Analysis (0)]
45.  Sun XS, Li XY, Chen QY, Tang LQ, Mai HQ. Future of Radiotherapy in Nasopharyngeal Carcinoma. Br J Radiol. 2019;92:20190209.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 48]  [Cited by in F6Publishing: 62]  [Article Influence: 12.4]  [Reference Citation Analysis (0)]
46.  Li C, Jing B, Ke L, Li B, Xia W, He C, Qian C, Zhao C, Mai H, Chen M, Cao K, Mo H, Guo L, Chen Q, Tang L, Qiu W, Yu Y, Liang H, Huang X, Liu G, Li W, Wang L, Sun R, Zou X, Guo S, Huang P, Luo D, Qiu F, Wu Y, Hua Y, Liu K, Lv S, Miao J, Xiang Y, Sun Y, Guo X, Lv X. Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies. Cancer Commun (Lond). 2018;38:59.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 23]  [Cited by in F6Publishing: 36]  [Article Influence: 6.0]  [Reference Citation Analysis (0)]
47.  Du D, Feng H, Lv W, Ashrafinia S, Yuan Q, Wang Q, Yang W, Feng Q, Chen W, Rahmim A, Lu L. Machine Learning Methods for Optimal Radiomics-Based Differentiation Between Recurrence and Inflammation: Application to Nasopharyngeal Carcinoma Post-therapy PET/CT Images. Mol Imaging Biol. 2020;22:730-738.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 35]  [Cited by in F6Publishing: 40]  [Article Influence: 8.0]  [Reference Citation Analysis (0)]
48.  Lin L, Dou Q, Jin YM, Zhou GQ, Tang YQ, Chen WL, Su BA, Liu F, Tao CJ, Jiang N, Li JY, Tang LL, Xie CM, Huang SM, Ma J, Heng PA, Wee JTS, Chua MLK, Chen H, Sun Y. Deep Learning for Automated Contouring of Primary Tumor Volumes by MRI for Nasopharyngeal Carcinoma. Radiology. 2019;291:677-686.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 131]  [Cited by in F6Publishing: 193]  [Article Influence: 38.6]  [Reference Citation Analysis (0)]
49.  Men K, Chen X, Zhang Y, Zhang T, Dai J, Yi J, Li Y. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images. Front Oncol. 2017;7:315.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 157]  [Cited by in F6Publishing: 121]  [Article Influence: 17.3]  [Reference Citation Analysis (1)]
50.  Zhang B, Lian Z, Zhong L, Zhang X, Dong Y, Chen Q, Zhang L, Mo X, Huang W, Yang W, Zhang S. Machine-learning based MRI radiomics models for early detection of radiation-induced brain injury in nasopharyngeal carcinoma. BMC Cancer. 2020;20:502.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 18]  [Cited by in F6Publishing: 37]  [Article Influence: 9.3]  [Reference Citation Analysis (0)]
51.  Jie B, Han B, Yao B, Zhang Y, Liao H, He Y. Automatic virtual reconstruction of maxillofacial bone defects assisted by ICP (iterative closest point) algorithm and normal people database. Clin Oral Investig. 2021;epub ahead of print.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2]  [Cited by in F6Publishing: 2]  [Article Influence: 0.7]  [Reference Citation Analysis (0)]
52.  Dalvit Carvalho da Silva R, Jenkyn TR, Carranza VA. Convolutional neural networks and geometric moments to identify the bilateral symmetric midplane in facial skeletons from CT scans. Biology (Basel). 2021;10:182.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 5]  [Cited by in F6Publishing: 6]  [Article Influence: 2.0]  [Reference Citation Analysis (0)]
53.  Lee SJ, Yoo JY, Woo SY, Yang HJ, Kim JE, Huh KH, Lee SS, Heo MS, Hwang SJ, Yi WJ. A Complete Digital Workflow for Planning, Simulation, and Evaluation in Orthognathic Surgery. J Clin Med. 2021;10:4000.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2]  [Cited by in F6Publishing: 6]  [Article Influence: 2.0]  [Reference Citation Analysis (0)]
54.  Shaheen E, Sun Y, Jacobs R, Politis C. Three-dimensional printed final occlusal splint for orthognathic surgery: design and validation. Int J Oral Maxillofac Surg. 2017;46:67-71.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 55]  [Cited by in F6Publishing: 58]  [Article Influence: 7.3]  [Reference Citation Analysis (0)]
55.  Fawzy HH, Choi JW. Evaluation of virtual surgical plan applicability in 3D simulation-guided two-jaw surgery. J Craniomaxillofac Surg. 2019;47:860-866.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 9]  [Cited by in F6Publishing: 9]  [Article Influence: 1.8]  [Reference Citation Analysis (0)]
56.  Lin HH, Lonic D, Lo LJ. 3D printing in orthognathic surgery - A literature review. J Formos Med Assoc. 2018;117:547-558.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 80]  [Cited by in F6Publishing: 85]  [Article Influence: 14.2]  [Reference Citation Analysis (0)]
57.  Shin W, Yeom HG, Lee GH, Yun JP, Jeong SH, Lee JH, Kim HK, Kim BC. Deep learning based prediction of necessity for orthognathic surgery of skeletal malocclusion using cephalogram in Korean individuals. BMC Oral Health. 2021;21:130.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 8]  [Cited by in F6Publishing: 34]  [Article Influence: 11.3]  [Reference Citation Analysis (0)]
58.  Lin HH, Chiang WC, Yang CT, Cheng CT, Zhang T, Lo LJ. On construction of transfer learning for facial symmetry assessment before and after orthognathic surgery. Comput Methods Programs Biomed. 2021;200:105928.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 16]  [Cited by in F6Publishing: 18]  [Article Influence: 6.0]  [Reference Citation Analysis (0)]
59.  Lo LJ, Yang CT, Ho CT, Liao CH, Lin HH. Automatic Assessment of 3-Dimensional Facial Soft Tissue Symmetry Before and After Orthognathic Surgery Using a Machine Learning Model: A Preliminary Experience. Ann Plast Surg. 2021;86:S224-S228.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 18]  [Cited by in F6Publishing: 18]  [Article Influence: 6.0]  [Reference Citation Analysis (0)]
60.  Knoops PGM, Papaioannou A, Borghi A, Breakey RWF, Wilson AT, Jeelani O, Zafeiriou S, Steinbacher D, Padwa BL, Dunaway DJ, Schievano S. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci Rep. 2019;9:13597.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 42]  [Cited by in F6Publishing: 54]  [Article Influence: 10.8]  [Reference Citation Analysis (0)]
61.  Patcas R, Bernini DAJ, Volokitin A, Agustsson E, Rothe R, Timofte R. Applying artificial intelligence to assess the impact of orthognathic treatment on facial attractiveness and estimated age. Int J Oral Maxillofac Surg. 2019;48:77-83.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 49]  [Cited by in F6Publishing: 70]  [Article Influence: 11.7]  [Reference Citation Analysis (0)]
62.  Alghamdi HS, Jansen JA. The development and future of dental implants. Dent Mater J. 2020;39:167-172.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 31]  [Cited by in F6Publishing: 39]  [Article Influence: 9.8]  [Reference Citation Analysis (0)]
63.  Kurt Bayrakdar S, Orhan K, Bayrakdar IS, Bilgir E, Ezhov M, Gusarev M, Shumilov E. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med Imaging. 2021;21:86.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 14]  [Cited by in F6Publishing: 56]  [Article Influence: 18.7]  [Reference Citation Analysis (0)]
64.  Ha SR, Park HS, Kim EH, Kim HK, Yang JY, Heo J, Yeo IL. A pilot study using machine learning methods about factors influencing prognosis of dental implants. J Adv Prosthodont. 2018;10:395-400.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 5]  [Cited by in F6Publishing: 6]  [Article Influence: 1.0]  [Reference Citation Analysis (0)]
65.  Lee DW, Kim SY, Jeong SN, Lee JH. Artificial Intelligence in Fractured Dental Implant Detection and Classification: Evaluation Using Dataset from Two Dental Hospitals. Diagnostics (Basel). 2021;11:233.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 6]  [Cited by in F6Publishing: 27]  [Article Influence: 9.0]  [Reference Citation Analysis (0)]
66.  Mameno T, Wada M, Nozaki K, Takahashi T, Tsujioka Y, Akema S, Hasegawa D, Ikebe K. Predictive modeling for peri-implantitis by using machine learning techniques. Sci Rep. 2021;11:11090.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 3]  [Cited by in F6Publishing: 3]  [Article Influence: 1.0]  [Reference Citation Analysis (0)]
67.  Sukegawa S, Yoshii K, Hara T, Yamashita K, Nakano K, Yamamoto N, Nagatsuka H, Furuki Y. Deep Neural Networks for Dental Implant System Classification. Biomolecules. 2020;10:984.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 46]  [Cited by in F6Publishing: 69]  [Article Influence: 17.3]  [Reference Citation Analysis (0)]
68.  Sukegawa S, Yoshii K, Hara T, Matsuyama T, Yamashita K, Nakano K, Takabatake K, Kawai H, Nagatsuka H, Furuki Y. Multi-Task Deep Learning Model for Classification of Dental Implant Brand and Treatment Stage Using Dental Panoramic Radiograph Images. Biomolecules. 2021;11:815.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 34]  [Article Influence: 11.3]  [Reference Citation Analysis (0)]
69.  Hadj Saïd M, Le Roux MK, Catherine JH, Lan R. Development of an Artificial Intelligence Model to Identify a Dental Implant from a Radiograph. Int J Oral Maxillofac Implants. 2020;36:1077-1082.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 17]  [Cited by in F6Publishing: 26]  [Article Influence: 6.5]  [Reference Citation Analysis (0)]
70.  Roy S, Dey S, Khutia N, Chowdhury AR, Datta S. Design of patient specific dental implant using FE analysis and computational intelligence techniques. Appl Soft Comput. 2018;65:272-9.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 36]  [Cited by in F6Publishing: 38]  [Article Influence: 6.3]  [Reference Citation Analysis (0)]
71.  Mupparapu M, Wu CW, Chen YC. Artificial intelligence, machine learning, neural networks, and deep learning: Futuristic concepts for new dental diagnosis. Quintessence Int. 2018;49:687-688.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in F6Publishing: 9]  [Reference Citation Analysis (0)]
72.  Rizzo S, Botta F, Raimondi S, Origgi D, Fanciullo C, Morganti AG, Bellomi M. Radiomics: the facts and the challenges of image analysis. Eur Radiol Exp. 2018;2:36.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 413]  [Cited by in F6Publishing: 595]  [Article Influence: 99.2]  [Reference Citation Analysis (0)]
73.  Ma Q, Kobayashi E, Fan B, Nakagawa K, Sakuma I, Masamune K, Suenaga H. Automatic 3D landmarking model using patch-based deep neural networks for CT image of oral and maxillofacial surgery. Int J Med Robot. 2020;16:e2093.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 19]  [Cited by in F6Publishing: 21]  [Article Influence: 5.3]  [Reference Citation Analysis (0)]
74.  Leite AF, Vasconcelos KF, Willems H, Jacobs R. Radiomics and Machine Learning in Oral Healthcare. Proteomics Clin Appl. 2020;14:e1900040.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 38]  [Cited by in F6Publishing: 54]  [Article Influence: 13.5]  [Reference Citation Analysis (0)]