1
Jani G, Patel B. Charting the growth through intelligence: A SWOC analysis on AI-assisted radiologic bone age estimation. Int J Legal Med 2025;139:679-694. PMID: 39460772; DOI: 10.1007/s00414-024-03356-3.
Abstract
Bone age estimation (BAE) is based on skeletal maturity and the degenerative processes of the skeleton. Clinically, BAE is important for understanding pediatric and growth-related disorders; medicolegally, it is important for determining criminal responsibility and establishing identification. Artificial intelligence (AI) has been used in the field of medicine, specifically in diagnostics using medical images. AI can greatly benefit BAE techniques by decreasing intra-observer and inter-observer variability and by reducing analytical time. AI techniques rely on object identification, feature extraction, and segregation, and bone age assessment is a classical example where AI concepts such as object recognition and segregation can be used effectively. The paper describes various AI-based algorithms developed for radiologic BAE and the performance of these models. In the current paper we have also carried out a qualitative analysis using Strengths, Weaknesses, Opportunities, and Challenges (SWOC) to examine critical factors that contribute to the application of AI in BAE. To the best of our knowledge, this is the first SWOC analysis assessing the applicability of AI in BAE. Based on the SWOC analysis, we provide strategies for the successful implementation of AI in BAE in the forensic and medicolegal context.
Affiliation(s)
- Gargi Jani
- School of Medico-Legal Studies, National Forensic Sciences University, Sector 9, Gandhinagar, 382007, Gujarat, India
- Bhoomika Patel
- School of Medico-Legal Studies, National Forensic Sciences University, Sector 9, Gandhinagar, 382007, Gujarat, India
2
Barszcz M, Woźniak KJ. A review of methods of age estimation based on postmortem computed tomography. Forensic Sci Res 2025;10:owae036. PMID: 39990697; PMCID: PMC11839505; DOI: 10.1093/fsr/owae036.
Abstract
Age at death is one of the key elements of the "biological profile" prepared when analysing unidentified human remains. Biological age is determined according to physiological indicators and developmental stage, which can be determined by bone assessment. It is worth remembering that the researcher must interpret each case individually and in accordance with the current state of knowledge. One of the most developed tools for analysing human remains is postmortem computed tomography. This allows for the visualization not only of bones without maceration but also of the entire body under various altered states, including corpses in advanced stages of decomposition and burnt bodies. The aim of this review is to present the current methods for age estimation based on postmortem computed tomography evaluation, comparing the results presented in 18 research projects published between 2013 and 2023 on foetuses, children, and adults from contemporary populations. Recent literature includes assessment of bones and characteristics such as skulls, teeth, vertebrae, pelvises, and long bones to estimate age at death. We cover the methods used in this recent literature, including machine learning, and discuss their advantages and disadvantages.
Key points:
- Postmortem computed tomography allows the analysis of several areas of the body at the same time, which may not be possible in clinical trials (where the examination area should be limited).
- Postmortem computed tomography may enable the collection of data from people whose clinical examinations are relatively rare (e.g., pregnant women, children).
- Artificial intelligence should increasingly be used in studies on age estimation.
- Further research on modern populations is necessary to verify and refine the methods used to estimate age at death.
Affiliation(s)
- Marta Barszcz
- Department of Forensic Medicine, Jagiellonian University Medical College, Krakow, Poland
- Doctoral School of Medical and Health Sciences, Jagiellonian University Medical College, Krakow, Poland
3
Pape J, Rosolowski M, Pfäffle R, Beeskow AB, Gräfe D. A critical comparative study of the performance of three AI-assisted programs for bone age determination. Eur Radiol 2025;35:1190-1196. PMID: 39499301; PMCID: PMC11835896; DOI: 10.1007/s00330-024-11169-6.
Abstract
OBJECTIVES To date, AI-supported programs for bone age (BA) determination for medical use in Europe have almost only been validated separately, according to Greulich and Pyle (G&P). Therefore, the current study aimed to compare the performance of three programs, namely BoneXpert, PANDA, and BoneView, on a single Central European population. MATERIALS AND METHODS For this retrospective study, hand radiographs of 306 children aged 1-18 years, stratified by gender and age, were included. A subgroup consisting of the age group accounting for 90% of examinations in clinical practice was formed. The G&P BA was estimated by three human experts (as ground truth) and three AI-supported programs. The mean absolute deviation, the root mean squared error (RMSE), and dropouts by the AI were calculated. RESULTS The correlation between all programs and the ground truth was strong (R² ≥ 0.98). In the total group, BoneXpert had a lower RMSE than BoneView and PANDA (0.62 vs. 0.65 and 0.75 years) with dropout rates of 2.3%, 20.3% and 0%, respectively. In the subgroup, there was less difference in RMSE (0.66 vs. 0.68 and 0.65 years, max. 4% dropouts). The standard deviation between the AI readers was lower than that between the human readers (0.54 vs. 0.62 years, p < 0.01). CONCLUSION All three AI programs predict BA after G&P in the main age range with similar high reliability. Differences arise at the boundaries of childhood. KEY POINTS Question There is a lack of comparative, independent validation for artificial intelligence-based bone age estimation in children. Findings Three commercially available programs estimate bone age after Greulich and Pyle with similarly high reliability in a central European cohort. Clinical relevance The comparative study will help the reader choose a software for bone age estimation approved for the European market depending on the targeted age group and economic considerations.
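The MAE and RMSE figures reported in studies like this one are straightforward to reproduce from paired readings. A minimal sketch (the bone ages below are invented for illustration, not data from the study):

```python
import math

def mae_rmse(truth, pred):
    """Mean absolute error and root mean squared error of paired readings (years)."""
    diffs = [p - t for t, p in zip(truth, pred)]
    mae = sum(abs(d) for d in diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mae, rmse

# Hypothetical bone ages (years): ground truth vs. an AI reader.
truth = [5.0, 8.5, 11.0, 14.0, 16.5]
pred  = [5.4, 8.0, 11.5, 13.2, 17.0]
mae, rmse = mae_rmse(truth, pred)
```

Because RMSE squares each difference before averaging, it penalizes large single-case misses more heavily than MAE, which is why the two metrics can rank models differently.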
Affiliation(s)
- Johanna Pape
- Department of Pediatric Radiology, University Hospital, 04103, Leipzig, Germany
- Maciej Rosolowski
- Institute for Medical Informatics, Statistics and Epidemiology, Leipzig University, 04107, Leipzig, Germany
- Roland Pfäffle
- Department of Pediatrics, University Hospital, 04103, Leipzig, Germany
- Anne B Beeskow
- Department of Diagnostic and Interventional Radiology, University Hospital, 04103, Leipzig, Germany
- Daniel Gräfe
- Department of Pediatric Radiology, University Hospital, 04103, Leipzig, Germany
4
Sharifi G, Hajibeygi R, Zamani SAM, Easa AM, Bahrami A, Eshraghi R, Moafi M, Ebrahimi MJ, Fathi M, Mirjafari A, Chan JS, Dixe de Oliveira Santo I, Anar MA, Rezaei O, Tu LH. Diagnostic performance of neural network algorithms in skull fracture detection on CT scans: a systematic review and meta-analysis. Emerg Radiol 2025;32:97-111. PMID: 39680295; DOI: 10.1007/s10140-024-02300-7.
Abstract
BACKGROUND AND AIM The potential intricacy of skull fractures as well as the complexity of underlying anatomy poses diagnostic hurdles for radiologists evaluating computed tomography (CT) scans. The necessity for automated diagnostic tools has been brought to light by the shortage of radiologists and the growing demand for rapid and accurate fracture diagnosis. Convolutional neural networks (CNNs) are a potential new class of medical imaging technologies that use deep learning (DL) to improve diagnostic accuracy. The objective of this systematic review and meta-analysis is to assess how well CNN models diagnose skull fractures on CT images. METHODS PubMed, Scopus, and Web of Science were searched for studies published before February 2024 that used CNN models to detect skull fractures on CT scans. Meta-analyses were conducted for area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. Egger's and Begg's tests were used to assess publication bias. RESULTS Meta-analysis was performed for 11 studies with 20,798 patients. The pooled AUC for CNN models that implemented pre-training (transfer learning) in their architecture was 0.96 ± 0.02. The pooled sensitivity and specificity across studies were 1.0 and 0.93, respectively, and the pooled accuracy was 0.92 ± 0.04. Studies showed heterogeneity, which was explained by differences in model topologies, training models, and validation techniques. There was no significant publication bias detected. CONCLUSION CNN models perform well in identifying skull fractures on CT scans. Although there is considerable heterogeneity and possibly publication bias, the results suggest that CNNs have the potential to improve diagnostic accuracy in the imaging of acute skull trauma. To further enhance these models' practical applicability, future studies could concentrate on the utility of DL models in prospective clinical trials.
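The sensitivity, specificity, and accuracy figures pooled above each derive directly from confusion-matrix counts on a test set. A minimal sketch with hypothetical counts (not the studies' data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sens = tp / (tp + fn)                      # fraction of fractures detected
    spec = tn / (tn + fp)                      # fraction of normals cleared
    acc = (tp + tn) / (tp + fp + fn + tn)      # overall agreement
    return sens, spec, acc

# Hypothetical counts for a skull-fracture CNN evaluated on 190 CT scans.
sens, spec, acc = diagnostic_metrics(tp=90, fp=7, fn=0, tn=93)
```

Note that a meta-analytic pooled value additionally weights each study's estimate (e.g., by inverse variance); the sketch only shows the per-study computation.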
Affiliation(s)
- Guive Sharifi
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ramtin Hajibeygi
- Tehran University of Medical Sciences, School of Medicine, Tehran, Iran
- Ahmed Mohamedbaqer Easa
- Department of Radiology Technology, College of Health and Medical Technology, Al-Ayen Iraqi University, Thi-Qar, 64001, Iraq
- Maral Moafi
- Cell Biology and Anatomical Sciences, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Javad Ebrahimi
- Cell Biology and Anatomical Sciences, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mobina Fathi
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Arshia Mirjafari
- Department of Radiological Sciences, University of California, Los Angeles, CA, USA
- College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, Pomona, CA, USA
- Janine S Chan
- Keck School of Medicine of USC, Los Angeles, CA, USA
- Omidvar Rezaei
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Long H Tu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, CT, USA
5
Goertz L, Jünger ST, Reinecke D, von Spreckelsen N, Shahzad R, Thiele F, Laukamp KR, Timmer M, Gertz RJ, Gietzen C, Kaya K, Grunz JP, Schlamann M, Kabbasch C, Borggrefe J, Pennig L. Deep learning-assistance significantly increases the detection sensitivity of neurosurgery residents for intracranial aneurysms in subarachnoid hemorrhage. J Clin Neurosci 2025;132:110971. PMID: 39673838; DOI: 10.1016/j.jocn.2024.110971.
Abstract
OBJECTIVE The purpose of this study was to evaluate the effectiveness of a deep learning model (DLM) in improving the sensitivity of neurosurgery residents to detect intracranial aneurysms on CT angiography (CTA) in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS In this diagnostic accuracy study, a set of 104 CTA scans of aSAH patients containing a total of 126 aneurysms were presented to three blinded neurosurgery residents (a first-year, third-year, and fifth-year resident), who individually assessed them for aneurysms. After the initial reading, the residents were given the predictions of a dedicated DLM previously established for automated detection and segmentation of intracranial aneurysms. The detection sensitivities for aneurysms of the DLM and the residents with and without the assistance of the DLM were compared. RESULTS The DLM had a detection sensitivity of 85.7%, while the residents showed detection sensitivities of 77.8%, 86.5%, and 87.3% without DLM assistance. After being provided with the DLM's results, the residents' individual detection sensitivities increased to 97.6%, 95.2%, and 98.4%, respectively, yielding an average increase of 13.2%. The DLM was particularly useful in detecting small aneurysms. In addition, interrater agreement among residents increased from a Fleiss κ of 0.394 without DLM assistance to 0.703 with DLM assistance. CONCLUSIONS The results of this pilot study suggest that deep learning models can help neurosurgeons detect aneurysms on CTA and make appropriate treatment decisions when immediate radiological consultation is not possible.
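The interrater statistic reported above, Fleiss' κ, is computed from a subjects-by-categories table of rating counts. A minimal sketch (the ratings below are invented for illustration, not the study's data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i][j] = number of raters assigning subject i to category j."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Observed agreement: mean per-subject proportion of agreeing rater pairs.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement from the marginal category proportions.
    n_cats = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(n_cats)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 3 residents labelling 4 vessel segments as aneurysm / no aneurysm.
ratings = [[3, 0], [0, 3], [2, 1], [3, 0]]
kappa = fleiss_kappa(ratings)
```

The statistic corrects raw agreement for the agreement expected by chance, so a rise from 0.394 to 0.703 (as in the study) reflects genuinely more consistent reads, not just more unbalanced labels.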
Affiliation(s)
- Lukas Goertz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Center for Neurosurgery, Department of General Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital, Cologne, Germany
- Stephanie T Jünger
- Center for Neurosurgery, Department of General Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital, Cologne, Germany
- David Reinecke
- Center for Neurosurgery, Department of General Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital, Cologne, Germany
- Niklas von Spreckelsen
- Center for Neurosurgery, Department of General Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital, Cologne, Germany
- Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Innovative Technologies, Philips Healthcare, Aachen, Germany
- Frank Thiele
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Innovative Technologies, Philips Healthcare, Aachen, Germany
- Kai Roman Laukamp
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Marco Timmer
- Center for Neurosurgery, Department of General Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital, Cologne, Germany
- Roman Johannes Gertz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Carsten Gietzen
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Kenan Kaya
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Jan-Peter Grunz
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Marc Schlamann
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Christoph Kabbasch
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Jan Borggrefe
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital Minden, Ruhr University Bochum, Bochum, Germany
- Lenhard Pennig
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
6
Yuan W, Fan P, Zhang L, Pan W, Zhang L. Bone Age Assessment Using Various Medical Imaging Techniques Enhanced by Artificial Intelligence. Diagnostics (Basel) 2025;15:257. PMID: 39941187; PMCID: PMC11817689; DOI: 10.3390/diagnostics15030257.
Abstract
Bone age (BA) reflects skeletal maturity and is crucial in clinical and forensic contexts, particularly for growth assessment, adult height prediction, and managing conditions like short stature and precocious puberty, often using X-ray, MRI, CT, or ultrasound imaging. Traditional BA assessment methods, including the Greulich-Pyle and Tanner-Whitehouse techniques, compare morphological changes to reference atlases. Despite their effectiveness, factors like genetics and environment complicate evaluations, emphasizing the need for new methods that account for comprehensive variations in skeletal maturity. The limitations of classical BA assessment methods increase the demand for automated solutions. The first automated tool, HANDX, was introduced in 1989. Researchers now focus on developing reliable artificial intelligence (AI)-driven tools, utilizing machine learning and deep learning techniques to improve accuracy and efficiency in BA evaluations, addressing traditional methods' shortcomings. Recent reviews on BA assessment methods rarely compare AI-based approaches across imaging technologies. This article explores advancements in BA estimation, focusing on machine learning methods and their clinical implications while providing a historical context and highlighting each approach's benefits and limitations.
Affiliation(s)
- Wenhao Yuan
- Information Technology Center, Wenzhou Medical University, Wenzhou 325035, China
- Department of Mathematics and Statistics, Chonnam National University, Gwangju 61186, Republic of Korea
- Pei Fan
- Department of Orthopaedics, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou 325027, China
- Le Zhang
- Information Technology Center, Wenzhou Medical University, Wenzhou 325035, China
- Wenbiao Pan
- Information Technology Center, Wenzhou Medical University, Wenzhou 325035, China
- Liwei Zhang
- State-Owned Assets and Laboratory Management Office, Wenzhou University, Wenzhou 325035, China
7
You W, Feng J, Lu J, Chen T, Liu X, Wu Z, Gong G, Sui Y, Wang Y, Zhang Y, Ye W, Chen X, Lv J, Wei D, Tang Y, Deng D, Gui S, Lin J, Chen P, Wang Z, Gong W, Wang Y, Zhu C, Zhang Y, Saloner DA, Mitsouras D, Guan S, Li Y, Jiang Y, Wang Y. Diagnosis of intracranial aneurysms by computed tomography angiography using deep learning-based detection and segmentation. J Neurointerv Surg 2024;17:e132-e138. PMID: 38238009; DOI: 10.1136/jnis-2023-021022.
Abstract
BACKGROUND Detecting and segmenting intracranial aneurysms (IAs) from angiographic images is a laborious task. OBJECTIVE To evaluate a novel deep-learning algorithm, named vessel attention (VA)-Unet, for the efficient detection and segmentation of IAs. METHODS This retrospective study was conducted using head CT angiography (CTA) examinations depicting IAs from two hospitals in China between 2010 and 2021. Training included cases with subarachnoid hemorrhage (SAH) and arterial stenosis, common accompanying vascular abnormalities. Testing was performed in cohorts with reference-standard digital subtraction angiography (cohort 1), with SAH (cohort 2), acquired outside the time interval of training data (cohort 3), and an external dataset (cohort 4). The algorithm's performance was evaluated using sensitivity, recall, false positives per case (FPs/case), and Dice coefficient, with manual segmentation as the reference standard. RESULTS The study included 3190 CTA scans with 4124 IAs. Sensitivity, recall, and FPs/case for detection of IAs were, respectively, 98.58%, 96.17%, and 2.08 in cohort 1; 95.00%, 88.8%, and 3.62 in cohort 2; 96.00%, 93.77%, and 2.60 in cohort 3; and 96.17%, 94.05%, and 3.60 in external cohort 4. The segmentation accuracy, as measured by the Dice coefficient, was 0.78, 0.71, 0.71, and 0.66 for cohorts 1-4, respectively. VA-Unet detection recall, FPs/case, and segmentation accuracy were affected by several clinical factors, including aneurysm size, bifurcation aneurysms, and the presence of arterial stenosis and SAH. CONCLUSIONS VA-Unet detected and segmented IAs in head CTA comparably to expert interpretation. The proposed algorithm has significant potential to assist radiologists in efficiently detecting and segmenting IAs from CTA images.
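The segmentation accuracy above is reported as the Dice coefficient: twice the overlap of two masks divided by their combined size. A minimal sketch on toy binary masks (assumed values, not study data):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 sequences."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))  # overlapping voxels
    total = sum(mask_a) + sum(mask_b)                   # combined mask size
    return 2 * inter / total if total else 1.0          # both empty -> perfect

# Hypothetical flattened voxel masks: reference vs. predicted aneurysm segmentation.
ref  = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0]
d = dice(ref, pred)
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so a cohort value of 0.78 means the predicted and manual aneurysm volumes overlap substantially but not perfectly.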
Affiliation(s)
- Wei You
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Junqiang Feng
- Department of Neurosurgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jing Lu
- Department of Radiology, Third Medical Center of Chinese PLA General Hospital, Beijing, China
- Ting Chen
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Xinke Liu
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Zhenzhou Wu
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Guoyang Gong
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yutong Sui
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yanwen Wang
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yifan Zhang
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Wanxing Ye
- Artificial Intelligence Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
- Xiheng Chen
- Department of Neurosurgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jian Lv
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Dachao Wei
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Yudi Tang
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Dingwei Deng
- Department of Intervention, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Siming Gui
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Jun Lin
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Peike Chen
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Ziyao Wang
- Department of Interventional Neuroradiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Wentao Gong
- Department of Interventional Neuroradiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yang Wang
- Department of Neurosurgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Chengcheng Zhu
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Yue Zhang
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- David A Saloner
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Dimitrios Mitsouras
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Sheng Guan
- Department of Interventional Neuroradiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Youxiang Li
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Department of Neurointerventional Engineering and Technology (NO: BG0287), Beijing Engineering Research Center, Beijing, China
- Yuhua Jiang
- Department of Neurosurgery, Beijing Tiantan Hospital and Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Department of Neurointerventional Engineering and Technology (NO: BG0287), Beijing Engineering Research Center, Beijing, China
- Yan Wang
- San Francisco Veterans Affairs Medical Center, San Francisco, California, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
8
Gao C, Hu C, Qian Q, Li Y, Xing X, Gong P, Lin M, Ding Z. Artificial intelligence model system for bone age assessment of preschool children. Pediatr Res 2024;96:1822-1828. PMID: 38802611; PMCID: PMC11772234; DOI: 10.1038/s41390-024-03282-5.
Abstract
BACKGROUND Our study aimed to assess the impact of inter- and intra-observer variations when utilizing an artificial intelligence (AI) system for bone age assessment (BAA) of preschool children. METHODS A retrospective study was conducted involving a total sample of 53 female and 41 male individuals aged 3-6 years in China. Radiographs were assessed by four mid-level radiology reviewers using the TW3 and RUS-CHN methods. Bone age (BA) was analyzed in two separate situations, with and without the assistance of AI. Following a 4-week wash-out period, radiographs were reevaluated in the same manner. Accuracy metrics, the intraclass correlation coefficient (ICC), and Bland-Altman plots were employed. RESULTS The accuracy of BAA by the reviewers was significantly improved with AI. The RMSE and MAE decreased in both methods (p < 0.001). When comparing inter-observer agreement in both methods and intra-observer reproducibility in the two interpretations, the ICC results were improved with AI. The ICC values increased in both interpretations for both methods and exceeded 0.99 with AI. CONCLUSION In the assessment of BA for preschool children, AI was found to be capable of reducing inter-observer variability and enhancing intra-observer reproducibility, and can be considered an important tool for the clinical work of radiologists. IMPACT The RUS-CHN method is a special bone age method devised to be suitable for Chinese children. The preschool stage is a critical phase for children, marked by a high degree of variability that renders BA prediction challenging. The accuracy of BAA by the reviewers can be significantly improved with the aid of an AI model system. This study is the first to assess the impact of inter- and intra-observer variations when utilizing an AI model system for BAA of preschool children using both the TW3 and RUS-CHN methods.
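The Bland-Altman analysis used in the study summarizes agreement between two sets of readings as the mean difference (bias) plus 95% limits of agreement. A minimal sketch with hypothetical paired bone-age readings (not the study's data):

```python
import statistics

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two paired reading series."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical bone ages (years): one reviewer's reads without vs. with AI assistance.
without_ai = [4.0, 4.8, 5.5, 6.1]
with_ai    = [4.2, 4.7, 5.9, 6.0]
bias, lo, hi = bland_altman(without_ai, with_ai)
```

Narrower limits of agreement mean the two reading conditions rarely disagree by much, which is the effect the abstract describes for AI-assisted reads.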
Affiliation(s)
- Chengcheng Gao
- Department of Radiology, Hangzhou First People's Hospital, Hangzhou, China
- Chunfeng Hu
- Department of Radiology, Hangzhou First People's Hospital, Hangzhou, China
- The Fourth School of Clinical Medicine, Zhejiang Chinese Medicine University, Hangzhou, China
- Qi Qian
- Department of Radiology, The Third Affiliated Hospital of Zhejiang Chinese Medicine University, Hangzhou, China
- Yangsheng Li
- Department of Radiology, Hangzhou First People's Hospital, Hangzhou, China
- Xiaowei Xing
- Rehabilitation Medicine Center, Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital, Hangzhou Medical College, Hangzhou, China
- Min Lin
- Department of Radiology, The Third Affiliated Hospital of Zhejiang Chinese Medicine University, Hangzhou, China
- College of Humanities and Management, Zhejiang Chinese Medical University, Hangzhou, China
- Zhongxiang Ding
- Department of Radiology, Hangzhou First People's Hospital, Hangzhou, China
- Key Laboratory of Clinical Cancer Pharmacology and Toxicology Research of Zhejiang Province, Hangzhou, China
9
Liu X, Wang R, Jiang W, Lu Z, Chen N, Wang H. Automated Distal Radius and Ulna Skeletal Maturity Grading from Hand Radiographs with an Attention Multi-Task Learning Method. Tomography 2024;10:1915-1929. PMID: 39728901; DOI: 10.3390/tomography10120139.
Abstract
Background: Assessment of skeletal maturity is a common clinical practice to investigate adolescent growth and endocrine disorders. The distal radius and ulna (DRU) maturity classification is a practical and easy-to-use scheme that was designed for adolescent idiopathic scoliosis clinical management and presents high sensitivity in predicting the growth peak and cessation among adolescents. However, time-consuming and error-prone manual assessment limits the clinical application of DRU. Methods: In this study, we propose a multi-task learning framework with an attention mechanism for the joint segmentation and classification of the distal radius and ulna in hand X-ray images. The proposed framework consists of two sub-networks: an encoder-decoder structure with attention gates for segmentation and a slight convolutional network for classification. Results: With a transfer learning strategy, the proposed framework improved DRU segmentation and classification over the single-task learning counterparts and previously reported methods, achieving an accuracy of 94.3% and 90.8% for radius and ulna maturity grading. Findings: Our automatic DRU assessment platform covers the whole process of growth acceleration and cessation during puberty. Upon incorporation into advanced scoliosis progression prognostic tools, clinical decision making will potentially be improved in the conservative and operative management of scoliosis patients.
Affiliation(s)
- Xiaowei Liu: School of Computing and Artificial Intelligence, Shandong University of Finance and Economics, Jinan 250000, China
- Rulan Wang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- Wenting Jiang: Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong 999077
- Zhaohua Lu: School of Computing and Artificial Intelligence, Shandong University of Finance and Economics, Jinan 250000, China
- Ningning Chen: Department of Orthopedic Surgery, The Seventh Affiliated Hospital, Sun Yat-Sen University, Shenzhen 518000, China; Shenzhen Key Laboratory of Bone Tissue Repair and Translational Research, Shenzhen 518000, China
- Hongfei Wang: Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong 999077
10
Wu Y, Chen X, Dong F, He L, Cheng G, Zheng Y, Ma C, Yao H, Zhou S. Performance evaluation of a deep learning-based cascaded HRNet model for automatic measurement of X-ray imaging parameters of lumbar sagittal curvature. Eur Spine J 2024; 33:4104-4118. [PMID: 37787781 DOI: 10.1007/s00586-023-07937-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 04/03/2023] [Accepted: 08/30/2023] [Indexed: 10/04/2023]
Abstract
PURPOSE To develop a deep learning-based cascaded HRNet model to automatically measure X-ray imaging parameters of lumbar sagittal curvature and to evaluate its predictive performance. METHODS A total of 3730 lumbar lateral digital radiography (DR) images were collected from a picture archiving and communication system (PACS). Among them, 3150 images were randomly selected as the training and validation datasets, and 580 images as the test dataset. The landmarks of the lumbar curve index (LCI), lumbar lordosis angle (LLA), sacral slope (SS), lumbar lordosis index (LLI), and the posterior edge tangent angle of the vertebral body (PTA) were identified and marked. The measured landmark results on the test dataset were compared with the mean values of manual measurement as the reference standard. Percentage of correct keypoints (PCK), intra-class correlation coefficient (ICC), Pearson correlation coefficient (r), mean absolute error (MAE), mean squared error (MSE), root-mean-squared error (RMSE), and Bland-Altman plots were used to evaluate the performance of the cascaded HRNet model. RESULTS The PCK of the cascaded HRNet model was 97.9-100% at the 3 mm distance threshold. The mean differences between the reference standard and the predicted values for LCI, LLA, SS, LLI, and PTA were 0.43 mm, 0.99°, 1.11°, 0.01 mm, and 0.23°, respectively. The five parameters showed strong correlation and consistency between the cascaded HRNet model and manual measurements (ICC = 0.989-0.999, r = 0.991-0.999, MAE = 0.63-1.65, MSE = 0.61-4.06, RMSE = 0.78-2.01). CONCLUSION The cascaded HRNet model based on a deep learning algorithm could accurately identify the sagittal curvature-related landmarks on lateral lumbar DR images and automatically measure the relevant parameters, which is of great significance for clinical application.
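The PCK figure reported above (97.9-100% at a 3 mm threshold) has a simple definition that can be sketched in a few lines of numpy; the landmark coordinates below are invented for illustration:

```python
import numpy as np

def pck(pred, gt, threshold_mm, pixel_spacing=1.0):
    """Percentage of correct keypoints: fraction of predicted landmarks
    lying within `threshold_mm` of ground truth.

    pred, gt: (K, 2) arrays of landmark coordinates in pixels.
    pixel_spacing: mm per pixel, so distances are compared in millimetres.
    """
    dists = np.linalg.norm((pred - gt) * pixel_spacing, axis=1)
    return float(np.mean(dists <= threshold_mm))

# three toy landmarks; prediction errors of 1.0, ~3.6, and 15.0 units
gt   = np.array([[10.0, 10.0], [50.0, 40.0], [90.0, 80.0]])
pred = np.array([[11.0, 10.0], [52.0, 43.0], [90.0, 95.0]])
score = pck(pred, gt, threshold_mm=4.0)
```

With a 4-unit threshold, two of the three landmarks count as correct, so the score is 2/3.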
Affiliation(s)
- Yuhua Wu: The First Clinical Medical College of Gansu University of Chinese Medicine, Lanzhou, 730000, Gansu, China
- Xiaofei Chen, Fuwen Dong: Department of Radiology, Gansu Provincial Hospital of Traditional Chinese Medicine (The first affiliated hospital of Gansu University of Traditional Chinese Medicine), Lanzhou, 730050, Gansu, China
- Linyang He, Guohua Cheng: Hangzhou Jianpei Technology Company Ltd, Hangzhou, 311200, Zhejiang, China
- Yuwen Zheng, Chunyu Ma: The First Clinical Medical College of Gansu University of Chinese Medicine, Lanzhou, 730000, Gansu, China
- Hongyan Yao, Sheng Zhou: Department of Radiology, Gansu Provincial Hospital, No. 204, Donggang West Road, Lanzhou, 730000, Gansu, China
11
Hayes DS, Foster BK, Makar G, Manzar S, Ozdag Y, Shultz M, Klena JC, Grandizio LC. Artificial Intelligence in Orthopaedics: Performance of ChatGPT on Text and Image Questions on a Complete AAOS Orthopaedic In-Training Examination (OITE). J Surg Educ 2024; 81:1645-1649. [PMID: 39284250 DOI: 10.1016/j.jsurg.2024.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Revised: 03/27/2024] [Accepted: 08/10/2024] [Indexed: 10/11/2024]
Abstract
OBJECTIVE Artificial intelligence (AI) is capable of answering complex medical examination questions, offering the potential to revolutionize medical education and healthcare delivery. In this study we aimed to assess ChatGPT, a model that has demonstrated exceptional performance on standardized exams. Specifically, we focused on evaluating ChatGPT's performance on the complete 2019 Orthopaedic In-Training Examination (OITE), including questions with an image component. Furthermore, we explored differences in performance between text-only questions and questions with an associated image, including whether the image was described by AI or by a trained orthopaedist. DESIGN AND SETTING Questions from the 2019 OITE were input into ChatGPT version 4.0 (GPT-4) using 3 response variants. Because the capacity to input or interpret images was not publicly available in ChatGPT at the time of this study, images were described, and the descriptions appended to the OITE questions, using either Microsoft Azure AI Vision Studio or authors of the study. RESULTS ChatGPT performed equally on OITE questions with and without imaging components, answering an average of 49% and 48% correctly across all 3 input methods. Performance dropped by 6% when using image descriptions generated by AI. When using single-answer multiple-choice input methods, ChatGPT answered 49% of questions correctly, nearly double the rate of random guessing. ChatGPT's performance was worse than that of all resident classes on the 2019 exam, scoring 4% lower than PGY-1 residents. DISCUSSION ChatGPT performed below all resident classes on the 2019 OITE. Performance on text-only questions and questions with images was nearly equal when the image was described by a trained orthopaedic specialist but decreased when an AI-generated description was used. Recognizing the performance abilities of AI software may provide insight into the current and future applications of this technology in medical education.
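The "nearly double the rate of random guessing" comparison can be made concrete with an exact binomial tail computation using only the standard library; the question count used below is illustrative, not taken from the article:

```python
from math import comb

def p_at_least(n, k, p=0.25):
    """P[X >= k] for X ~ Binomial(n, p): the chance that random guessing
    on n four-option questions gets at least k right."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Assume roughly 275 questions for illustration and treat 49% correct
# as the observed score. The exact count is an assumption, not a figure
# from the article.
n = 275
tail = p_at_least(n, int(0.49 * n))
```

The tail probability is astronomically small, which is why a 49% score is far beyond what guessing (expected 25%) could plausibly produce.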
Affiliation(s)
- Daniel S Hayes, Brian K Foster, Gabriel Makar, Shahid Manzar, Yagiz Ozdag, Mason Shultz, Joel C Klena, Louis C Grandizio: Department of Orthopaedic Surgery, Geisinger Commonwealth School of Medicine, Geisinger Musculoskeletal Institute, Danville, PA
12
Weller JH, Scheese D, Tragesser C, Yi PH, Alaish SM, Hackam DJ. Artificial Intelligence vs. Doctors: Diagnosing Necrotizing Enterocolitis on Abdominal Radiographs. J Pediatr Surg 2024; 59:161592. [PMID: 38955625 PMCID: PMC11401766 DOI: 10.1016/j.jpedsurg.2024.06.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/09/2024] [Revised: 05/30/2024] [Accepted: 06/03/2024] [Indexed: 07/04/2024]
Abstract
BACKGROUND Radiographic diagnosis of necrotizing enterocolitis (NEC) is challenging. Deep learning models may improve diagnostic accuracy by recognizing subtle imaging patterns. We hypothesized that such a model would perform with accuracy comparable to that of senior surgical residents. METHODS This cohort study compiled 494 anteroposterior neonatal abdominal radiographs (214 NEC, 280 other) and randomly divided them into training, validation, and test sets. Transfer learning was used to fine-tune a ResNet-50 deep convolutional neural network (DCNN) pre-trained on ImageNet. Gradient-weighted Class Activation Mapping (Grad-CAM) heatmaps visualized the image regions of greatest relevance to the network. Senior surgery residents at a single institution examined the test set. The ability of residents and the DCNN to identify pneumatosis on radiographic images was measured via the area under the receiver operating characteristic curve (AUROC) and compared using DeLong's method. RESULTS The network achieved an AUROC of 0.918 (95% CI, 0.837-0.978) and an accuracy of 87.8%, with five false negative and one false positive predictions. Heatmaps confirmed that the network emphasized appropriate image regions. Senior surgical residents had a median AUROC of 0.896, ranging from 0.778 (95% CI 0.615-0.941) to 0.991 (95% CI 0.971-0.999), with zero to five false negatives and one to eleven false positives. The DCNN performed comparably to each surgical resident (p > 0.05 for all comparisons). CONCLUSIONS A deep convolutional neural network trained to recognize pneumatosis can quickly and accurately assist clinicians in promptly identifying NEC in clinical practice. LEVEL OF EVIDENCE III (study type: Study of Diagnostic Test, study of nonconsecutive patients without a universally applied "gold standard").
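The AUROC values compared above can be computed without any plotting, via the rank-sum (Mann-Whitney U) identity; a minimal numpy sketch with toy labels and scores:

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum identity:
    AUROC = P(score_pos > score_neg) + 0.5 * P(score_pos == score_neg)."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    # pairwise comparisons: fine for small samples, O(n_pos * n_neg)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy example: 3 positive (NEC) and 3 negative cases with model scores
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
a = auroc(y, s)
```

Here one of the nine positive/negative pairs is mis-ranked, giving an AUROC of 8/9. DeLong's method then compares two such correlated AUROCs on the same test set.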
Affiliation(s)
- Jennine H Weller, Daniel Scheese, Cody Tragesser, Samuel M Alaish, David J Hackam: Division of Pediatric Surgery, Department of Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Paul H Yi: Malone Center for Engineering in Healthcare, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
13
Wang X, Huang X. Risk factors and predictive indicators of rupture in cerebral aneurysms. Front Physiol 2024; 15:1454016. [PMID: 39301423 PMCID: PMC11411460 DOI: 10.3389/fphys.2024.1454016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2024] [Accepted: 08/23/2024] [Indexed: 09/22/2024] Open
Abstract
Cerebral aneurysms are abnormal dilations of blood vessels in the brain that can rupture, leading to subarachnoid hemorrhage and other serious complications. Early detection and prediction of aneurysm rupture are crucial for effective management and for preventing rupture-related morbidity and mortality. This review aims to summarize the current knowledge on risk factors and predictive indicators of rupture in cerebral aneurysms. Morphological characteristics such as aneurysm size, shape, and location, as well as hemodynamic factors including blood flow patterns and wall shear stress, have been identified as important influences on aneurysm stability and rupture risk. In addition to these traditional factors, emerging evidence suggests that biological and genetic factors, such as inflammation, extracellular matrix remodeling, and genetic polymorphisms, may also play significant roles in aneurysm rupture. Furthermore, advancements in computational fluid dynamics and machine learning algorithms have enabled the development of novel predictive models for rupture risk assessment. However, challenges remain in accurately predicting aneurysm rupture, and further research is needed to validate these predictors and integrate them into clinical practice. By identifying the risk factors and predictive indicators associated with aneurysm rupture, we can enhance personalized risk assessment and optimize treatment strategies for patients with cerebral aneurysms.
Affiliation(s)
- Xiguang Wang, Xu Huang: Department of Research & Development Management, Shanghai Aohua Photoelectricity Endoscope Co., Ltd., Shanghai, China
14
Zhang Q, Zhao F, Zhang Y, Huang M, Gong X, Deng X. Automated measurement of lumbar pedicle screw parameters using deep learning algorithm on preoperative CT scans. J Bone Oncol 2024; 47:100627. [PMID: 39188420 PMCID: PMC11345936 DOI: 10.1016/j.jbo.2024.100627] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2024] [Revised: 07/13/2024] [Accepted: 07/22/2024] [Indexed: 08/28/2024] Open
Abstract
Purpose This study aims to devise and assess an automated measurement framework for lumbar pedicle screw parameters leveraging preoperative computed tomography (CT) scans and a deep learning algorithm. Methods A deep learning model was constructed employing a dataset comprising 1410 axial preoperative CT images of lumbar pedicles sourced from 282 patients. The model was trained to predict several screw parameters, including the axial angle and width of pedicles, the length of pedicle screw paths, and the interpedicular distance. The mean values of these parameters, as determined by two radiologists and one spinal surgeon, served as the reference standard. Results The deep learning model achieved high agreement with the reference standard for the axial angle of the left pedicle (ICC = 0.92) and right pedicle (ICC = 0.93), as well as for the length of the left pedicle screw path (ICC = 0.82) and right pedicle (ICC = 0.87). Similarly, high agreement was observed for pedicle width (left ICC = 0.97, right ICC = 0.98) and interpedicular distance (ICC = 0.91). Overall, the model's performance paralleled that of manual determination of lumbar pedicle screw parameters. Conclusion The developed deep learning-based model demonstrates proficiency in accurately identifying landmarks on preoperative CT scans and autonomously generating parameters relevant to lumbar pedicle screw placement. These findings suggest its potential to offer efficient and precise measurements for clinical applications.
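The ICC values reported above quantify absolute agreement between the automated and manual measurements. A minimal numpy sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement, the common choice for method-agreement studies); the pedicle-width reads below are hypothetical:

```python
import numpy as np

def icc_2_1(M):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    M: (n_subjects, k_raters) matrix of measurements."""
    n, k = M.shape
    grand = M.mean()
    row_means = M.mean(axis=1)           # per-subject means
    col_means = M.mean(axis=0)           # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    mse = np.sum(
        (M - row_means[:, None] - col_means[None, :] + grand) ** 2
    ) / ((n - 1) * (k - 1))                                 # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical model-vs-reference pedicle widths (mm) for five levels
M = np.array([[7.1, 7.0], [8.3, 8.4], [6.5, 6.6], [9.0, 8.9], [7.8, 7.8]])
val = icc_2_1(M)
```

Because the two columns disagree by at most 0.1 mm while the subjects span several millimetres, the ICC comes out near 1, mirroring the high agreement reported in the study.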
Affiliation(s)
- Qian Zhang: Department of Radiology, The 901st Hospital of the Joint Logistics Support Force of PLA, Hefei 230031, China; Soochow University, Soochow 215000, China; Department of Radiology, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital), Hangzhou Medical College, Hangzhou 310014, China
- Fanfan Zhao: Department of Radiology, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital), Hangzhou Medical College, Hangzhou 310014, China
- Yu Zhang, Man Huang: Department of Radiology, The 901st Hospital of the Joint Logistics Support Force of PLA, Hefei 230031, China
- Xiangyang Gong: Soochow University, Soochow 215000, China; Department of Radiology, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital), Hangzhou Medical College, Hangzhou 310014, China
- Xuefei Deng: Department of Anatomy, Anhui Medical University, Hefei 230032, China
15
Hamd ZY, Alorainy AI, Alharbi MA, Hamdoun A, Alkhedeiri A, Alhegail S, Absar N, Khandaker MU, Osman AFI. Deep learning-based automated bone age estimation for Saudi patients on hand radiograph images: a retrospective study. BMC Med Imaging 2024; 24:199. [PMID: 39090563 PMCID: PMC11295702 DOI: 10.1186/s12880-024-01378-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2024] [Accepted: 07/24/2024] [Indexed: 08/04/2024] Open
Abstract
PURPOSE In pediatric medicine, precise estimation of bone age is essential for skeletal maturity evaluation, growth disorder diagnosis, and therapeutic intervention planning. Conventional techniques for determining bone age depend on radiologists' subjective judgments, which may lead to non-negligible differences in the estimated bone age. This study proposes a deep learning-based model utilizing a fully connected convolutional neural network (CNN) to predict bone age from left-hand radiographs. METHODS The data set used in this study, consisting of 473 patients, was retrospectively retrieved from the Picture Archiving and Communication System (PACS) of a single institution. We developed a CNN consisting of four convolutional blocks, three fully connected layers, and a single output neuron. The model was trained and validated on 80% of the data using the mean squared error as a cost function, minimizing the difference between predicted and reference bone age values through the Adam optimization algorithm. Data augmentation applied to the training and validation sets doubled the number of data samples. The performance of the trained model was evaluated on a test data set (20%) using various metrics, including the mean absolute error (MAE), median absolute error (MedAE), root-mean-squared error (RMSE), and mean absolute percentage error (MAPE). The code of the developed model for predicting bone age in this study is publicly available on GitHub at https://github.com/afiosman/deep-learning-based-bone-age-estimation .
RESULTS Experimental results demonstrate the sound capability of our model in predicting bone age from left-hand radiographs: in the majority of cases, the predicted and reference bone ages were close to each other, with an MAE of 2.3 [1.9, 2.7; 0.95 confidence level] years, MedAE of 2.1 years, RMSE of 3.0 [1.5, 4.5; 0.95 confidence level] years, and MAPE of 0.29 (29%) on the test data set. CONCLUSION These findings highlight the usability of estimating bone age from left-hand radiographs, helping radiologists verify their own results while considering the margin of error of the model. The performance of our proposed model could be improved with additional refinement and validation.
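The four error metrics used in this evaluation are straightforward to reproduce; a small numpy sketch with made-up bone ages (in years):

```python
import numpy as np

def bone_age_metrics(y_true, y_pred):
    """Error metrics commonly reported for bone-age regression (ages in years)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_pred - y_true
    return {
        "MAE": float(np.mean(np.abs(err))),            # mean absolute error
        "MedAE": float(np.median(np.abs(err))),        # median absolute error
        "RMSE": float(np.sqrt(np.mean(err ** 2))),     # root-mean-squared error
        "MAPE": float(np.mean(np.abs(err) / y_true)),  # mean abs. percentage error
    }

# four hypothetical (reference, predicted) bone-age pairs
m = bone_age_metrics([10.0, 12.0, 8.0, 15.0], [11.0, 11.0, 10.0, 15.0])
```

RMSE penalizes large errors more than MAE, which is why the study's RMSE (3.0 years) exceeds its MAE (2.3 years).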
Affiliation(s)
- Zuhal Y Hamd, Amal I Alorainy: Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Anas Hamdoun: Medical Imaging Department, KAAUH, Riyadh, Saudi Arabia
- Nurul Absar: Department of Computer Science & Engineering, BGC Trust University Bangladesh, Chittagong, 4301, Bangladesh
- Mayeen Uddin Khandaker: Applied Physics and Radiation Technologies Group, CCDCU, School of Engineering and Technology, Sunway University, Bandar Sunway, Subang jaya, 47500, Malaysia; Faculty of Graduate Studies, Daffodil International University, Daffodil Smart City, Birulia, Savar, Dhaka, 1216, Bangladesh
- Alexander F I Osman: Department of Medical Physics, Al-Neelain University, Khartoum, 11121, Sudan
16
Gräfe D, Beeskow AB, Pfäffle R, Rosolowski M, Chung TS, DiFranco MD. Automated bone age assessment in a German pediatric cohort: agreement between an artificial intelligence software and the manual Greulich and Pyle method. Eur Radiol 2024; 34:4407-4413. [PMID: 38151536 PMCID: PMC11213793 DOI: 10.1007/s00330-023-10543-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Revised: 11/12/2023] [Accepted: 12/08/2023] [Indexed: 12/29/2023]
Abstract
OBJECTIVES This study aimed to evaluate the performance of artificial intelligence (AI) software in bone age (BA) assessment according to the Greulich and Pyle (G&P) method in a German pediatric cohort. MATERIALS AND METHODS Hand radiographs of 306 pediatric patients aged 1-18 years (153 boys, 153 girls, 18 patients per year of life), including a subgroup of patients in the age group for which the software is declared (243 patients), were analyzed retrospectively. Two pediatric radiologists and one endocrinologist made independent blinded BA reads. Subsequently, the AI software estimated BA from the same images. Agreement, accuracy, and interchangeability between the AI and expert readers were assessed. RESULTS The mean difference between the average of the three expert readers and the AI software was 0.39 months, with a mean absolute difference (MAD) of 6.8 months (1.73 months for the mean difference and 6.0 months for MAD in the intended-use subgroup). Performance in boys was slightly worse than in girls (MAD 6.3 months vs. 5.6 months). Regression analyses showed constant bias (slope of 1.01 with a 95% CI of 0.99-1.02). The estimated equivalence index for interchangeability was -14.3 (95% CI -27.6 to -1.1). CONCLUSION In terms of BA assessment, the new AI software was interchangeable with expert readers using the G&P method. CLINICAL RELEVANCE STATEMENT The use of AI software enables every physician to provide expert-reader quality in bone age assessment. KEY POINTS • A novel artificial intelligence-based software for bone age estimation had not yet been clinically validated. • Artificial intelligence showed good agreement and high accuracy compared with expert radiologists performing bone age assessment. • Artificial intelligence was shown to be interchangeable with expert readers.
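The bias (mean difference) and MAD statistics reported above, together with Bland-Altman-style 95% limits of agreement, can be sketched as follows; the reads below are hypothetical, in months:

```python
import numpy as np

def agreement_stats(reader, ai):
    """Mean difference (bias), mean absolute difference (MAD), and 95%
    Bland-Altman limits of agreement between two bone-age reads (months)."""
    reader = np.asarray(reader, float)
    ai = np.asarray(ai, float)
    d = ai - reader
    bias = d.mean()                      # systematic over/under-estimation
    mad = np.abs(d).mean()               # typical magnitude of disagreement
    sd = d.std(ddof=1)                   # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, mad, loa

reader = np.array([60.0, 84.0, 120.0, 150.0])  # hypothetical expert reads
ai     = np.array([58.0, 90.0, 118.0, 152.0])  # hypothetical AI reads
bias, mad, loa = agreement_stats(reader, ai)
```

Note that bias can be near zero while MAD stays large, which is exactly the pattern in the study (0.39 months bias vs. 6.8 months MAD): positive and negative errors cancel in the mean difference but not in the absolute one.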
Affiliation(s)
- Daniel Gräfe: Department of Pediatric Radiology, University Hospital, Leipzig, Germany
- Roland Pfäffle: Department of Pediatrics, University Hospital, Leipzig, Germany
17
La Rosa S, Quinzi V, Palazzo G, Ronsivalle V, Lo Giudice A. The Implications of Artificial Intelligence in Pedodontics: A Scoping Review of Evidence-Based Literature. Healthcare (Basel) 2024; 12:1311. [PMID: 38998846 PMCID: PMC11240988 DOI: 10.3390/healthcare12131311] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2024] [Revised: 06/19/2024] [Accepted: 06/29/2024] [Indexed: 07/14/2024] Open
Abstract
BACKGROUND Artificial intelligence (AI) has emerged as a revolutionary technology with several applications across different dental fields, including pedodontics. This scoping review aims to catalog and explore the various uses of artificial intelligence in pediatric dentistry. METHODS A thorough exploration of scientific databases was carried out to identify studies addressing the use of AI in pediatric dentistry up to December 2023 in the Embase, Scopus, PubMed, and Web of Science databases by two researchers, S.L.R. and A.L.G. RESULTS From a pool of 1301 articles, only 64 met the predefined criteria and were included in this review. From the data retrieved, it was possible to provide a narrative discussion of the potential implications of AI in the specialized area of pediatric dentistry. The use of AI algorithms and machine learning techniques has shown promising results in several applications of daily pediatric dental practice, including: (1) assisting the diagnosis and recognition of early signs of dental pathologies, (2) enhancing orthodontic diagnosis by automating cephalometric tracing and estimating growth and development, and (3) assisting and educating children in developing appropriate dental hygiene behavior. CONCLUSION AI holds significant potential for transforming clinical practice, improving patient outcomes, and elevating the standards of care for pediatric patients. Future directions may involve developing cloud-based platforms for data integration and sharing, leveraging large datasets for improved predictive results, and expanding AI applications for the pediatric population.
Affiliation(s)
- Salvatore La Rosa, Giuseppe Palazzo, Antonino Lo Giudice: Section of Orthodontics, Department of Medical-Surgical Specialties, School of Dentistry, University of Catania, Via Santa Sofia 78, 95123 Catania, Italy
- Vincenzo Quinzi: Department of Life, Health & Environmental Sciences, Postgraduate School of Orthodontics, University of L’Aquila, 67100 L’Aquila, Italy
- Vincenzo Ronsivalle: Section of Oral Surgery, Department of General Surgery and Medical-Surgical Specialties, School of Dentistry, Policlinico Universitario “Gaspare Rodolico—San Marco”, University of Catania, Via Santa Sofia 78, 95123 Catania, Italy
18
Santomartino SM, Putman K, Beheshtian E, Parekh VS, Yi PH. Evaluating the Robustness of a Deep Learning Bone Age Algorithm to Clinical Image Variation Using Computational Stress Testing. Radiol Artif Intell 2024; 6:e230240. [PMID: 38477660 PMCID: PMC11140516 DOI: 10.1148/ryai.230240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Revised: 02/08/2024] [Accepted: 02/26/2024] [Indexed: 03/14/2024]
Abstract
Purpose To evaluate the robustness of an award-winning bone age deep learning (DL) model to extensive variations in image appearance. Materials and Methods In December 2021, the DL bone age model that won the 2017 RSNA Pediatric Bone Age Challenge was retrospectively evaluated using the RSNA validation set (1425 pediatric hand radiographs; internal test set in this study) and the Digital Hand Atlas (DHA) (1202 pediatric hand radiographs; external test set). Each test image underwent seven types of transformations (rotations, flips, brightness, contrast, inversion, laterality marker, and resolution) to represent a range of image appearances, many of which simulate real-world variations. Computational "stress tests" were performed by comparing the model's predictions on baseline and transformed images. Mean absolute differences (MADs) of predicted bone ages compared with radiologist-determined ground truth on baseline versus transformed images were compared using Wilcoxon signed rank tests. The proportion of clinically significant errors (CSEs) was compared using McNemar tests. Results There was no evidence of a difference in MAD of the model on the two baseline test sets (RSNA = 6.8 months, DHA = 6.9 months; P = .05), indicating good model generalization to external data. Except for the RSNA dataset images with an appended radiologic laterality marker (P = .86), there were significant differences in MAD for both the DHA and RSNA datasets among other transformation groups (rotations, flips, brightness, contrast, inversion, and resolution). There were significant differences in proportion of CSEs for 57% of the image transformations (19 of 33) performed on the DHA dataset. Conclusion Although an award-winning pediatric bone age DL model generalized well to curated external images, it had inconsistent predictions on images that had undergone simple transformations reflective of several real-world variations in image appearance. 
Keywords: Pediatrics, Hand, Convolutional Neural Network, Radiography Supplemental material is available for this article. © RSNA, 2024 See also commentary by Faghani and Erickson in this issue.
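The stress-testing procedure above, comparing predictions on baseline versus transformed images, can be sketched generically; the "model" below is a toy stand-in (a scaled mean intensity), not the challenge-winning network, and the transforms are simplified numpy versions of the paper's categories:

```python
import numpy as np

def stress_test(model, images, transforms):
    """Compare a model's predictions on baseline vs. transformed images and
    report the mean absolute prediction shift per transform (in months)."""
    base = np.array([model(im) for im in images])
    report = {}
    for name, transform in transforms.items():
        pred = np.array([model(transform(im)) for im in images])
        report[name] = float(np.mean(np.abs(pred - base)))
    return report

# toy stand-in model: "bone age" as scaled mean intensity (illustrative only)
model = lambda im: 120.0 * im.mean()

rng = np.random.default_rng(1)
images = [rng.random((32, 32)) for _ in range(8)]   # fake radiographs in [0, 1)
transforms = {
    "flip_lr": lambda im: im[:, ::-1],              # horizontal flip
    "invert": lambda im: 1.0 - im,                  # intensity inversion
    "brightness": lambda im: np.clip(im + 0.1, 0.0, 1.0),
}
report = stress_test(model, images, transforms)
```

Even this toy setup shows the paper's qualitative point: some transforms (here, flipping) leave the prediction untouched while others (inversion, brightness) shift it, and a robust clinical model should keep those shifts below a clinically significant error.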
Affiliation(s)
- Samantha M Santomartino, Kristin Putman, Elham Beheshtian, Vishwa S Parekh, Paul H Yi: From the Drexel University College of Medicine, Philadelphia, Pa (S.M.S.); University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 670 W Baltimore St, 1st Fl, Room 1172, Baltimore, MD 21201 (S.M.S., K.P., E.B., V.S.P., P.H.Y.); and Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Md (P.H.Y.)
19
Fan F, Liu H, Dai X, Liu G, Liu J, Deng X, Peng Z, Wang C, Zhang K, Chen H, Yin C, Zhan M, Deng Z. Automated bone age assessment from knee joint by integrating deep learning and MRI-based radiomics. Int J Legal Med 2024; 138:927-938. [PMID: 38129687] [DOI: 10.1007/s00414-023-03148-1] [Received: 07/30/2023] [Accepted: 12/09/2023] [Indexed: 12/23/2023]
Abstract
Bone age assessment (BAA) is a crucial task in clinical, forensic, and athletic fields. Because traditional age estimation methods carry potential radiation exposure, this study aimed to develop and evaluate a deep learning radiomics method based on multiparametric knee MRI for noninvasive and automatic BAA. This retrospective study enrolled 598 patients (age range, 10.00-29.99 years) who underwent MR examinations of the knee joint (T1/T2*/PD-weighted imaging). Three-dimensional convolutional neural networks (3D CNNs) were trained to extract and fuse multimodal and multiscale MRI radiomic features for age estimation and were compared with traditional machine learning models based on hand-crafted features. The age estimation error was greater in individuals aged 25-30 years; thus, this method may not be suitable for individuals over 25 years old. In the test set aged 10-25 years (n = 95), the 3D CNN (a fusion of T1WI, T2*WI, and PDWI) achieved the lowest mean absolute error, 1.32 ± 1.01 years, lower than that of the single-modality models and the hand-crafted models. In classification at the 12-, 14-, 16-, and 18-year thresholds, accuracies and areas under the ROC curves were all above 0.91 and 0.96, respectively, similar to manual methods. Visualization of important features showed that the 3D CNN estimated age by focusing on the epiphyseal plates. The deep learning radiomics method enables noninvasive and automated BAA from multimodal knee MR images, and the use of 3D CNNs with MRI-based radiomics has the potential to assist radiologists or medicolegal experts in age estimation.
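The headline metrics in the abstract above, mean absolute error in years and classification accuracy at medicolegally relevant age thresholds, are straightforward to compute once predicted and true ages are available. A minimal illustrative sketch follows; the ages are invented for demonstration, not data from the study.

```python
def mae_years(true_ages, pred_ages):
    """Mean absolute error between true and predicted ages, in years."""
    return sum(abs(t - p) for t, p in zip(true_ages, pred_ages)) / len(true_ages)

def threshold_accuracy(true_ages, pred_ages, threshold):
    """Accuracy of classifying subjects as at or above an age threshold."""
    hits = sum((t >= threshold) == (p >= threshold)
               for t, p in zip(true_ages, pred_ages))
    return hits / len(true_ages)

# Illustrative values only (not from the study).
true_ages = [11.2, 13.8, 15.5, 17.9, 21.0]
pred_ages = [12.0, 13.1, 16.2, 17.5, 19.8]

print(round(mae_years(true_ages, pred_ages), 2))  # 0.76
for t in (12, 14, 16, 18):
    print(t, threshold_accuracy(true_ages, pred_ages, t))
```

Note that threshold accuracy compares the binarized decisions, not the raw ages, which is why a model can have a nonzero MAE yet perfect accuracy at a given threshold.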
Affiliation(s)
- Fei Fan
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Han Liu
- College of Computer Science, Sichuan University, Chengdu, 610064, People's Republic of China
- Xinhua Dai
- Department of Laboratory Medicine, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China
- Guangfeng Liu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Junhong Liu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Xiaodong Deng
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Zhao Peng
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China
- Chang Wang
- Department of Radiology, Anhui Provincial Children's Hospital, Hefei, 230054, People's Republic of China
- Kui Zhang
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu, 610064, People's Republic of China
- Chuangao Yin
- Department of Radiology, Anhui Provincial Children's Hospital, Hefei, 230054, People's Republic of China
- Mengjun Zhan
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Zhenhua Deng
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
20
Chen W, Lim LJR, Lim RQR, Yi Z, Huang J, He J, Yang G, Liu B. Artificial intelligence powered advancements in upper extremity joint MRI: A review. Heliyon 2024; 10:e28731. [PMID: 38596104] [PMCID: PMC11002577] [DOI: 10.1016/j.heliyon.2024.e28731] [Received: 01/05/2024] [Revised: 03/21/2024] [Accepted: 03/22/2024] [Indexed: 04/11/2024] Open
Abstract
Magnetic resonance imaging (MRI) is an indispensable medical imaging technique in musculoskeletal medicine. Modern MRI achieves high-quality multiplanar imaging of soft tissue and skeletal pathologies without the harmful effects of ionizing radiation. Current limitations of MRI include long acquisition times, artifacts, and noise. In addition, it is often challenging to distinguish abutting or closely applied soft tissue structures with similar signal characteristics. In the past decade, artificial intelligence (AI) has been widely employed in musculoskeletal MRI to reduce image acquisition time and improve image quality. Apart from reducing medical costs, AI can assist clinicians in diagnosing diseases more accurately, which helps formulate appropriate treatment plans and ultimately improves patient care. This review summarizes current research on and applications of AI in musculoskeletal MRI, particularly advances in deep learning (DL) for identifying the structures and lesions of upper extremity joints in MRI images.
Affiliation(s)
- Wei Chen
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Lincoln Jian Rong Lim
- Department of Medical Imaging, Western Health, Footscray Hospital, Victoria, Australia
- Department of Surgery, The University of Melbourne, Victoria, Australia
- Rebecca Qian Ru Lim
- Department of Hand & Reconstructive Microsurgery, Singapore General Hospital, Singapore
- Zhe Yi
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Jiaxing Huang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jia He
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Ge Yang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Bo Liu
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
21
Bajjad AA, Gupta S, Agarwal S, Pawar RA, Kothawade MU, Singh G. Use of artificial intelligence in determination of bone age of the healthy individuals: A scoping review. J World Fed Orthod 2024; 13:95-102. [PMID: 37968159] [DOI: 10.1016/j.ejwf.2023.10.001] [Received: 08/24/2023] [Revised: 09/25/2023] [Accepted: 10/10/2023] [Indexed: 11/17/2023]
Abstract
BACKGROUND Bone age assessment, as an indicator of biological age, is widely used in orthodontics and pediatric endocrinology. Owing to significant inter-subject variation in the manual method of assessment, artificial intelligence (AI), machine learning (ML), and deep learning (DL) play a significant role in this area. A scoping review was conducted to search the existing literature on the role of AI, ML, and DL in skeletal age or bone age assessment in healthy individuals. METHODS A literature search was conducted in PubMed, Scopus, and Web of Science from January 2012 to December 2022 following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) and Joanna Briggs Institute guidelines. Grey literature was searched using Google Scholar and OpenGrey. Reputed orthodontic journals and the references of the included articles were also hand-searched for relevant articles. RESULTS Nineteen articles fulfilled the inclusion criteria. Ten studies used skeletal maturity indicators based on hand and wrist radiographs, two used magnetic resonance imaging, and seven used cervical vertebrae maturity indicators based on lateral cephalograms to assess skeletal age. Most of these studies were published in non-orthodontic medical journals. The BoneXpert automated software was most commonly used, followed by DL models and ML models. The automated method was found to be as reliable as the manual method. CONCLUSIONS This scoping review validated the use of AI, ML, or DL in bone age assessment of individuals. A more uniform distribution of sufficient samples across the stages of maturation, and the use of three-dimensional inputs such as magnetic resonance imaging and cone beam computed tomography, are required for better training of the models so that the outputs generalize to the target population.
Affiliation(s)
- Adeel Ahmed Bajjad
- Department of Orthodontics, Kothiwal Dental College and Research Centre, Moradabad, India
- Seema Gupta
- Department of Orthodontics, Kothiwal Dental College and Research Centre, Moradabad, India
- Soumitra Agarwal
- Department of Orthodontics, Kothiwal Dental College and Research Centre, Moradabad, India
- Rakesh A Pawar
- Department of Orthodontics, JMF ACPM Dental College, Dhule, India
- Gul Singh
- Department of Orthodontics, Kothiwal Dental College and Research Centre, Moradabad, India
22
Nam HK, Lea WWI, Yang Z, Noh E, Rhie YJ, Lee KH, Hong SJ. Clinical validation of a deep-learning-based bone age software in healthy Korean children. Ann Pediatr Endocrinol Metab 2024; 29:102-108. [PMID: 38271993] [PMCID: PMC11076234] [DOI: 10.6065/apem.2346050.025] [Received: 02/23/2023] [Revised: 04/19/2023] [Accepted: 04/28/2023] [Indexed: 01/27/2024] Open
Abstract
PURPOSE Bone age (BA) is needed to assess developmental status and growth disorders. We evaluated the clinical performance of deep-learning-based BA software against the chronological age (CA) of healthy Korean children. METHODS This retrospective study included 371 healthy children (217 boys, 154 girls), aged 4 to 17 years, who visited the Department of Pediatrics for health check-ups between January 2017 and December 2018. A total of 553 left-hand radiographs from the 371 children were evaluated using commercial deep-learning-based BA software (BoneAge, Vuno, Seoul, Korea). The clinical performance of the deep learning (DL) software was determined using the concordance rate and Bland-Altman analysis via comparison with the CA. RESULTS A 2-sample t-test (P<0.001) and Fisher exact test (P=0.011) showed a significant difference between the CA and the BA estimated by the DL software. The two variables correlated well (r=0.96, P<0.001); however, the root mean square error was 15.4 months. With a 12-month cutoff, the concordance rate was 58.8%. The Bland-Altman plot showed that the DL software tended to underestimate the BA compared with the CA, especially in children under 8.3 years of age. CONCLUSION The DL-based BA software showed a low concordance rate and a tendency to underestimate the BA in healthy Korean children.
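The two agreement measures this validation relies on, the concordance rate under a 12-month cutoff and a Bland-Altman analysis of BA minus CA differences, can be sketched in a few lines. The ages below are invented for illustration, not the study's data.

```python
from statistics import mean, stdev

def bland_altman(ca_months, ba_months):
    """Mean bias and 95% limits of agreement of BA-minus-CA differences."""
    diffs = [b - c for b, c in zip(ba_months, ca_months)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def concordance_rate(ca_months, ba_months, cutoff_months=12.0):
    """Fraction of BA estimates within the cutoff of chronological age."""
    agree = sum(abs(b - c) <= cutoff_months
                for b, c in zip(ba_months, ca_months))
    return agree / len(ca_months)

# Invented example: chronological vs. estimated bone age, in months.
ca = [100, 120, 140, 160]
ba = [95, 118, 150, 150]
bias, (lo, hi) = bland_altman(ca, ba)
print(round(bias, 2), round(lo, 2), round(hi, 2))
print(concordance_rate(ca, ba))  # 1.0
```

A negative bias, as reported in the study, indicates systematic underestimation of BA relative to CA; the limits of agreement bound where roughly 95% of individual differences are expected to fall.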
Affiliation(s)
- Hyo-Kyoung Nam
- Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Winnah Wu-In Lea
- Department of Radiology, Korea University College of Medicine, Seoul, Korea
- Zepa Yang
- Smart Health Care Center, Korea University Guro Hospital, Seoul, Korea
- Korea University Guro Hospital-Medical Image Data Center (KUGH-MIDC), Seoul, Korea
- Eunjin Noh
- Smart Health Care Center, Korea University Guro Hospital, Seoul, Korea
- Young-Jun Rhie
- Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Kee-Hyoung Lee
- Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Suk-Joo Hong
- Department of Radiology, Korea University College of Medicine, Seoul, Korea
- Korea University Guro Hospital-Medical Image Data Center (KUGH-MIDC), Seoul, Korea
23
Dimitri P, Savage MO. Artificial intelligence in paediatric endocrinology: conflict or cooperation. J Pediatr Endocrinol Metab 2024; 37:209-221. [PMID: 38183676] [DOI: 10.1515/jpem-2023-0554] [Received: 12/17/2023] [Accepted: 12/18/2023] [Indexed: 01/08/2024]
Abstract
Artificial intelligence (AI) in medicine is transforming healthcare by automating system tasks, assisting in diagnostics, predicting patient outcomes and personalising patient care, founded on the ability to analyse vast datasets. In paediatric endocrinology, AI has been developed for diabetes, for insulin dose adjustment, detection of hypoglycaemia and retinopathy screening; bone age assessment and thyroid nodule screening; the identification of growth disorders; the diagnosis of precocious puberty; and the use of facial recognition algorithms in conditions such as Cushing syndrome, acromegaly, congenital adrenal hyperplasia and Turner syndrome. AI can also predict those most at risk from childhood obesity by stratifying future interventions to modify lifestyle. AI will facilitate personalised healthcare by integrating data from 'omics' analysis, lifestyle tracking, medical history, laboratory and imaging, therapy response and treatment adherence from multiple sources. As data acquisition and processing becomes fundamental, data privacy and protecting children's health data is crucial. Minimising algorithmic bias generated by AI analysis for rare conditions seen in paediatric endocrinology is an important determinant of AI validity in clinical practice. AI cannot create the patient-doctor relationship or assess the wider holistic determinants of care. Children have individual needs and vulnerabilities and are considered in the context of family relationships and dynamics. Importantly, whilst AI provides value through augmenting efficiency and accuracy, it must not be used to replace clinical skills.
Affiliation(s)
- Paul Dimitri
- Department of Paediatric Endocrinology, Sheffield Children's NHS Foundation Trust, Sheffield, UK
- Martin O Savage
- Centre for Endocrinology, William Harvey Research Institute, Barts and the London School of Medicine & Dentistry, Queen Mary University of London, London, UK
24
Chapke R, Mondkar S, Oza C, Khadilkar V, Aeppli TRJ, Sävendahl L, Kajale N, Ladkat D, Khadilkar A, Goel P. The automated Greulich and Pyle: a coming-of-age for segmental methods? Front Artif Intell 2024; 7:1326488. [PMID: 38533467] [PMCID: PMC10963464] [DOI: 10.3389/frai.2024.1326488] [Received: 10/27/2023] [Accepted: 02/14/2024] [Indexed: 03/28/2024] Open
Abstract
The well-known Greulich and Pyle (GP) method of bone age assessment (BAA) relies on comparing a hand X-ray against templates of discrete maturity classes collected in an atlas. Automated methods have recently shown great success with BAA, especially using deep learning. In this perspective, we first review the success and limitations of various automated BAA methods. We then offer a novel hypothesis: When networks predict bone age that is not aligned with a GP reference class, it is not simply statistical error (although there is that as well); they are picking up nuances in the hand X-ray that lie "outside that class." In other words, trained networks predict distributions around classes. This raises a natural question: How can we further understand the reasons for a prediction to deviate from the nominal class age? We claim that segmental aging, that is, ratings based on characteristic bone groups can be used to qualify predictions. This so-called segmental GP method has excellent properties: It can not only help identify differential maturity in the hand but also provide a systematic way to extend the use of the current GP atlas to various other populations.
Affiliation(s)
- Rashmi Chapke
- Department of Biology, Indian Institute of Science Education and Research Pune, Pune, India
- Shruti Mondkar
- Hirabai Cowasji Jehangir Medical Research Institute, Pune, India
- Chirantap Oza
- Hirabai Cowasji Jehangir Medical Research Institute, Pune, India
- Vaman Khadilkar
- Hirabai Cowasji Jehangir Medical Research Institute, Pune, India
- Department of Health Sciences, Savitribai Phule Pune University, Pune, India
- Jehangir Hospital, Pune, India
- Tim R. J. Aeppli
- Division of Pediatric Endocrinology, Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Lars Sävendahl
- Division of Pediatric Endocrinology, Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Neha Kajale
- Hirabai Cowasji Jehangir Medical Research Institute, Pune, India
- Department of Health Sciences, Savitribai Phule Pune University, Pune, India
- Dipali Ladkat
- Hirabai Cowasji Jehangir Medical Research Institute, Pune, India
- Anuradha Khadilkar
- Hirabai Cowasji Jehangir Medical Research Institute, Pune, India
- Department of Health Sciences, Savitribai Phule Pune University, Pune, India
- Pranay Goel
- Department of Biology, Indian Institute of Science Education and Research Pune, Pune, India
25
Qiu L, Liu A, Dai X, Liu G, Peng Z, Zhan M, Liu J, Gui Y, Zhu H, Chen H, Deng Z, Fan F. Machine learning and deep learning enabled age estimation on medial clavicle CT images. Int J Legal Med 2024; 138:487-498. [PMID: 37940721] [DOI: 10.1007/s00414-023-03115-w] [Received: 07/30/2023] [Accepted: 10/29/2023] [Indexed: 11/10/2023]
Abstract
The medial clavicle epiphysis (MCE) is a crucial indicator for bone age estimation (BAE) after hand maturation. This study aimed to develop machine learning (ML) and deep learning (DL) models for BAE based on medial clavicle CT images and to evaluate their performance on normal and variant clavicles. This retrospective study collected 1049 patients (mean ± SD age: 22.50 ± 4.34 years) and split them into normal training and test sets, and variant training and test sets. An additional 53 variant clavicles were incorporated into the variant test set. The development stages of the normal MCE were used to build a linear model and a support vector machine (SVM) for BAE. The CT slices of the MCE were automatically segmented and used to train DL models for automated BAE. Comparisons were performed between the linear, ML, and DL models, and between normal and variant clavicles. Mean absolute error (MAE) and classification accuracy were the primary metrics of comparison. For BAE, the SVM had the best MAE of 1.73 years, followed by the commonly used CNNs (1.77-1.93 years), the linear model (1.94 years), and the hybrid neural network CoAtNet (2.01 years). Among the DL models, SE-Net 18 performed best, with results similar to the SVM in the normal test set and an MAE of 2.08 years in the external variant test set. For age classification, all the models exhibited superior performance at the 18-, 20-, 21-, and 22-year thresholds, with limited value at the 16-year threshold. Both ML and DL models produce desirable performance in BAE based on medial clavicle CT.
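The stage-based linear model mentioned above is, at its core, a least-squares fit from a clavicle maturation stage to age. A minimal sketch of such a fit follows; the stage/age pairs and resulting coefficients are hypothetical illustrations, not the study's data or model.

```python
def fit_linear(stages, ages):
    """Ordinary least-squares fit of age = a * stage + b."""
    n = len(stages)
    mean_x = sum(stages) / n
    mean_y = sum(ages) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(stages, ages))
    sxx = sum((x - mean_x) ** 2 for x in stages)
    a = sxy / sxx          # slope: years of age per maturation stage
    b = mean_y - a * mean_x  # intercept
    return a, b

# Hypothetical stage/age pairs, for illustration only.
stages = [1, 2, 3, 4]
ages = [16.0, 18.0, 20.0, 22.0]
a, b = fit_linear(stages, ages)
print(a, b)  # 2.0 14.0
predicted_age = a * 3 + b  # age predicted for an individual at stage 3
```

The SVM and CNN models in the study replace this single hand-crafted feature with richer representations, but they are evaluated with the same MAE and threshold-accuracy metrics.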
Affiliation(s)
- Lirong Qiu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Anjie Liu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- University of Electronic Science and Technology of China, Chengdu, 611731, People's Republic of China
- Xinhua Dai
- Department of Laboratory Medicine, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China
- Guangfeng Liu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Zhao Peng
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China
- Mengjun Zhan
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Junhong Liu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Yufan Gui
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Haozhe Zhu
- College of Computer Science, Sichuan University, Chengdu, 610064, People's Republic of China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu, 610064, People's Republic of China
- Zhenhua Deng
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Fei Fan
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
26
Alam MK, Alftaikhah SAA, Issrani R, Ronsivalle V, Lo Giudice A, Cicciù M, Minervini G. Applications of artificial intelligence in the utilisation of imaging modalities in dentistry: A systematic review and meta-analysis of in-vitro studies. Heliyon 2024; 10:e24221. [PMID: 38317889] [PMCID: PMC10838702] [DOI: 10.1016/j.heliyon.2024.e24221] [Received: 09/30/2023] [Revised: 01/02/2024] [Accepted: 01/04/2024] [Indexed: 02/07/2024] Open
Abstract
Background In the past, dentistry heavily relied on manual image analysis and diagnostic procedures, which could be time-consuming and prone to human error. The advent of artificial intelligence (AI) has brought transformative potential to the field, promising enhanced accuracy and efficiency in various dental imaging tasks. This systematic review and meta-analysis aimed to comprehensively evaluate the applications of AI in dental imaging modalities, focusing on in-vitro studies. Methods A systematic literature search was conducted in accordance with the PRISMA guidelines. The following databases were systematically searched: PubMed/MEDLINE, Embase, Web of Science, Scopus, IEEE Xplore, Cochrane Library, CINAHL (Cumulative Index to Nursing and Allied Health Literature), and Google Scholar. The meta-analysis employed fixed-effects models to assess AI accuracy, calculating odds ratios (OR) for true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals (CI). Heterogeneity and overall effect tests were applied to ensure the reliability of the findings. Results Nine studies were selected, encompassing objectives such as tooth segmentation and classification, caries detection, maxillofacial bone segmentation, and 3D surface model creation. AI techniques included convolutional neural networks (CNNs), deep learning algorithms, and AI-driven tools. The imaging parameters assessed were specific to the respective dental tasks. The analysis of combined ORs indicated higher odds of accurate dental image assessments, highlighting the potential for AI to improve TPR, TNR, PPV, and NPV. The studies collectively revealed a statistically significant overall effect in favor of AI in dental imaging applications. Conclusion In summary, this systematic review and meta-analysis underscore the transformative impact of AI on dental imaging. AI has the potential to revolutionize the field by enhancing accuracy, efficiency, and time savings in various dental tasks. While further research in clinical settings is needed to validate these findings and address study limitations, integrating AI into dental practice holds great promise for advancing patient care and the field of dentistry.
Affiliation(s)
- Mohammad Khursheed Alam
- Preventive Dentistry Department, College of Dentistry, Jouf University, Sakaka, 72345, Saudi Arabia
- Department of Dental Research Cell, Saveetha Institute of Medical and Technical Sciences, Saveetha Dental College and Hospitals, Chennai, 600077, India
- Department of Public Health, Faculty of Allied Health Sciences, Daffodil International University, Dhaka, 1207, Bangladesh
- Rakhi Issrani
- Preventive Dentistry Department, College of Dentistry, Jouf University, Sakaka, 72345, Saudi Arabia
- Vincenzo Ronsivalle
- Department of Biomedical and Surgical and Biomedical Sciences, Catania University, 95123, Catania, Italy
- Antonino Lo Giudice
- Department of Biomedical and Surgical and Biomedical Sciences, Catania University, 95123, Catania, Italy
- Marco Cicciù
- Department of Biomedical and Surgical and Biomedical Sciences, Catania University, 95123, Catania, Italy
- Giuseppe Minervini
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania “Luigi Vanvitelli”, 80121, Naples, Italy
- Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Science (SIMATS), Saveetha University, Chennai, Tamil Nadu, India
27
Liu Q, Wang H, Wangjiu C, Awang T, Yang M, Qiongda P, Yang X, Pan H, Wang F. An artificial intelligence-based bone age assessment model for Han and Tibetan children. Front Physiol 2024; 15:1329145. [PMID: 38426209] [PMCID: PMC10902452] [DOI: 10.3389/fphys.2024.1329145] [Received: 11/04/2023] [Accepted: 02/02/2024] [Indexed: 03/02/2024] Open
Abstract
Background: Manual bone age assessment (BAA) is associated with longer interpretation time and higher cost and variability, posing challenges in areas with restricted medical facilities, such as the high-altitude Tibetan Plateau. Applying artificial intelligence (AI) to automate BAA could help resolve this issue. This study aimed to develop an AI-based BAA model for Han and Tibetan children. Methods: A model named "EVG-BANet" was trained using three datasets: the Radiological Society of North America (RSNA) dataset (training set n = 12611, validation set n = 1425, and test set n = 200), the Radiological Hand Pose Estimation (RHPE) dataset (training set n = 5491, validation set n = 713, and test set n = 79), and a self-established local dataset [training set n = 825 and test set n = 351 (Han n = 216 and Tibetan n = 135)]. An open-access state-of-the-art model, BoNet, was used for comparison. The accuracy and generalizability of the two models were evaluated using the three test sets above and an external test set (n = 256, all Tibetan). Mean absolute difference (MAD) and accuracy within 1 year were used as indicators. Bias was evaluated by comparing the MAD between demographic groups. Results: EVG-BANet outperformed BoNet in MAD on the RHPE test set (0.52 vs. 0.63 years, p < 0.001), the local test set (0.47 vs. 0.62 years, p < 0.001), and the external test set (0.53 vs. 0.66 years, p < 0.001), and exhibited a comparable MAD on the RSNA test set (0.34 vs. 0.35 years, p = 0.934). EVG-BANet achieved accuracy within 1 year of 97.7% on the local test set (BoNet 90%, p < 0.001) and 89.5% on the external test set (BoNet 85.5%, p = 0.066). EVG-BANet showed no bias in the local test set but exhibited a bias related to chronological age in the external test set. Conclusion: EVG-BANet can accurately predict the bone age (BA) of both Han children and Tibetan children living in the Tibetan Plateau with limited healthcare facilities.
Affiliation(s)
- Qixing Liu
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huogen Wang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Cidan Wangjiu
- Department of Radiology, Tibet Autonomous Region People’s Hospital, Lhasa, China
- Tudan Awang
- Department of Radiology, People’s Hospital of Nyima County, Nagqu, China
- Meijie Yang
- Department of Radiology, People’s Hospital of Nyima County, Nagqu, China
- Puqiong Qiongda
- Department of Radiology, People’s Hospital of Nagqu, Nagqu, China
- Xiao Yang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hui Pan
- Department of Endocrinology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Fengdan Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
28
Fink A, Tran H, Reisert M, Rau A, Bayer J, Kotter E, Bamberg F, Russe MF. A deep learning approach for projection and body-side classification in musculoskeletal radiographs. Eur Radiol Exp 2024; 8:23. [PMID: 38353812] [PMCID: PMC10866807] [DOI: 10.1186/s41747-023-00417-x] [Received: 09/18/2023] [Accepted: 11/29/2023] [Indexed: 02/16/2024] Open
Abstract
BACKGROUND The growing prevalence of musculoskeletal diseases increases radiologic workload, highlighting the need for optimized workflow management and automated metadata classification systems. We developed a large-scale, well-characterized dataset of musculoskeletal radiographs and trained deep learning neural networks to classify radiographic projection and body side. METHODS In this IRB-approved retrospective single-center study, a dataset of musculoskeletal radiographs from 2011 to 2019 was retrieved and manually labeled for one of 45 possible radiographic projections and the depicted body side. Two classification networks were trained for the respective tasks using the Xception architecture with a custom network top and pretrained weights. Performance was evaluated on a hold-out test sample, and gradient-weighted class activation mapping (Grad-CAM) heatmaps were computed to visualize the influential image regions for network predictions. RESULTS A total of 13,098 studies comprising 23,663 radiographs were included with a patient-level dataset split, resulting in 19,183 training, 2,145 validation, and 2,335 test images. Focusing on paired body regions, training for side detection included 16,319 radiographs (13,284 training, 1,443 validation, and 1,592 test images). The models achieved an overall accuracy of 0.975 for projection and 0.976 for body-side classification on the respective hold-out test sample. Errors were primarily observed in projections with seamless anatomical transitions or non-orthograde adjustment techniques. CONCLUSIONS The deep learning neural networks demonstrated excellent performance in classifying radiographic projection and body side across a wide range of musculoskeletal radiographs. These networks have the potential to serve as presorting algorithms, optimizing radiologic workflow and enhancing patient care. 
RELEVANCE STATEMENT The developed networks excel at classifying musculoskeletal radiographs, providing valuable tools for research data extraction, standardized image sorting, and minimizing misclassifications in artificial intelligence systems, ultimately enhancing radiology workflow efficiency and patient care.
KEY POINTS
• A large-scale, well-characterized dataset was developed, covering a broad spectrum of musculoskeletal radiographs.
• Deep learning neural networks achieved high accuracy in classifying radiographic projection and body side.
• Grad-CAM heatmaps provided insight into network decisions, contributing to their interpretability and trustworthiness.
• The trained models can help optimize radiologic workflow and manage large amounts of data.
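The patient-level dataset split described above (splitting by patient rather than by image, so that one patient's radiographs never straddle the train/validation/test boundary) can be sketched in a few lines of plain Python. This is a minimal illustration, not the authors' code; the function name, fractions, and record layout are assumptions.

```python
import random
from collections import defaultdict

def patient_level_split(records, val_frac=0.10, test_frac=0.10, seed=42):
    """Split (patient_id, image) records by PATIENT, not by image, so that
    no patient's radiographs appear in more than one subset."""
    by_patient = defaultdict(list)
    for patient_id, image in records:
        by_patient[patient_id].append(image)

    patients = sorted(by_patient)                 # deterministic order, then shuffle
    random.Random(seed).shuffle(patients)

    n_test = int(len(patients) * test_frac)
    n_val = int(len(patients) * val_frac)
    test_ids = set(patients[:n_test])
    val_ids = set(patients[n_test:n_test + n_val])

    split = {"train": [], "val": [], "test": []}
    for pid, images in by_patient.items():
        subset = "test" if pid in test_ids else ("val" if pid in val_ids else "train")
        split[subset].extend(images)              # a patient's images stay together
    return split
```

Shuffling patient IDs instead of image paths is what prevents the leakage that would otherwise inflate the reported test accuracy.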
Affiliation(s)
- Anna Fink
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106, Freiburg, Germany.
- Hien Tran
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106, Freiburg, Germany
- Marco Reisert
- Department of Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Medical Physics, Department of Diagnostic and Interventional Radiology, Medical Center, University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Alexander Rau
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106, Freiburg, Germany
- Department of Neuroradiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jörg Bayer
- Department of Trauma and Orthopaedic Surgery, Schwarzwald-Baar Hospital, Villingen-Schwenningen, Germany
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106, Freiburg, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106, Freiburg, Germany
- Maximilian F Russe
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106, Freiburg, Germany
29
Ma Y, Pan I, Kim SY, Wieschhoff GG, Andriole KP, Mandell JC. Deep learning discrimination of rheumatoid arthritis from osteoarthritis on hand radiography. Skeletal Radiol 2024; 53:377-383. [PMID: 37530866] [DOI: 10.1007/s00256-023-04408-2]
Abstract
PURPOSE To develop a deep learning model to distinguish rheumatoid arthritis (RA) from osteoarthritis (OA) using hand radiographs and to evaluate the effects of changing pretraining and training parameters on model performance. MATERIALS AND METHODS A convolutional neural network was retrospectively trained on 9714 hand radiograph exams from 8387 patients obtained from 2017 to 2021 at seven hospitals within an integrated healthcare network. Performance was assessed using an independent test set of 250 exams from 146 patients. Binary discriminatory capacity (no arthritis versus arthritis; RA versus not RA) and three-way classification (no arthritis versus OA versus RA) were evaluated. The effects of additional pretraining using musculoskeletal radiographs, using all views as opposed to only the posteroanterior view, and varying image resolution on model performance were also investigated. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa coefficient were used to evaluate diagnostic performance. RESULTS For no arthritis versus arthritis, the model achieved an AUC of 0.975 (95% CI: 0.957, 0.989). For RA versus not RA, the model achieved an AUC of 0.955 (95% CI: 0.919, 0.983). For three-way classification, the model achieved a kappa of 0.806 (95% CI: 0.742, 0.866) and accuracy of 87.2% (95% CI: 83.2%, 91.2%) on the test set. Increasing image resolution increased performance up to 1024 × 1024 pixels. Additional pretraining on musculoskeletal radiographs and using all views did not significantly affect performance. CONCLUSION A deep learning model can be used to distinguish no arthritis, OA, and RA on hand radiographs with high performance.
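Cohen's kappa, used above to score the three-way classification, measures agreement between predicted and true labels corrected for the agreement expected by chance. A self-contained sketch (pure Python; illustrative, not the authors' code):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa between two label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(y_true) == len(y_pred) and y_true
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    labels = set(true_counts) | set(pred_counts)
    # chance agreement: product of each class's marginal frequencies
    expected = sum(true_counts[c] * pred_counts[c] for c in labels) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa of 1.0 is perfect agreement and 0.0 is chance-level, which is why it is preferred over raw accuracy for imbalanced multi-class problems like no arthritis / OA / RA.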
Affiliation(s)
- Yuntong Ma
- Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA, 02115, USA.
- Ian Pan
- Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA, 02115, USA
- Stanley Y Kim
- Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA, 02115, USA
- Ged G Wieschhoff
- Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA, 02115, USA
- Katherine P Andriole
- Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA, 02115, USA
- MGH & BWH Center for Clinical Data Science, Suite 1303, 100 Cambridge St, Boston, MA, 02114, USA
- Jacob C Mandell
- Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA, 02115, USA
30
Wang P, Liu Y, Zhou Z. Supraspinatus extraction from MRI based on attention-dense spatial pyramid UNet network. J Orthop Surg Res 2024; 19:60. [PMID: 38216968] [PMCID: PMC10787409] [DOI: 10.1186/s13018-023-04509-7]
Abstract
BACKGROUND With the potential of deep learning in musculoskeletal image interpretation being explored, this paper focuses on the supraspinatus, a common site of rotator cuff tears. It aims to propose and validate a deep learning model to automatically extract the supraspinatus, verifying its superiority through comparison with several classical image segmentation models. METHODS Imaging data were retrospectively collected from 60 patients who underwent inpatient treatment for rotator cuff tears at a hospital between March 2021 and May 2023. A dataset of the supraspinatus from MRI was constructed after collecting, filtering, and manually annotating the images at the pixel level. This paper proposes a novel A-DAsppUnet network that can automatically extract the supraspinatus after training and optimization. Model performance is analyzed with three evaluation metrics: precision, intersection over union, and Dice coefficient. RESULTS The experimental results demonstrate that the precision, intersection over union, and Dice coefficient of the proposed model are 99.20%, 83.38%, and 90.94%, respectively. Furthermore, the proposed model exhibited significant advantages over the compared models. CONCLUSION The designed model accurately extracts the supraspinatus from MRI, and the extraction results are complete and continuous with clear boundaries. The feasibility of using deep learning methods for musculoskeletal extraction and assisting clinical decision-making was verified. This research holds practical significance and application value for using artificial intelligence to assist medical decision-making.
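The intersection-over-union and Dice coefficient used above to score the segmentation are both overlap ratios between the predicted and ground-truth masks. A minimal sketch on flat binary masks (illustrative helper, not the paper's evaluation code):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for two binary masks,
    given as flat sequences of 0/1 pixel labels."""
    inter = sum(p and t for p, t in zip(pred, truth))  # overlapping foreground pixels
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why Dice values, such as the 90.94% reported, always exceed the corresponding IoU, here 83.38%.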
Affiliation(s)
- Peng Wang
- Third Clinical Medical School, Nanjing University of Chinese Medicine, Nanjing, 210023, People's Republic of China
- Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, No. 100 Maigaoqiao Cross Street, Qixia District, Nanjing City, 210028, Jiangsu Province, People's Republic of China
- Yang Liu
- School of Remote Sensing and Geomatics Engineering, Nanjing University of Information Science & Technology, Nanjing, 210044, People's Republic of China
- Zhong Zhou
- Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, No. 100 Maigaoqiao Cross Street, Qixia District, Nanjing City, 210028, Jiangsu Province, People's Republic of China.
31
Zhu Y, Lyu X, Tao X, Wu L, Yin A, Liao F, Hu S, Wang Y, Zhang M, Huang L, Wang J, Zhang C, Gong D, Jiang X, Zhao L, Yu H. A newly developed deep learning-based system for automatic detection and classification of small bowel lesions during double-balloon enteroscopy examination. BMC Gastroenterol 2024; 24:10. [PMID: 38166722] [PMCID: PMC10759410] [DOI: 10.1186/s12876-023-03067-w]
Abstract
BACKGROUND Double-balloon enteroscopy (DBE) is a standard method for diagnosing and treating small bowel disease. However, DBE may yield false-negative results due to oversight or inexperience. We aim to develop a computer-aided diagnostic (CAD) system for the automatic detection and classification of small bowel abnormalities in DBE. DESIGN AND METHODS A total of 5201 images were collected from Renmin Hospital of Wuhan University to construct a detection model for localizing lesions during DBE, and 3021 images were collected to construct a classification model for classifying lesions into four classes: protruding lesion, diverticulum, erosion & ulcer, and angioectasia. The performance of the two models was evaluated using 1318 normal images, 915 abnormal images, and 65 videos from independent patients and then compared with that of 8 endoscopists. The reference standard was expert consensus. RESULTS For the image test set, the detection model achieved a sensitivity of 92% (843/915) and an area under the curve (AUC) of 0.947, and the classification model achieved an accuracy of 86%. For the video test set, the accuracy of the system was significantly better than that of the endoscopists (85% vs. 77 ± 6%, p < 0.01); the system was superior to novices and comparable to experts. CONCLUSIONS We established a real-time CAD system for detecting and classifying small bowel lesions in DBE with favourable performance. ENDOANGEL-DBE has the potential to help endoscopists, especially novices, in clinical practice and may reduce the miss rate of small bowel lesions.
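The sensitivity reported above (92%, i.e. 843 of 915 abnormal images flagged) follows directly from the confusion counts of a binary lesion detector; specificity is computed the same way from the normal images. A minimal sketch (function name and data are illustrative, not from the paper):

```python
def detection_metrics(labels, preds):
    """Sensitivity and specificity for a binary detector.
    labels/preds are sequences of 0 (normal image) / 1 (abnormal image)."""
    tp = sum(l == 1 and p == 1 for l, p in zip(labels, preds))  # lesions found
    fn = sum(l == 1 and p == 0 for l, p in zip(labels, preds))  # lesions missed
    tn = sum(l == 0 and p == 0 for l, p in zip(labels, preds))  # normals cleared
    fp = sum(l == 0 and p == 1 for l, p in zip(labels, preds))  # false alarms
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}
```

Sensitivity is the figure that matters for the stated goal of reducing the miss rate, since it counts only how many true lesions the system catches.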
Affiliation(s)
- Yijie Zhu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiaoguang Lyu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiao Tao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Lianlian Wu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Anning Yin
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Fei Liao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Shan Hu
- School of Computer Science, Wuhan University, Wuhan, China
- Yang Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Mengjiao Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Li Huang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Junxiao Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Chenxia Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Dexin Gong
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiaoda Jiang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Liang Zhao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
- Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
32
Rassmann S, Keller A, Skaf K, Hustinx A, Gausche R, Ibarra-Arrelano MA, Hsieh TC, Madajieu YED, Nöthen MM, Pfäffle R, Attenberger UI, Born M, Mohnike K, Krawitz PM, Javanmardi B. Deeplasia: deep learning for bone age assessment validated on skeletal dysplasias. Pediatr Radiol 2024; 54:82-95. [PMID: 37953411] [PMCID: PMC10776485] [DOI: 10.1007/s00247-023-05789-1]
Abstract
BACKGROUND Skeletal dysplasias collectively affect a large number of patients worldwide. Most of these disorders cause growth anomalies. Hence, evaluating skeletal maturity via the determination of bone age (BA) is a useful tool. Moreover, consecutive BA measurements are crucial for monitoring the growth of patients with such disorders, especially for timing hormonal treatment or orthopedic interventions. However, manual BA assessment is time-consuming and suffers from high intra- and inter-rater variability. This is further exacerbated by genetic disorders causing severe skeletal malformations. While numerous approaches to automate BA assessment have been proposed, few are validated for BA assessment on children with skeletal dysplasias. OBJECTIVE We present Deeplasia, an open-source prior-free deep-learning approach designed for BA assessment and specifically validated on patients with skeletal dysplasias. MATERIALS AND METHODS We trained multiple convolutional neural network models under various conditions and selected three to build a precise model ensemble. We utilized the public BA dataset from the Radiological Society of North America (RSNA) consisting of training, validation, and test subsets containing 12,611, 1,425, and 200 hand and wrist radiographs, respectively. For testing the performance of our model ensemble on dysplastic hands, we retrospectively collected 568 radiographs from 189 patients with molecularly confirmed diagnoses of seven different genetic bone disorders, including achondroplasia and hypochondroplasia. A subset of the dysplastic cohort (149 images) was used to estimate the test-retest precision of our model ensemble on longitudinal data. RESULTS The mean absolute differences of Deeplasia for the RSNA test set (based on the average of six different reference ratings) and the dysplastic set (based on the average of two different reference ratings) were 3.87 and 5.84 months, respectively. The test-retest precision of Deeplasia on longitudinal data (2.74 months) is estimated to be similar to that of a human expert. CONCLUSION We demonstrated that Deeplasia is competent in assessing the age and monitoring the development of both normal and dysplastic bones.
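The evaluation scheme above, averaging three models into an ensemble and scoring the mean absolute difference against the mean of several human reference ratings, reduces to simple arithmetic. A minimal sketch (function names and numbers are illustrative, not from the paper; ages in months):

```python
def ensemble_bone_age(model_predictions):
    """Average per-image bone-age predictions (in months) from several models.
    model_predictions: one list of per-image predictions per model."""
    return [sum(preds) / len(preds) for preds in zip(*model_predictions)]

def mean_absolute_difference(estimates, reference_ratings):
    """Mean absolute difference between model estimates and the mean of
    several human reference ratings per image."""
    refs = [sum(r) / len(r) for r in reference_ratings]
    return sum(abs(e - r) for e, r in zip(estimates, refs)) / len(estimates)
```

Averaging the human raters first is what makes the 3.87- and 5.84-month figures comparable across test sets with different numbers of reference ratings.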
Affiliation(s)
- Sebastian Rassmann
- Institute for Genomic Statistics and Bioinformatics, University Hospital Bonn, Venusberg-Campus 1 Building 11, 2nd Floor, 53127, Bonn, Germany
- Kyra Skaf
- Medical Faculty, Otto-Von-Guericke-University Magdeburg, Magdeburg, Germany
- Alexander Hustinx
- Institute for Genomic Statistics and Bioinformatics, University Hospital Bonn, Venusberg-Campus 1 Building 11, 2nd Floor, 53127, Bonn, Germany
- Ruth Gausche
- CrescNet - Wachstumsnetzwerk, Medical Faculty, University Hospital Leipzig, Leipzig, Germany
- Miguel A Ibarra-Arrelano
- Institute for Genomic Statistics and Bioinformatics, University Hospital Bonn, Venusberg-Campus 1 Building 11, 2nd Floor, 53127, Bonn, Germany
- Tzung-Chien Hsieh
- Institute for Genomic Statistics and Bioinformatics, University Hospital Bonn, Venusberg-Campus 1 Building 11, 2nd Floor, 53127, Bonn, Germany
- Markus M Nöthen
- Institute of Human Genetics, University Hospital Bonn, Bonn, Germany
- Roland Pfäffle
- Department for Pediatrics, University Hospital Leipzig, Leipzig, Germany
- Ulrike I Attenberger
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Bonn, Germany
- Mark Born
- Division of Paediatric Radiology, Department of Radiology, University Hospital Bonn, Bonn, Germany
- Klaus Mohnike
- Medical Faculty, Otto-Von-Guericke-University Magdeburg, Magdeburg, Germany
- Peter M Krawitz
- Institute for Genomic Statistics and Bioinformatics, University Hospital Bonn, Venusberg-Campus 1 Building 11, 2nd Floor, 53127, Bonn, Germany
- Behnam Javanmardi
- Institute for Genomic Statistics and Bioinformatics, University Hospital Bonn, Venusberg-Campus 1 Building 11, 2nd Floor, 53127, Bonn, Germany.
33
Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard NE, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D. How AI May Transform Musculoskeletal Imaging. Radiology 2024; 310:e230764. [PMID: 38165245] [PMCID: PMC10831478] [DOI: 10.1148/radiol.230764]
Abstract
While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, the wide implementation of AI-supported data acquisition methods in clinical practice requires establishing trusted and reliable results. This implementation will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing the reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.
Affiliation(s)
- Ali Guermazi
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
- Patrick Omoumi
- Mickael Tordjman
- Jan Fritz
- Richard Kijowski
- Nor-Eddine Regnard
- John Carrino
| | - Charles E. Kahn
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Florian Knoll
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Daniel Rueckert
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Frank W. Roemer
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Daichi Hayashi
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| |
Collapse
|
34
|
Forestieri M, Napolitano A, Tomà P, Bascetta S, Cirillo M, Tagliente E, Fracassi D, D’Angelo P, Casazza I. Machine Learning Algorithm: Texture Analysis in CNO and Application in Distinguishing CNO and Bone Marrow Growth-Related Changes on Whole-Body MRI. Diagnostics (Basel) 2023; 14:61. [PMID: 38201370 PMCID: PMC10804385 DOI: 10.3390/diagnostics14010061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2023] [Revised: 12/17/2023] [Accepted: 12/22/2023] [Indexed: 01/12/2024] Open
Abstract
OBJECTIVE The purpose of this study is to analyze the texture characteristics of chronic non-bacterial osteomyelitis (CNO) bone lesions, identified as areas of altered signal intensity on short tau inversion recovery (STIR) sequences, and to distinguish them from bone marrow growth-related changes through Machine Learning (ML) and Deep Learning (DL) analysis. MATERIALS AND METHODS We included a group of 66 patients with a confirmed diagnosis of CNO and a group of 28 patients with suspected extra-skeletal systemic disease. All examinations were performed on a 1.5 T MRI scanner. Using the open-source 3D Slicer software version 4.10.2, ROIs were sampled on CNO lesions and on the red bone marrow. Texture analysis (TA) was carried out using Pyradiomics. We applied a grid-search optimization to nine classic ML classifiers and a Deep Learning (DL) Neural Network (NN). Model performance was evaluated using Accuracy (ACC), AUC-ROC curves, F1-score, Positive Predictive Value (PPV), Mean Absolute Error (MAE) and Root-Mean-Square Error (RMSE). Furthermore, we used Shapley additive explanations to gain insight into the behavior of the prediction model. RESULTS The most predictive features were selected by the Boruta algorithm for each combination of ROI sequences for the characterization and classification of the two types of signal hyperintensity. The overall best classification result was obtained by the NN with ACC = 0.91, AUC = 0.93 with 95% CI 0.91-0.94, F1-score = 0.94 and PPV = 93.8%. Among the classic ML methods, ensemble learners showed high model performance; specifically, the best-performing classifier was the Stack (ST) with ACC = 0.85, AUC = 0.81 with 95% CI 0.8-0.84, F1-score = 0.9, PPV = 90%. CONCLUSIONS Our results show the potential of ML methods in discerning edema-like lesions, in particular by distinguishing CNO lesions from hematopoietic bone marrow changes in a pediatric population. The Neural Network showed the overall best results, while a Stacking classifier, based on Gradient Boosting and Random Forest as principal estimators and a Logistic Regressor as the final estimator, achieved the best results among the other ML methods.
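A stacking ensemble of the kind the study describes — Gradient Boosting and Random Forest as base estimators, Logistic Regression as the final estimator — can be sketched with scikit-learn. This is an illustrative sketch on synthetic data, not the study's pipeline; all hyperparameters and the toy dataset are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the radiomic feature matrix (assumption).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stack: Gradient Boosting and Random Forest as principal estimators,
# Logistic Regression as the final (meta) estimator, as in the abstract.
stack = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
acc = accuracy_score(y_te, stack.predict(X_te))
```

The meta-estimator learns how to weight the base learners' out-of-fold predictions, which is why stacks often edge out any single ensemble member.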
Collapse
Affiliation(s)
- Marta Forestieri
- Imaging Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.T.); (S.B.); (P.D.); (I.C.)
| | - Antonio Napolitano
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (A.N.); (E.T.); (D.F.)
| | - Paolo Tomà
- Imaging Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.T.); (S.B.); (P.D.); (I.C.)
| | - Stefano Bascetta
- Imaging Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.T.); (S.B.); (P.D.); (I.C.)
| | - Marco Cirillo
- Imaging Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.T.); (S.B.); (P.D.); (I.C.)
| | - Emanuela Tagliente
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (A.N.); (E.T.); (D.F.)
| | - Donatella Fracassi
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (A.N.); (E.T.); (D.F.)
| | - Paola D’Angelo
- Imaging Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.T.); (S.B.); (P.D.); (I.C.)
| | - Ines Casazza
- Imaging Department, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.T.); (S.B.); (P.D.); (I.C.)
| |
Collapse
|
35
|
Abdollahifard S, Farrokhi A, Kheshti F, Jalali M, Mowla A. Application of convolutional network models in detection of intracranial aneurysms: A systematic review and meta-analysis. Interv Neuroradiol 2023; 29:738-747. [PMID: 35549574 PMCID: PMC10680951 DOI: 10.1177/15910199221097475] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Accepted: 04/11/2022] [Indexed: 11/15/2022] Open
Abstract
INTRODUCTION Intracranial aneurysms are highly prevalent in the human population and carry a heavy burden of disease, with a high mortality rate in the case of rupture. The convolutional neural network (CNN) is a type of deep learning architecture that has proven powerful for detecting intracranial aneurysms. METHODS Four databases were searched using "artificial intelligence," "intracranial aneurysms," and synonyms to find eligible studies. Articles that applied CNNs for the detection of intracranial aneurysms were included in this review. Sensitivity and specificity of the models and of human readers with respect to modality, size, and location of aneurysms were extracted. A random-effects model was used for the analyses in CMA 2 to determine pooled sensitivity and specificity. RESULTS Overall, 20 studies were included in this review. Deep learning models detected intracranial aneurysms with a sensitivity of 90.6% (CI: 87.2-93.2%) and a specificity of 94.6% (CI: 0.914-0.966). CTA was the most sensitive modality (92.0% (CI: 85.2-95.8%)). Overall sensitivity of the models for aneurysms larger than 3 mm was above 98% (98-100%) and 74.6% for aneurysms smaller than 3 mm. With the aid of AI, the clinicians' sensitivity increased by 12.8% and interrater agreement by 0.193. CONCLUSION CNN models had an acceptable sensitivity for the detection of intracranial aneurysms, surpassing human readers in some fields. The logical approach to the application of deep learning models would be their use as a highly capable assistant. In essence, deep learning models are a groundbreaking technology that can assist clinicians and allow them to diagnose intracranial aneurysms more accurately.
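The pooled sensitivity above comes from a random-effects meta-analysis (run in CMA 2). A minimal DerSimonian-Laird sketch on logit-transformed per-study sensitivities conveys the idea; the per-study counts below are toy values, not the review's actual data.

```python
import numpy as np

def pooled_logit_proportion(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale; returns the pooled proportion and its 95% CI."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = events / totals
    yi = np.log(p / (1 - p))                 # logit-transformed proportions
    vi = 1 / events + 1 / (totals - events)  # approximate logit variances
    w = 1 / vi
    mu_fixed = np.sum(w * yi) / w.sum()
    q = np.sum(w * (yi - mu_fixed) ** 2)     # Cochran's Q heterogeneity
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q - (len(yi) - 1)) / c) # between-study variance
    w_star = 1 / (vi + tau2)                 # random-effects weights
    mu = np.sum(w_star * yi) / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    inv = lambda x: 1 / (1 + np.exp(-x))     # back-transform to proportion
    return inv(mu), (inv(mu - 1.96 * se), inv(mu + 1.96 * se))

# Toy per-study detected/total aneurysm counts (assumed, for illustration).
sens, ci = pooled_logit_proportion([90, 171, 45], [100, 190, 50])
```

When between-study heterogeneity (tau²) is zero, the result collapses to the fixed-effect inverse-variance average; otherwise the weights flatten toward equality.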
Collapse
Affiliation(s)
- Saeed Abdollahifard
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Amirmohammad Farrokhi
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Fatemeh Kheshti
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Mahtab Jalali
- Research center for neuromodulation and pain, Shiraz, Iran
- Student research committee, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Ashkan Mowla
- Division of Stroke and Endovascular Neurosurgery, Department of Neurological Surgery, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA
| |
Collapse
|
36
|
Wang J, Sun J, Xu J, Lu S, Wang H, Huang C, Zhang F, Yu Y, Gao X, Wang M, Wang Y, Ruan X, Pan Y. Detection of Intracranial Aneurysms Using Multiphase CT Angiography with a Deep Learning Model. Acad Radiol 2023; 30:2477-2486. [PMID: 36737273 DOI: 10.1016/j.acra.2022.12.043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 12/27/2022] [Accepted: 12/27/2022] [Indexed: 02/04/2023]
Abstract
RATIONALE AND OBJECTIVES To determine the effect of a multiphase fusion deep-learning model with automatic phase selection in the detection of intracranial aneurysms (IAs) from computed tomography angiography (CTA) images. MATERIALS AND METHODS CTA images of intracranial arteries from patients at Ningbo First Hospital were retrospectively analyzed. Images were randomly classified as training data, internal validation data, or test data. CTA images from cases examined by digital subtraction angiography (DSA) were used for independent validation. A deep-learning model was constructed by automatic phase selection of multiphase fusion and compared to the single-phase algorithm to evaluate sensitivity. RESULTS We analyzed 1110 patients (1493 aneurysms) as training data, 139 patients (174 aneurysms) as internal validation data, and 134 patients (175 aneurysms) as test data. The sensitivity of the multiphase analysis was greater than that of the single-phase analysis for the internal validation data, test data, and independent validation data. The recall of multiphase selection was greater than or equal to that of single-phase selection across aneurysm position, shape, size, and rupture status. On the test data, recall from multiphase selection was 94.8% for ruptured and 87.6% for unruptured aneurysms; both values were greater than those from single-phase selection (89.6% and 79.4%). CONCLUSION A multiphase fusion deep learning model with automatic phase selection provided automated detection of IAs with high sensitivity.
Collapse
Affiliation(s)
- Jinglu Wang
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
| | - Jie Sun
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
| | - Jingxu Xu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
| | - Shiyu Lu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
| | - Hao Wang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
| | - Chencui Huang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
| | - Fandong Zhang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
| | - Yizhou Yu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, People's Republic of China
| | - Xiang Gao
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
| | - Ming Wang
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
| | - Yu Wang
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
| | - Xinzhong Ruan
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China
| | - Yuning Pan
- Department of Radiology, Ningbo First Hospital, Ningbo, Zhejiang Province, People's Republic of China; Key Laboratory of Precision Medicine for Atherosclerotic Diseases of Zhejiang Province, People's Republic of China.
| |
Collapse
|
37
|
Kim PH, Yoon HM, Kim JR, Hwang JY, Choi JH, Hwang J, Lee J, Sung J, Jung KH, Bae B, Jung AY, Cho YA, Shim WH, Bak B, Lee JS. Bone Age Assessment Using Artificial Intelligence in Korean Pediatric Population: A Comparison of Deep-Learning Models Trained With Healthy Chronological and Greulich-Pyle Ages as Labels. Korean J Radiol 2023; 24:1151-1163. [PMID: 37899524 PMCID: PMC10613838 DOI: 10.3348/kjr.2023.0092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Revised: 08/01/2023] [Accepted: 08/06/2023] [Indexed: 10/31/2023] Open
Abstract
OBJECTIVE To develop a deep-learning-based bone age prediction model optimized for Korean children and adolescents and evaluate its feasibility by comparing it with a Greulich-Pyle-based deep-learning model. MATERIALS AND METHODS A convolutional neural network was trained to predict age according to the bone development shown on a hand radiograph (bone age) using 21036 hand radiographs of Korean children and adolescents without known bone development-affecting diseases/conditions obtained between 1998 and 2019 (median age [interquartile range {IQR}], 9 [7-12] years; male:female, 11794:9242) and their chronological ages as labels (Korean model). We constructed 2 separate external datasets consisting of Korean children and adolescents with healthy bone development (Institution 1: n = 343; median age [IQR], 10 [4-15] years; male: female, 183:160; Institution 2: n = 321; median age [IQR], 9 [5-14] years; male: female, 164:157) to test the model performance. The mean absolute error (MAE), root mean square error (RMSE), and proportions of bone age predictions within 6, 12, 18, and 24 months of the reference age (chronological age) were compared between the Korean model and a commercial model (VUNO Med-BoneAge version 1.1; VUNO) trained with Greulich-Pyle-based age as the label (GP-based model). RESULTS Compared with the GP-based model, the Korean model showed a lower RMSE (11.2 vs. 13.8 months; P = 0.004) and MAE (8.2 vs. 10.5 months; P = 0.002), a higher proportion of bone age predictions within 18 months of chronological age (88.3% vs. 82.2%; P = 0.031) for Institution 1, and a lower MAE (9.5 vs. 11.0 months; P = 0.022) and higher proportion of bone age predictions within 6 months (44.5% vs. 36.4%; P = 0.044) for Institution 2. 
CONCLUSION The Korean model trained using the chronological ages of Korean children and adolescents without known bone development-affecting diseases/conditions as labels performed better in bone age assessment than the GP-based model in the Korean pediatric population. Further validation is required to confirm its accuracy.
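The MAE, RMSE, and within-window proportions compared above are straightforward to compute from predicted and reference bone ages in months; a minimal sketch (the example ages are invented for illustration):

```python
import numpy as np

def bone_age_errors(pred_months, ref_months):
    """MAE, RMSE, and the proportion of predictions falling within
    6/12/18/24 months of the reference (chronological) age."""
    err = np.asarray(pred_months, float) - np.asarray(ref_months, float)
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    within = {m: float(np.mean(np.abs(err) <= m)) for m in (6, 12, 18, 24)}
    return mae, rmse, within

# Hypothetical predictions vs. reference ages, in months.
mae, rmse, within = bone_age_errors([110, 130, 95], [120, 126, 96])
```

Note that RMSE is always at least as large as MAE and penalizes the occasional large miss more heavily, which is why the two metrics can rank models differently.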
Collapse
Affiliation(s)
- Pyeong Hwa Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Hee Mang Yoon
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
| | - Jeong Rye Kim
- Department of Radiology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, Republic of Korea
| | - Jae-Yeon Hwang
- Department of Radiology, Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Pusan National University School of Medicine, Yangsan, Republic of Korea
| | - Jin-Ho Choi
- Department of Pediatrics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jisun Hwang
- Department of Radiology, Ajou University Hospital, Ajou University School of Medicine, Suwon, Republic of Korea
| | | | | | | | | | - Ah Young Jung
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Young Ah Cho
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Woo Hyun Shim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Medical Science, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Boram Bak
- University of Ulsan Foundation for Industry Cooperation, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jin Seong Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| |
Collapse
|
38
|
Winkelman J, Nguyen D, vanSonnenberg E, Kirk A, Lieberman S. Artificial Intelligence (AI) in pediatric endocrinology. J Pediatr Endocrinol Metab 2023; 36:903-908. [PMID: 37589444 DOI: 10.1515/jpem-2023-0287] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2023] [Accepted: 08/03/2023] [Indexed: 08/18/2023]
Abstract
Artificial Intelligence (AI) is becoming integrated throughout the medical community. AI's ability to analyze complex patterns and interpret large amounts of data will have considerable impact on all areas of medicine, including pediatric endocrinology. In this paper, we review and update the current studies of AI in pediatric endocrinology. Specific topics that are addressed include diabetes management, bone growth, metabolism, obesity, and puberty. The goal of this paper is to help pediatric endocrinologists become knowledgeable and comfortable with AI.
Collapse
Affiliation(s)
| | - Diep Nguyen
- University of Arizona College of Medicine Phoenix, Phoenix, USA
| | - Eric vanSonnenberg
- University of Arizona College of Medicine Phoenix, Phoenix, USA
- From the Departments of Radiology, University of Arizona College of Medicine Phoenix, Phoenix, USA
- Student Affairs, University of Arizona College of Medicine Phoenix, Phoenix, USA
| | - Alison Kirk
- University of Arizona College of Medicine Phoenix, Phoenix, USA
- Student Affairs, University of Arizona College of Medicine Phoenix, Phoenix, USA
- Pediatrics, University of Arizona College of Medicine Phoenix, Phoenix, USA
| | - Steven Lieberman
- University of Arizona College of Medicine Phoenix, Phoenix, USA
- Internal Medicine (Division of Endocrinology), University of Arizona College of Medicine Phoenix, Phoenix, USA
| |
Collapse
|
39
|
Qu J, Zhang W, Shu X, Wang Y, Wang L, Xu M, Yao L, Hu N, Tang B, Zhang L, Lui S. Construction and evaluation of a gated high-resolution neural network for automatic brain metastasis detection and segmentation. Eur Radiol 2023; 33:6648-6658. [PMID: 37186214 DOI: 10.1007/s00330-023-09648-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 01/23/2023] [Accepted: 02/08/2023] [Indexed: 05/17/2023]
Abstract
OBJECTIVES To construct and evaluate a gated high-resolution convolutional neural network for detecting and segmenting brain metastasis (BM). METHODS This retrospective study included craniocerebral MRI scans of 1392 patients with 14,542 BMs and 200 patients with no BM between January 2012 and April 2022. A primary dataset including 1000 cases with 11,686 BMs was employed to construct the model, while an independent dataset including 100 cases with 1069 BMs from other hospitals was used to examine the generalizability. The potential of the model for clinical use was also evaluated by comparing its performance in BM detection and segmentation to that of radiologists, and by comparing radiologists' lesion-detection performance with and without model assistance. RESULTS Our model yielded a recall of 0.88, a dice similarity coefficient (DSC) of 0.90, a positive predictive value (PPV) of 0.93 and 1.01 false positives per patient (FP) in the test set, and a recall of 0.85, a DSC of 0.89, a PPV of 0.93, and an FP of 1.07 in the dataset from other hospitals. With the model's assistance, the BM detection rates of 4 radiologists improved significantly, by 5.2% to 15.1% (all p < 0.001), including for small BMs with diameter ≤ 5 mm (by 7.2% to 27.0%, all p < 0.001). CONCLUSIONS The proposed model enables accurate BM detection and segmentation with higher sensitivity and less time consumption, showing the potential to augment radiologists' performance in detecting BM. CLINICAL RELEVANCE STATEMENT This study offers a promising computer-aided tool to assist brain metastasis detection and segmentation in routine clinical practice for cancer patients. KEY POINTS • The GHR-CNN could accurately detect and segment BM on contrast-enhanced 3D-T1W images. • The GHR-CNN improved the BM detection rate of radiologists, including the detection of small lesions. • The GHR-CNN enabled automated segmentation of BM in a very short time.
Collapse
Affiliation(s)
- Jiao Qu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
| | - Wenjing Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
| | - Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
| | - Ying Wang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Department of Nuclear Medicine, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
| | - Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
| | - Mengyuan Xu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
| | - Li Yao
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
| | - Na Hu
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
| | - Biqiu Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
| | - Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
| | - Su Lui
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China.
| |
Collapse
|
40
|
Jiang C, Jiang F, Xie Z, Sun J, Sun Y, Zhang M, Zhou J, Feng Q, Zhang G, Xing K, Mei H, Li J. Evaluation of automated detection of head position on lateral cephalometric radiographs based on deep learning techniques. Ann Anat 2023; 250:152114. [PMID: 37302431 DOI: 10.1016/j.aanat.2023.152114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 05/13/2023] [Accepted: 05/20/2023] [Indexed: 06/13/2023]
Abstract
BACKGROUND The lateral cephalometric radiograph (LCR) is crucial to the diagnosis and treatment planning of maxillofacial diseases, but inappropriate head position, which reduces the accuracy of cephalometric measurements, can be challenging for clinicians to detect. This non-interventional retrospective study aims to develop two deep learning (DL) systems to efficiently, accurately, and instantly detect head position on LCRs. METHODS LCRs from 13 centers were reviewed and a total of 3000 radiographs were collected and divided into 2400 cases (80.0%) in the training set and 600 cases (20.0%) in the validation set. Another 300 cases were selected independently as the test set. All images were evaluated and landmarked by two board-certified orthodontists as references. The head position on the LCR was classified by the angle between the Frankfort Horizontal (FH) plane and the true horizontal (HOR) plane, and a value within -3° to 3° was considered normal. A YOLOv3 model based on the traditional fixed-point method and a modified ResNet50 model featuring a non-linear mapping residual network were constructed and evaluated. Heatmaps were generated to visualize performance. RESULTS The modified ResNet50 model showed a superior classification accuracy of 96.0%, higher than the 93.5% of the YOLOv3 model. The sensitivity/recall and specificity of the modified ResNet50 model were 0.959 and 0.969, and those of the YOLOv3 model were 0.846 and 0.916, respectively. The area under the curve (AUC) values of the modified ResNet50 and YOLOv3 models were 0.985 ± 0.04 and 0.942 ± 0.042, respectively. Saliency maps demonstrated that the modified ResNet50 model considered the alignment of the cervical vertebrae, not just the periorbital and perinasal areas, as the YOLOv3 model did. CONCLUSIONS The modified ResNet50 model outperformed the YOLOv3 model in classifying head position on LCRs and showed promising potential in facilitating accurate diagnoses and optimal treatment plans.
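Once the FH-plane landmarks are located, the ±3° normality criterion described above reduces to a simple angle check. A minimal sketch follows; the landmark choice (porion, orbitale) and the image-coordinate convention (y increasing downward) are assumptions for illustration.

```python
import math

def fh_angle_deg(porion, orbitale):
    """Angle in degrees between the Frankfort Horizontal plane, taken as
    the porion-orbitale line in (x, y) image coordinates, and the true
    horizontal. Positive and negative signs indicate tilt direction."""
    dx = orbitale[0] - porion[0]
    dy = orbitale[1] - porion[1]
    return math.degrees(math.atan2(dy, dx))

def head_position_normal(porion, orbitale, tol_deg=3.0):
    """Head position is classified as normal when the FH-to-horizontal
    angle lies within +/- tol_deg, per the study's criterion."""
    return abs(fh_angle_deg(porion, orbitale)) <= tol_deg

# Example: a slight ~1.4 degree tilt is within tolerance.
ok = head_position_normal((100, 200), (300, 205))
```

The DL models in the study predict this geometry from pixels end to end; the check above is only the downstream decision rule applied to the FH angle.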
Affiliation(s)
- Chen Jiang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Fulin Jiang
- Chongqing University Three Gorges Hospital, Chongqing 404031, China
- Zhuokai Xie
- University of Electronic Science and Technology of China, Chengdu 611731, China
- Jikui Sun
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Yan Sun
- University of Electronic Science and Technology of China, Chengdu 611731, China
- Mei Zhang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Jiawei Zhou
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Qingchen Feng
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Guanning Zhang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Ke Xing
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Hongxiang Mei
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Juan Li
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China

41
Rana SS, Nath B, Chaudhari PK, Vichare S. Cervical vertebral maturation assessment using various machine learning techniques on lateral cephalogram: A systematic literature review. J Oral Biol Craniofac Res 2023; 13:642-651. [PMID: 37663368 PMCID: PMC10470275 DOI: 10.1016/j.jobcr.2023.08.005]
Abstract
Importance: For assessing optimum treatment timing in dentofacial orthopedics, understanding the growth process is of paramount importance. Evaluation of skeletal maturity based on the morphology of the cervical vertebrae was devised to minimize the radiation exposure that hand-wrist radiography entails. Cervical vertebral maturation assessment (CVMA) predictions have recently been examined with state-of-the-art machine learning techniques, which require further attention and validation by clinicians and practitioners. Objective: This paper aimed to answer the question "How are machine learning techniques being employed in studies concerning cervical vertebral maturation assessment using lateral cephalograms?" Data sources, study selection, data extraction and synthesis: A systematic search of the available literature was performed based on the Population, Intervention, Comparison and Outcome (PICO) framework. Searches were performed in Ovid Medline, Embase, PubMed, the Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Database of Systematic Reviews (CDSR). Grey literature was searched in Google Scholar and OpenGrey, and hand-searching covered the Angle Orthodontist, Journal of Orthodontics and Craniofacial Research, Progress in Orthodontics, and the American Journal of Orthodontics and Dentofacial Orthopedics. References from the included articles were also searched. Results: A total of 25 papers were assessed in full text, and 13 were included in the systematic review. The machine learning methods used were scrutinized according to their performance and their comparison to human observers/experts. Model accuracies ranged between 60% and over 90%, with satisfactory agreement and correlation with the human observers.
Conclusions and relevance: Machine learning models can be used for detection and classification of cervical vertebral maturation. In this systematic review (SR), the studies were summarized in terms of the ML techniques applied, sample data, age range of the sample, and conventional method of CVMA. Further studies with samples distributed uniformly across maturation stages and by sex are required to train models that generalize to the target population.
Affiliation(s)
- Shailendra Singh Rana
- Department of Dentistry, All India Institute of Medical Sciences, Bhatinda, Punjab, India
- Bhola Nath
- Department of Community Medicine, All India Institute of Medical Sciences, Bhatinda, Punjab, India
- Prabhat Kumar Chaudhari
- Division of Orthodontics and Dentofacial Deformities, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, 110029, India
- Sharvari Vichare
- Department of Dentistry, All India Institute of Medical Sciences, Bhatinda, Punjab, India

42
Patton D, Ghosh A, Farkas A, Sotardi S, Francavilla M, Venkatakrishna S, Bose S, Ouyang M, Huang H, Davidson R, Sze R, Nguyen J. Automating angle measurements on foot radiographs in young children: Feasibility and performance of a convolutional neural network model. J Digit Imaging 2023; 36:1419-1430. [PMID: 37099224 PMCID: PMC10406755 DOI: 10.1007/s10278-023-00824-x]
Abstract
Measurement of angles on foot radiographs is an important step in the evaluation of malalignment. The objective was to develop a CNN model to measure angles on radiographs, using radiologists' measurements as the reference standard. This IRB-approved retrospective study included 450 radiographs from 216 patients (<3 years of age). Angles were measured automatically by image segmentation followed by angle calculation, according to Simon's approach for measuring pediatric foot angles. A multiclass U-Net model with a ResNet-34 backbone was used for segmentation. Two pediatric radiologists independently measured anteroposterior (AP) and lateral talocalcaneal and talo-1st metatarsal angles on the test dataset and recorded the time used for each study. Intraclass correlation coefficients (ICC) were used to compare angles, and the paired Wilcoxon signed-rank test to compare time, between radiologists and the CNN model. There was high spatial overlap between manual and CNN-based automatic segmentations, with Dice coefficients ranging between 0.81 (lateral 1st metatarsal) and 0.94 (lateral calcaneus). Agreement was higher for angles on the lateral view than on the AP view, both between radiologists (ICC: 0.93-0.95 and 0.85-0.92, respectively) and between the radiologists' mean and the CNN's calculation (ICC: 0.71-0.73 and 0.41-0.52, respectively). Automated angle calculation was significantly faster than the radiologists' manual measurements (3 ± 2 vs 114 ± 24 s; P < 0.001). A CNN model can selectively segment immature ossification centers and automatically calculate angles with high spatial overlap and moderate to substantial agreement compared with manual methods, 39 times faster.
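The segmentation-then-angle pipeline described above can be sketched in plain NumPy: fit the dominant axis of each bone's binary mask by PCA and take the angle between the two axes. This is a simplification of Simon's approach, and the masks below are synthetic toys, not study data:

```python
import numpy as np

def principal_axis(mask):
    """Unit vector along the dominant axis of a binary mask (PCA on pixel coordinates)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                       # center the point cloud
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt[0]                                  # first right-singular vector = main direction

def axis_angle_deg(mask_a, mask_b):
    """Angle (degrees) between the principal axes of two bone masks."""
    va, vb = principal_axis(mask_a), principal_axis(mask_b)
    cos = abs(float(np.dot(va, vb)))              # abs: an axis has no preferred sign
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))
```

In the study the masks come from the U-Net, and the talocalcaneal or talo-1st metatarsal angle is read off the relevant pair of bones.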
Affiliation(s)
- Daniella Patton
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Adarsh Ghosh
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Amy Farkas
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Susan Sotardi
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Michael Francavilla
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Shyam Venkatakrishna
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Saurav Bose
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Minhui Ouyang
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Hao Huang
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Richard Davidson
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Division of Orthopaedics, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Raymond Sze
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Jie Nguyen
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA

43
Li X, Zeng L, Lu X, Chen K, Yu M, Wang B, Zhao M. A review of artificial intelligence in the rupture risk assessment of intracranial aneurysms: Applications and challenges. Brain Sci 2023; 13:1056. [PMID: 37508988 PMCID: PMC10377544 DOI: 10.3390/brainsci13071056]
Abstract
Intracranial aneurysms (IAs) are highly prevalent in the population, and their rupture poses a significant risk of death or disability. However, the treatment of aneurysms, whether through interventional embolization or craniotomy clipping surgery, is not always safe and carries a certain proportion of morbidity and mortality. Therefore, early detection and prompt intervention of IAs with a high risk of rupture is of notable clinical significance. Moreover, accurately predicting aneurysms that are likely to remain stable can help avoid the risks and costs of over-intervention, which also has considerable social significance. Recent advances in artificial intelligence (AI) technology offer promising strategies to assist clinical trials. This review will discuss the state-of-the-art AI applications for assessing the rupture risk of IAs, with a focus on achievements, challenges, and potential opportunities.
Affiliation(s)
- Xiaopeng Li
- Department of Neurosurgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Lang Zeng
- Department of Neurosurgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Xuanzhen Lu
- Department of Neurology, The Third Hospital of Wuhan, Wuhan 430074, China
- Kun Chen
- Department of Neurosurgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Maling Yu
- Department of Neurology, The Third Hospital of Wuhan, Wuhan 430074, China
- Baofeng Wang
- Department of Neurosurgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Min Zhao
- Department of Neurosurgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China

44
Gao BB. Jointly learning distribution and expectation in a unified framework for facial age and attractiveness estimation. Neural Comput Appl 2023; 35:15583-15599. [DOI: 10.1007/s00521-023-08563-4]
45
Ryu SM, Lee S, Jang M, Koh JM, Bae SJ, Jegal SG, Shin K, Kim N. Diagnosis of osteoporotic vertebral compression fractures and fracture level detection using multitask learning with U-Net in lumbar spine lateral radiographs. Comput Struct Biotechnol J 2023; 21:3452-3458. [PMID: 37457807 PMCID: PMC10345217 DOI: 10.1016/j.csbj.2023.06.017]
Abstract
Recent studies of automatic diagnosis of vertebral compression fractures (VCFs) using deep learning mainly focus on segmentation and vertebral level detection in lumbar spine lateral radiographs (LSLRs). Herein, we developed a model for simultaneous VCF diagnosis and vertebral level detection without using adjacent vertebral bodies. In total, 1102 patients with VCF and 1171 controls were enrolled. The 1865, 208, and 198 LSLRs were divided into training, validation, and test datasets, respectively. A ground-truth label with a 4-point trapezoidal shape was made based on radiological reports indicating normal or VCF at a given vertebral level. We applied a modified U-Net architecture in which two decoders, sharing the same encoder, were trained to detect VCFs and vertebral levels. The multi-task model was significantly better than the single-task model in sensitivity and area under the receiver operating characteristic curve. In the internal dataset, the accuracy, sensitivity, and specificity of fracture detection were 0.929, 0.944, and 0.917 per patient and 0.947, 0.628, and 0.977 per vertebral body. In external validation, they were 0.713, 0.979, and 0.447 per patient and 0.828, 0.936, and 0.820 per vertebral body. The success rates for vertebral level detection were 96% and 94% in internal and external validation, respectively. The multi-task shared encoder was significantly better than the single-task encoder, and both fracture and vertebral level detection performed well in internal and external validation. Our deep learning model may help radiologists perform real-life medical examinations.
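The "4-point trapezoidal" ground-truth profile mentioned above can be sketched as a 1-D target that ramps up, plateaus, and ramps down across a vertebral level. The four breakpoints below are hypothetical pixel rows, not the study's values:

```python
import numpy as np

def trapezoid_label(length, x1, x2, x3, x4):
    """1-D trapezoidal target: 0 outside [x1, x4], linear ramps on [x1, x2] and
    [x3, x4], and a plateau of 1 on [x2, x3]."""
    x = np.arange(length, dtype=float)
    return np.interp(x, [x1, x2, x3, x4], [0.0, 1.0, 1.0, 0.0], left=0.0, right=0.0)
```

In the paper the analogous 2-D trapezoid marks one vertebral level in the radiograph; soft ramps at the edges make the decoder less sensitive to imprecise level boundaries than a hard box would be.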
Affiliation(s)
- Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Soyoung Lee
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jung-Min Koh
- Division of Endocrinology and Metabolism, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sung Jin Bae
- Department of Health Screening and Promotion Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seong Gyu Jegal
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea

46
Kwolek K, Grzelecki D, Kwolek K, Marczak D, Kowalczewski J, Tyrakowski M. Automated patellar height assessment on high-resolution radiographs with a novel deep learning-based approach. World J Orthop 2023; 14:387-398. [PMID: 37377994 PMCID: PMC10292056 DOI: 10.5312/wjo.v14.i6.387]
Abstract
BACKGROUND Artificial intelligence and deep learning have shown promising results in medical imaging and interpreting radiographs. Moreover, the medical community shows growing interest in automating routine diagnostic tasks and orthopedic measurements.
AIM To verify the accuracy of automated patellar height assessment using a deep learning-based bone segmentation and detection approach on high-resolution radiographs.
METHODS 218 lateral knee radiographs were included in the analysis. 82 radiographs were used for training and 10 others for validation of a U-Net neural network, to achieve the required Dice score. Another 92 radiographs were used for automatic (U-Net) and manual measurements of patellar height, quantified by the Caton-Deschamps (CD) and Blackburne-Peel (BP) indexes. Detection of the required bone regions on high-resolution images was done with a You Only Look Once (YOLO) neural network. Agreement between manual and automatic measurements was calculated using the intraclass correlation coefficient (ICC) and the standard error of a single measurement (SEM). To check the U-Net's generalization, segmentation accuracy on the test set was also calculated.
RESULTS The proximal tibia and patella were segmented with 95.9% accuracy (Dice score) by the U-Net on lateral knee subimages automatically detected by the YOLO network (mean average precision, mAP, greater than 0.96). The mean CD and BP indexes calculated by the orthopedic surgeons (R#1 and R#2) were 0.93 (± 0.19) and 0.89 (± 0.19) for CD, and 0.80 (± 0.17) and 0.78 (± 0.17) for BP. Automatic measurements by our algorithm were 0.92 (± 0.21) for CD and 0.75 (± 0.19) for BP. Excellent agreement between the orthopedic surgeons' measurements and the algorithm's results was achieved (ICC > 0.75, SEM < 0.014).
CONCLUSION Automatic patellar height assessment can be achieved on high-resolution radiographs with the required accuracy. Determining the patellar end-points and fitting the joint line to the proximal tibial joint surface allows accurate CD and BP index calculation. These results indicate that the approach can be a valuable tool in medical practice.
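Once the patellar end-points and the anterosuperior tibial corner are located (by the U-Net in the study), the CD index is a plain distance ratio. A minimal sketch with hypothetical landmark coordinates; the BP index follows the same pattern with a different tibial reference:

```python
import numpy as np

def caton_deschamps(patella_inf, patella_sup, tibia_ant_sup):
    """CD index: distance from the inferior patellar articular edge to the
    anterosuperior tibial angle, divided by the patellar articular surface length."""
    at = np.linalg.norm(np.subtract(patella_inf, tibia_ant_sup))  # patella-to-tibia distance
    ap = np.linalg.norm(np.subtract(patella_sup, patella_inf))    # articular surface length
    return float(at / ap)
```

Values near 1.0 indicate normal patellar height; markedly higher or lower ratios suggest patella alta or baja, respectively.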
Affiliation(s)
- Kamil Kwolek
- Department of Spine Disorders and Orthopaedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Dariusz Grzelecki
- Department of Orthopaedics and Rheumoorthopedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Konrad Kwolek
- Department of Orthopaedics and Traumatology, University Hospital, Krakow 30-663, Poland
- Dariusz Marczak
- Department of Orthopaedics and Rheumoorthopedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Jacek Kowalczewski
- Department of Orthopaedics and Rheumoorthopedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Marcin Tyrakowski
- Department of Spine Disorders and Orthopaedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland

47
Deng Y, Chen Y, He Q, Wang X, Liao Y, Liu J, Liu Z, Huang J, Song T. Bone age assessment from articular surface and epiphysis using deep neural networks. Math Biosci Eng 2023; 20:13133-13148. [PMID: 37501481 DOI: 10.3934/mbe.2023585]
Abstract
Bone age assessment is of great significance to genetic diagnosis and endocrine diseases. Traditional bone age diagnosis relies mainly on experienced radiologists examining the regions of interest in hand radiographs, but it is time-consuming and may even lead to large errors between the diagnosis and the reference. Existing computer-aided methods predict bone age from general regions of interest but do not explore specific regions of interest in the hand radiograph. This paper addresses these problems by predicting bone age from the articular surface and epiphysis of hand radiographs using deep convolutional neural networks. The articular surface and epiphysis datasets are established from the Radiological Society of North America (RSNA) pediatric bone age challenge, with the specific feature regions of the articular surface and epiphysis manually segmented from the hand radiographs. Five convolutional neural networks, i.e., ResNet50, SENet, DenseNet-121, EfficientNet-b4, and CSPNet, are employed to improve the accuracy and efficiency of bone age diagnosis in clinical applications. Experiments show that the best-performing model yields a mean absolute error (MAE) of 7.34 months on the proposed articular surface and epiphysis datasets, which is more accurate and faster than the radiologists. The project is available at https://github.com/YameiDeng/BAANet/, and the annotated dataset is published at https://doi.org/10.5281/zenodo.7947923.
Affiliation(s)
- Yamei Deng
- Department of Radiology, Guangdong Provincial Key Laboratory of Major Obstetric Diseases, Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Yonglu Chen
- Department of Radiology, Guangdong Provincial Key Laboratory of Major Obstetric Diseases, Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Qian He
- Department of Radiology, Guangdong Provincial Key Laboratory of Major Obstetric Diseases, Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Xu Wang
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- Yong Liao
- School of Physics, Electronics and Electrical Engineering, Xiangnan University, Chenzhou 423000, China
- Jue Liu
- Department of Radiology, Guangdong Provincial Key Laboratory of Major Obstetric Diseases, Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Zhaoran Liu
- Department of Radiology, Guangdong Provincial Key Laboratory of Major Obstetric Diseases, Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Jianwei Huang
- Department of Radiology, Guangdong Provincial Key Laboratory of Major Obstetric Diseases, Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Ting Song
- Department of Radiology, Guangdong Provincial Key Laboratory of Major Obstetric Diseases, Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China

48
Mao X, Hui Q, Zhu S, Du W, Qiu C, Ouyang X, Kong D. Automated skeletal bone age assessment with two-stage convolutional transformer network based on X-ray images. Diagnostics (Basel) 2023; 13:1837. [PMID: 37296689 DOI: 10.3390/diagnostics13111837]
Abstract
Human skeletal development is continuous and staged, and different stages have distinct morphological characteristics; bone age assessment (BAA) can therefore accurately reflect an individual's growth, development, and maturity. Clinical BAA is time-consuming, highly subjective, and lacks consistency. Deep learning has made considerable progress in BAA in recent years by effectively extracting deep features. Most studies use neural networks to extract global information from input images, yet clinical radiologists are chiefly concerned with the degree of ossification in specific regions of the hand bones. This paper proposes a two-stage convolutional transformer network to improve the accuracy of BAA. Combining object detection with a transformer, the first stage mimics the pediatrician's bone age reading process: it extracts the hand bone regions of interest (ROIs) in real time using YOLOv5 and aligns hand bone posture. In addition, prior encoding of biological sex is integrated into the feature map, replacing the position token in the transformer. The second stage extracts features within each ROI by window attention, lets different ROIs interact through shifted window attention to extract hidden feature information, and penalizes the evaluation results with a hybrid loss function to ensure stability and accuracy. The proposed method is evaluated on data from the Pediatric Bone Age Challenge organized by the Radiological Society of North America (RSNA). Experimental results show that it achieves a mean absolute error (MAE) of 6.22 and 4.585 months on the validation and test sets, respectively, and the cumulative accuracy within 6 and 12 months reaches 71% and 96%. This is comparable to the state of the art, markedly reducing the clinical workload and enabling rapid, automatic, high-precision assessment.
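The two figures quoted above, MAE in months and cumulative accuracy within a tolerance, are straightforward to compute once predictions exist. A small sketch with made-up predictions, not the paper's outputs:

```python
import numpy as np

def bone_age_metrics(pred_months, true_months, tolerances=(6, 12)):
    """MAE plus the fraction of cases whose absolute error falls within each
    tolerance (in months), as commonly reported for BAA models."""
    err = np.abs(np.asarray(pred_months, dtype=float) - np.asarray(true_months, dtype=float))
    out = {"mae": float(err.mean())}
    for t in tolerances:
        out[f"acc_{t}mo"] = float((err <= t).mean())
    return out
```

Cumulative accuracy complements MAE because it exposes the tail of large errors that a mean can hide.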
Affiliation(s)
- Xiongwei Mao
- Department of Radiology, Zhejiang University Hospital, Zhejiang University, Hangzhou 310027, China
- Department of Radiology, Zhejiang University Hospital District, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310009, China
- Qinglei Hui
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Siyu Zhu
- Zhejiang Qiushi Institute for Mathematical Medicine, Hangzhou 311121, China
- Wending Du
- Zhejiang Qiushi Institute for Mathematical Medicine, Hangzhou 311121, China
- Chenhui Qiu
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Xiaoping Ouyang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China

49
Amasya H, Aydoğan T, Cesur E, Kemaloğlu Alagöz N, Uğurlu M, Bayrakdar İŞ, Orhan K. Using artificial intelligence models to evaluate envisaged points initially: A pilot study. Proc Inst Mech Eng H 2023:9544119231173165. [PMID: 37211725 DOI: 10.1177/09544119231173165]
Abstract
The morphology of the finger bones in hand-wrist radiographs (HWRs) can be considered a radiological skeletal maturity indicator, along with the other indicators. This study aims to validate the anatomical landmarks envisaged for classifying the morphology of the phalanges, by developing classical neural network (NN) classifiers on a sub-dataset of 136 HWRs. A web-based tool was developed, 22 anatomical landmarks were labeled on four regions of interest (the proximal (PP3), medial (MP3), and distal (DP3) phalanges of the third finger and the medial phalanx (MP5) of the fifth finger), and the epiphysis-diaphysis relationship was recorded as "narrow," "equal," "capping," or "fusion" by three observers. In each region, 18 ratios and 15 angles were extracted from the anatomical points. The dataset was analyzed with two NN classifiers, without (NN-1) and with (NN-2) 5-fold cross-validation. Model performance was evaluated with percentage agreement, Cohen's kappa (cκ) and weighted kappa (wκ) coefficients, precision, recall, F1-score, and accuracy (statistical significance: p < 0.05). Method error was in the range cκ: 0.7-1. Overall classification performance of the models varied between 82.14% and 89.29%; on average, the NN-1 and NN-2 models achieved 85.71% and 85.52%, respectively. The cκ and wκ of the NN-1 model varied between -0.08 (p > 0.05) and 0.91 across regions. Average performance was promising except in regions without adequate samples, and the anatomical points were validated for use in future studies.
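Inter-observer agreement of the kind reported above (Cohen's κ over the four epiphysis-diaphysis classes) reduces to observed agreement corrected for chance agreement. A minimal sketch with invented ratings, not the study's data:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, labels):
    """Cohen's kappa: agreement between two raters' categorical labels, chance-corrected."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_obs = float((a == b).mean())                       # observed agreement
    p_chance = sum(float((a == lab).mean()) * float((b == lab).mean())
                   for lab in labels)                    # agreement expected by chance
    return (p_obs - p_chance) / (1.0 - p_chance)
```

Weighted kappa (wκ), also used in the study, additionally grades partial disagreements by an ordinal distance matrix rather than treating all mismatches equally.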
Affiliation(s)
- Hakan Amasya
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Istanbul University-Cerrahpaşa, Istanbul, Turkey
- CAST (Cerrahpasa Research, Simulation and Design Laboratory), Istanbul University-Cerrahpaşa, Istanbul, Turkey
- Health Biotechnology Joint Research and Application Center of Excellence, Istanbul, Turkey
- Turgay Aydoğan
- Faculty of Engineering, Department of Computer Engineering, Süleyman Demirel University, Isparta, Turkey
- Emre Cesur
- Faculty of Dentistry, Department of Orthodontics, Medipol Mega University Hospital, Istanbul, Turkey
- Nazan Kemaloğlu Alagöz
- Uluborlu Selahattin Karasoy Vocational School, Isparta University of Applied Sciences, Isparta, Turkey
- Mehmet Uğurlu
- Faculty of Dentistry, Department of Orthodontics, Eskişehir Osmangazi University, Eskisehir, Turkey
- İbrahim Şevki Bayrakdar
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Eskişehir Osmangazi University, Eskisehir, Turkey
- Kaan Orhan
- Health Biotechnology Joint Research and Application Center of Excellence, Istanbul, Turkey
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Ankara University, Ankara, Turkey
- Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey

50
He B, Xu Z, Zhou D, Chen Y. Multi-branch attention learning for bone age assessment with ambiguous label. Sensors (Basel) 2023; 23:4834. [PMID: 37430748 DOI: 10.3390/s23104834]
Abstract
Bone age assessment (BAA) is a standard clinical technique for diagnosing endocrine and metabolic diseases in children's development. Existing deep learning-based automatic BAA models are trained on the Radiological Society of North America (RSNA) dataset from Western populations. However, owing to differences in developmental process and BAA standards between Eastern and Western children, these models cannot be applied to bone age prediction in Eastern populations. To address this issue, this paper collects a bone age dataset based on East Asian populations for model training. Nevertheless, obtaining enough X-ray images with accurate labels is laborious and difficult, so we employ ambiguous labels from radiology reports and transform them into Gaussian distribution labels of different amplitudes. Furthermore, we propose the multi-branch attention learning with ambiguous labels network (MAAL-Net), which consists of a hand object location module and an attention part extraction module that discover the informative regions of interest (ROIs) from image-level labels alone. Extensive experiments on both the RSNA dataset and the China Bone Age (CNBA) dataset demonstrate that our method achieves results competitive with the state of the art and performs on par with experienced physicians on children's BAA tasks.
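Turning an ambiguous report label into a soft target, as described above, can be sketched as a normalized Gaussian over discrete age bins, with a wider sigma for vaguer report wording. The bin grid and sigma values here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gaussian_label(age_bins, center, sigma):
    """Soft label over discrete age bins (months): a Gaussian centered at the
    reported age, normalized to sum to 1; larger sigma encodes more ambiguity."""
    age_bins = np.asarray(age_bins, dtype=float)
    weights = np.exp(-0.5 * ((age_bins - center) / sigma) ** 2)
    return weights / weights.sum()
```

Training against such a distribution (e.g. with a KL or cross-entropy loss) penalizes near-misses less than distant errors, which is the point of replacing a hard one-hot age label.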
Affiliation(s)
- Bishi He
- School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
- Zhe Xu
- School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
- Dong Zhou
- School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
- Yuanjiao Chen
- School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China