1
Aftab J, Khan MA, Arshad S, Rehman SU, AlHammadi DA, Nam Y. Artificial intelligence based classification and prediction of medical imaging using a novel framework of inverted and self-attention deep neural network architecture. Sci Rep 2025; 15:8724. [PMID: 40082642] [PMCID: PMC11906919] [DOI: 10.1038/s41598-025-93718-7]
Abstract
Classifying medical images is essential in computer-aided diagnosis (CAD). Although the recent success of deep learning in classification tasks has proven advantageous over traditional feature extraction techniques, the problem remains challenging due to the inter- and intra-class similarity caused by the diversity of imaging modalities (i.e., dermoscopy, mammography, wireless capsule endoscopy, and CT). In this work, we propose a novel deep-learning framework for classifying several medical imaging modalities. In the training phase, data augmentation is first performed on all selected datasets. After that, two novel custom deep learning architectures are introduced, called the Inverted Residual Convolutional Neural Network (IRCNN) and the Self Attention CNN (SACNN). Both models are trained on the augmented datasets with manual hyperparameter selection. Each dataset's testing images are used to extract features during the testing stage. The extracted features are fused using a modified serial fusion with a strong correlation approach. A salp-swarm-controlled standard error mean (SScSEM) optimization algorithm is then employed to select the best features, which are passed to a shallow wide neural network (SWNN) classifier for the final classification. Grad-CAM, an explainable artificial intelligence (XAI) approach, is used to analyze the custom models. The proposed architecture was tested on five publicly available datasets of different imaging modalities and obtained improved accuracies of 98.6% (INBreast), 95.3% (KVASIR), 94.3% (ISIC2018), 95.0% (Lung Cancer), and 98.8% (Oral Cancer). A detailed comparison based on precision and accuracy shows that the proposed architecture performs better than existing methods. The implemented models are available on GitHub ( https://github.com/ComputerVisionLabPMU/ScientificImagingPaper.git ).
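As a rough illustration of the fusion-selection-classification stage described in this abstract, the sketch below concatenates two pre-computed feature matrices, keeps the features most correlated with the label, and trains a single wide hidden layer. The IRCNN/SACNN backbones, the correlation-controlled serial fusion rule, and the SScSEM optimizer are not reproduced; synthetic feature arrays, a plain correlation filter, and scikit-learn's MLPClassifier stand in for them.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def serial_fuse(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Serial (column-wise) fusion of two feature matrices from different CNNs."""
    return np.concatenate([feat_a, feat_b], axis=1)

def select_by_correlation(features: np.ndarray, labels: np.ndarray, top_k: int = 256) -> np.ndarray:
    """Keep the top_k features most correlated with the label (stand-in for SScSEM selection)."""
    scores = np.abs([np.corrcoef(features[:, j], labels)[0, 1] for j in range(features.shape[1])])
    return np.argsort(np.nan_to_num(scores))[::-1][:top_k]

# Hypothetical pre-extracted features (e.g., 512-D from each custom CNN) and binary labels.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(200, 512)), rng.normal(size=(200, 512))
y = rng.integers(0, 2, size=200)

fused = serial_fuse(X1, X2)
idx = select_by_correlation(fused, y, top_k=256)

# "Shallow wide" network: a single, wide hidden layer.
clf = MLPClassifier(hidden_layer_sizes=(1024,), max_iter=300, random_state=0)
clf.fit(fused[:, idx], y)
print("training accuracy:", clf.score(fused[:, idx], y))
```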
Affiliation(s)
- Junaid Aftab: Department of Computer Engineering, HITEC University, Taxila, 47080, Pakistan
- Muhammad Attique Khan: Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad bin Fahd University, Al Khobar, Saudi Arabia
- Sobia Arshad: Department of Computer Engineering, HITEC University, Taxila, 47080, Pakistan
- Shams Ur Rehman: Department of Computer Engineering, HITEC University, Taxila, 47080, Pakistan
- Dina Abdulaziz AlHammadi: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Yunyoung Nam: Department of ICT Convergence, Soonchunhyang University, Asan, South Korea
2
Mishra S, Das H, Mohapatra SK, Khan SB, Alojail M, Saraee M. A hybrid fused-KNN based intelligent model to access melanoma disease risk using indoor positioning system. Sci Rep 2025; 15:7438. [PMID: 40032864] [PMCID: PMC11876441] [DOI: 10.1038/s41598-024-74847-x]
Abstract
Indoor Positioning System (IPS) technology provides positioning using sensors and actuators where the Global Positioning System (GPS) falls short. IPS can be used in buildings, malls, parking lots and several other application domains. It can also be useful in healthcare centres as an assisting medium for medical professionals in the disease diagnosis task. This research work includes the development and implementation of an intelligent and automated IPS-based model for melanoma detection using image sets. A new classification approach called Fused K-nearest neighbor (KNN) is applied in this study. The IPS-based Fused-KNN is a fusion of three distinct KNN folds (3-NN, 5-NN and 7-NN), where the model is developed using input samples from various sensory units and involves image optimization processes such as the image similarity index, image overlapping and image sampling, which help refine raw melanoma images and thereby extract a combined image from the sensors. The IPS-based Fused-KNN model obtained an accuracy of 97.8%, considerably higher than existing classifiers. The error rate is also the lowest with the newly introduced model: the RMSE (root mean square error) and MAE (mean absolute error) values generated with the proposed IPS-based Fused-KNN model for melanoma detection were as low as 0.2476 and 0.542, respectively. Average values of accuracy, precision, recall and F-score were 94.45%, 95.2%, 94.4% and 94.9%, respectively, when validated on 12 different cancer-based datasets. Hence, the presented IPS-based model can serve as an efficient and intelligent predictive model not only for melanoma diagnosis but also for other cancer-based diseases, in a faster and more reliable manner than existing models.
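A minimal sketch of the Fused-KNN idea (fusing 3-NN, 5-NN and 7-NN decisions) is shown below, assuming a generic tabular feature set; the IPS sensor fusion and image-refinement steps of the paper are not reproduced, and the breast-cancer dataset is only a stand-in.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in tabular data; the paper uses features derived from fused sensor images.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

fused_knn = VotingClassifier(
    estimators=[(f"{k}nn", KNeighborsClassifier(n_neighbors=k)) for k in (3, 5, 7)],
    voting="soft",  # average the neighborhood class probabilities of the three folds
)
fused_knn.fit(X_train, y_train)
print("accuracy:", fused_knn.score(X_test, y_test))
```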
Affiliation(s)
- Sushruta Mishra: Kalinga Institute of Industrial Technology, Bhubaneswar, India
- Himansu Das: Kalinga Institute of Industrial Technology, Bhubaneswar, India
- Surbhi Bhatia Khan: School of Science, Engineering and Environment, University of Salford, Salford, UK; University Centre for Research and Development, Chandigarh University, Mohali, Punjab, India; Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Rajpura, Punjab, India
- Mohammad Alojail: Management Information System Department, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
- Mo Saraee: School of Science, Engineering and Environment, University of Salford, Salford, UK
3
Mustafa S, Jaffar A, Rashid M, Akram S, Bhatti SM. Deep learning-based skin lesion analysis using hybrid ResUNet++ and modified AlexNet-Random Forest for enhanced segmentation and classification. PLoS One 2025; 20:e0315120. [PMID: 39820868] [PMCID: PMC11737724] [DOI: 10.1371/journal.pone.0315120]
Abstract
Skin cancer is considered one of the most fatal diseases globally. Patients who receive a wrong diagnosis and low-quality treatment are likely to die early, whereas if the disease is detected in its early stages the patient has a fairly good chance of being cured. Consequently, diagnostic identification and management of the patient at this stage become an enormous task. This paper offers a cutting-edge hybrid deep learning approach for better segmentation and classification of skin lesions. The proposed method incorporates three key stages: preprocessing, lesion segmentation, and lesion classification. In the preprocessing stage, a morphology-based technique removes hair to enhance segmentation precision, and the cleaned images are used for subsequent analysis. Segmentation separates the lesion from the surrounding skin, giving the classification phase a dedicated region of interest and removing background noise that may affect classification rates. This isolation enables the model to better analyze anatomical lesion features in order to achieve accurate benign and malignant classifications. Using ResUNet++, a cutting-edge deep learning architecture, accurate lesion segmentation is achieved; a modified AlexNet-Random Forest (AlexNet-RF) classifier is then used for robust lesion classification. The proposed hybrid deep learning model is intensively validated on the HAM10000 dataset, one of the most popular datasets for skin lesion analysis. The obtained results show that the proposed approach is more effective than previous ones, giving better segmentation and classification results. The method takes advantage of ResUNet++'s strong segmentation capability and the robustness of the modified AlexNet-Random Forest for accurate classification: ResUNet++, which is highly proficient at medical image segmentation, can produce better lesion segmentation than simpler models, and combining AlexNet's feature extraction with Random Forest's ability to reduce overfitting can yield more precise classification than using only one model.
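The classification stage described above (CNN features feeding a Random Forest) can be sketched as follows, assuming already-segmented lesion crops on disk; this uses the stock torchvision AlexNet as a feature extractor and is not the authors' modified AlexNet-RF implementation. The file paths are placeholders.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

# Pretrained AlexNet used purely as a feature extractor (final 1000-way layer removed).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]  # output becomes a 4096-D feature vector
alexnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Stack preprocessed lesion crops into a batch and return AlexNet features."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
    return alexnet(batch).numpy()

rf = RandomForestClassifier(n_estimators=300, random_state=0)
# Hypothetical usage with segmented lesion crops (paths and labels are placeholders):
# rf.fit(extract_features(["lesion_001.png", "lesion_002.png"]), [0, 1])
```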
Affiliation(s)
- Saleem Mustafa: Faculty of Computer Science & Information Technology, The Superior University, Lahore, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, Pakistan
- Arfan Jaffar: Faculty of Computer Science & Information Technology, The Superior University, Lahore, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, Pakistan
- Muhammad Rashid: Department of Computer Science, National University of Technology, Islamabad, Pakistan
- Sheeraz Akram: Faculty of Computer Science & Information Technology, The Superior University, Lahore, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, Pakistan; Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Sohail Masood Bhatti: Faculty of Computer Science & Information Technology, The Superior University, Lahore, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, Pakistan
4
Naseri H, Safaei AA. Diagnosis and prognosis of melanoma from dermoscopy images using machine learning and deep learning: a systematic literature review. BMC Cancer 2025; 25:75. [PMID: 39806282] [PMCID: PMC11727731] [DOI: 10.1186/s12885-024-13423-y]
Abstract
BACKGROUND Melanoma is a highly aggressive skin cancer, where early and accurate diagnosis is crucial to improve patient outcomes. Dermoscopy, a non-invasive imaging technique, aids in melanoma detection but can be limited by subjective interpretation. Recently, machine learning and deep learning techniques have shown promise in enhancing diagnostic precision by automating the analysis of dermoscopy images. METHODS This systematic review examines recent advancements in machine learning (ML) and deep learning (DL) applications for melanoma diagnosis and prognosis using dermoscopy images. We conducted a thorough search across multiple databases, ultimately reviewing 34 studies published between 2016 and 2024. The review covers a range of model architectures, including DenseNet and ResNet, and discusses datasets, methodologies, and evaluation metrics used to validate model performance. RESULTS Our results highlight that certain deep learning architectures, such as DenseNet and DCNN demonstrated outstanding performance, achieving over 95% accuracy on the HAM10000, ISIC and other datasets for melanoma detection from dermoscopy images. The review provides insights into the strengths, limitations, and future research directions of machine learning and deep learning methods in melanoma diagnosis and prognosis. It emphasizes the challenges related to data diversity, model interpretability, and computational resource requirements. CONCLUSION This review underscores the potential of machine learning and deep learning methods to transform melanoma diagnosis through improved diagnostic accuracy and efficiency. Future research should focus on creating accessible, large datasets and enhancing model interpretability to increase clinical applicability. By addressing these areas, machine learning and deep learning models could play a central role in advancing melanoma diagnosis and patient care.
Affiliation(s)
- Hoda Naseri: Department of Data Science, Faculty of Interdisciplinary Science and Technology, Tarbiat Modares University, Tehran, Iran
- Ali A Safaei: Department of Data Science, Faculty of Interdisciplinary Science and Technology, Tarbiat Modares University, Tehran, Iran; Department of Medical Informatics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
5
Ahmad I, Alqurashi F. Early cancer detection using deep learning and medical imaging: A survey. Crit Rev Oncol Hematol 2024; 204:104528. [PMID: 39413940] [DOI: 10.1016/j.critrevonc.2024.104528]
Abstract
Cancer, characterized by the uncontrolled division of abnormal cells that harm body tissues, necessitates early detection for effective treatment. Medical imaging is crucial for identifying various cancers, yet its manual interpretation by radiologists is often subjective, labour-intensive, and time-consuming. Consequently, there is a critical need for an automated decision-making process to enhance cancer detection and diagnosis. Previous surveys of cancer detection methods have mostly focused on specific cancers and a limited set of techniques. This study presents a comprehensive survey of cancer detection methods. It entails a review of 99 research articles collected from the Web of Science, IEEE, and Scopus databases, published between 2020 and 2024. The scope of the study encompasses 12 types of cancer, including breast, cervical, ovarian, prostate, esophageal, liver, pancreatic, colon, lung, oral, brain, and skin cancers. This study discusses different cancer detection techniques, including medical imaging data, image preprocessing, segmentation, feature extraction, deep learning and transfer learning methods, and evaluation metrics. We then summarise the datasets and techniques along with research challenges and limitations. Finally, we provide future directions for enhancing cancer detection techniques.
Affiliation(s)
- Istiak Ahmad: Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; School of Information and Communication Technology, Griffith University, Queensland 4111, Australia
- Fahad Alqurashi: Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
6
Vardasca R, Mendes JG, Magalhaes C. Skin Cancer Image Classification Using Artificial Intelligence Strategies: A Systematic Review. J Imaging 2024; 10:265. [PMID: 39590729] [PMCID: PMC11595075] [DOI: 10.3390/jimaging10110265]
Abstract
The increasing incidence of and resulting deaths associated with malignant skin tumors are a public health problem that can be minimized if detection strategies are improved. Currently, diagnosis is heavily based on physicians' judgment and experience, which can occasionally lead to the worsening of the lesion or needless biopsies. Several non-invasive imaging modalities, e.g., confocal scanning laser microscopy or multiphoton laser scanning microscopy, have been explored for skin cancer assessment, which have been aligned with different artificial intelligence (AI) strategies to assist in the diagnostic task, based on several image features, thus making the process more reliable and faster. This systematic review concerns the implementation of AI methods for skin tumor classification with different imaging modalities, following the PRISMA guidelines. In total, 206 records were retrieved and qualitatively analyzed. Diagnostic potential was found for several techniques, particularly for dermoscopy images, with strategies yielding classification results close to perfection. Learning approaches based on support vector machines and artificial neural networks seem to be preferred, with a recent focus on convolutional neural networks. Still, detailed descriptions of training/testing conditions are lacking in some reports, hampering reproduction. The use of AI methods in skin cancer diagnosis is an expanding field, with future work aiming to construct optimal learning approaches and strategies. Ultimately, early detection could be optimized, improving patient outcomes, even in areas where healthcare is scarce.
Affiliation(s)
- Ricardo Vardasca: ISLA Santarem, Rua Teixeira Guedes 31, 2000-029 Santarem, Portugal; Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal
- Joaquim Gabriel Mendes: Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal; Faculdade de Engenharia, Universidade do Porto, 4099-002 Porto, Portugal
- Carolina Magalhaes: Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal; Faculdade de Engenharia, Universidade do Porto, 4099-002 Porto, Portugal
7
Anber B, Yurtkan K. Fractional differentiation based image enhancement for automatic detection of malignant melanoma. BMC Med Imaging 2024; 24:231. [PMID: 39223468] [PMCID: PMC11367925] [DOI: 10.1186/s12880-024-01400-7]
Abstract
Recent improvements in artificial intelligence and computer vision make it possible to automatically detect abnormalities in medical images. Skin lesions are one broad class of them. Some types of lesions cause skin cancer, which itself has several types; melanoma is one of the deadliest, and its early diagnosis is of utmost importance. Treatment is greatly aided by artificial intelligence through the quick and precise diagnosis of these conditions. The identification and delineation of boundaries inside skin lesions have shown promise when using basic image processing approaches for edge detection, and further enhancements to edge detection are possible. In this paper, the use of fractional differentiation for improved edge detection is explored in the application of skin lesion detection. A framework based on fractional differential filters for edge detection in skin lesion images is proposed that can improve the automatic detection rate of malignant melanoma. The derived images are used to enhance the input images, and the enhanced images then undergo a classification process based on deep learning. The well-studied HAM10000 dataset is used in the experiments. The system achieves 81.04% accuracy with the EfficientNet model using the proposed fractional-derivative-based enhancements, whereas accuracy is around 77.94% when using the original images. In almost all the experiments, the enhanced images improved the accuracy. The results show that the proposed method improves recognition performance.
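A minimal sketch of fractional-derivative-based enhancement using truncated Grünwald-Letnikov coefficients follows; the filter length, fractional order, blending weight, and the centered application of the kernel are illustrative assumptions, and the paper's exact filter construction is not reproduced.

```python
import numpy as np
from scipy.special import binom
from scipy.ndimage import convolve1d

def gl_fractional_kernel(alpha: float, length: int = 5) -> np.ndarray:
    """Truncated Grünwald-Letnikov coefficients (-1)^k * C(alpha, k), k = 0..length-1."""
    k = np.arange(length)
    return ((-1.0) ** k) * binom(alpha, k)

def fractional_enhance(gray: np.ndarray, alpha: float = 0.5, length: int = 5) -> np.ndarray:
    """Enhance a grayscale image by adding the magnitude of its fractional derivatives.
    The kernel is applied here as a centered filter for simplicity; a one-sided (causal)
    application is closer to the GL definition."""
    w = gl_fractional_kernel(alpha, length)
    dx = convolve1d(gray.astype(float), w, axis=1, mode="reflect")
    dy = convolve1d(gray.astype(float), w, axis=0, mode="reflect")
    edges = np.hypot(dx, dy)
    enhanced = gray + 0.3 * 255.0 * edges / (edges.max() + 1e-8)  # gentle boost of edge detail
    return np.clip(enhanced, 0, 255).astype(np.uint8)

# The enhanced images would then be fed to a CNN classifier (e.g., EfficientNet) as in the paper.
demo = fractional_enhance(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
print(demo.shape, demo.dtype)
```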
Affiliation(s)
- Basmah Anber: Computer Engineering Department, Faculty of Engineering, Cyprus International University, via Mersin10, Nicosia, Northern Cyprus, Turkey
- Kamil Yurtkan: Computer Engineering Department, Faculty of Engineering, Cyprus International University, via Mersin10, Nicosia, Northern Cyprus, Turkey; Artificial Intelligence Application and Research Center, Cyprus International University, via Mersin10, Nicosia, Northern Cyprus, Turkey
8
Ramamurthy K, Thayumanaswamy I, Radhakrishnan M, Won D, Lingaswamy S. Integration of Localized, Contextual, and Hierarchical Features in Deep Learning for Improved Skin Lesion Classification. Diagnostics (Basel) 2024; 14:1338. [PMID: 39001229] [PMCID: PMC11241006] [DOI: 10.3390/diagnostics14131338]
Abstract
Skin lesion classification is vital for the early detection and diagnosis of skin diseases, facilitating timely intervention and treatment. However, existing classification methods face challenges in managing complex information and long-range dependencies in dermoscopic images. Therefore, this research aims to enhance the feature representation by incorporating local, global, and hierarchical features to improve the performance of skin lesion classification. We introduce a novel dual-track deep learning (DL) model for skin lesion classification. The first track utilizes a modified DenseNet-169 architecture that incorporates a Coordinate Attention Module (CoAM). The second track employs a customized convolutional neural network (CNN) comprising a Feature Pyramid Network (FPN) and a Global Context Network (GCN) to capture multiscale features and global contextual information. The local features from the first track and the global features from the second track are used for precise localization and modeling of the long-range dependencies. By leveraging these architectural advancements within the DenseNet framework, the proposed network achieved better performance than previous approaches. The network was trained and validated using the HAM10000 dataset, achieving a classification accuracy of 93.2%.
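A compact sketch of the dual-track idea is shown below: DenseNet-169 features are concatenated with those of a second, smaller CNN before a joint classification head. The CoAM, FPN, and GCN components of the paper are not reproduced; the second track here is a plain stand-in CNN, and the class count assumes HAM10000's seven categories.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DualTrackClassifier(nn.Module):
    """Two-track feature extractor: DenseNet-169 features (first track) concatenated
    with features from a small custom CNN (second track), then a joint linear head."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        densenet = models.densenet169(weights=models.DenseNet169_Weights.DEFAULT)
        self.track1 = nn.Sequential(densenet.features, nn.ReLU(inplace=True),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 1664-D
        self.track2 = nn.Sequential(                                          # lightweight stand-in CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())                             # -> 128-D
        self.head = nn.Linear(1664 + 128, num_classes)

    def forward(self, x):
        return self.head(torch.cat([self.track1(x), self.track2(x)], dim=1))

model = DualTrackClassifier(num_classes=7)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 7])
```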
Affiliation(s)
- Karthik Ramamurthy: Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India
- Illakiya Thayumanaswamy: Department of Computational Intelligence, School of Computing, SRM Institute of Science and Technology, Kattankulathur 603203, India
- Menaka Radhakrishnan: Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India
- Daehan Won: System Sciences and Industrial Engineering, Binghamton University, Binghamton, NY 13902, USA
- Sindhia Lingaswamy: Department of Computer Applications, National Institute of Technology, Tiruchirappalli 620015, India
9
Sun J, Yuan B, Sun Z, Zhu J, Deng Y, Gong Y, Chen Y. MpoxNet: dual-branch deep residual squeeze and excitation monkeypox classification network with attention mechanism. Front Cell Infect Microbiol 2024; 14:1397316. [PMID: 38912211] [PMCID: PMC11190078] [DOI: 10.3389/fcimb.2024.1397316]
Abstract
While the world struggles to recover from the devastation wrought by the widespread transmission of COVID-19, the monkeypox virus has emerged as a new global pandemic threat. In this paper, a high-precision and lightweight classification network, MpoxNet, based on ConvNeXt is proposed to meet the need for fast and reliable monkeypox classification. In this method, a dual-branch depthwise-separable convolution residual Squeeze-and-Excitation module is designed. This design aims to extract more feature information with two branches and greatly reduces the number of parameters in the model by using depthwise-separable convolutions. In addition, our method introduces a convolutional attention module to enhance the extraction of key features within the receptive field. The experimental results show that MpoxNet achieves remarkable results in monkeypox disease classification: the accuracy is 95.28%, the precision is 96.40%, the recall is 93.00%, and the F1-score is 95.80%. This is significantly better than current mainstream classification models. It is worth noting that the FLOPs and the number of parameters of MpoxNet are only 30.68% and 31.87% of those of ConvNeXt-Tiny, indicating that the model has a small computational burden and low model complexity while maintaining efficient performance.
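A hedged sketch of the kind of dual-branch depthwise-separable residual Squeeze-and-Excitation block the abstract describes; channel counts, branch depths, and the fusion convolution are illustrative assumptions rather than the published MpoxNet configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x).unsqueeze(-1).unsqueeze(-1)

def depthwise_separable(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 depthwise conv followed by 1x1 pointwise conv (far fewer parameters than a full conv)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class DualBranchSEResidual(nn.Module):
    """Two parallel depthwise-separable branches whose outputs are fused, re-weighted by SE,
    and added back to the input (residual connection)."""
    def __init__(self, channels: int):
        super().__init__()
        self.branch_a = depthwise_separable(channels, channels)
        self.branch_b = nn.Sequential(depthwise_separable(channels, channels),
                                      depthwise_separable(channels, channels))
        self.fuse = nn.Conv2d(2 * channels, channels, 1, bias=False)
        self.se = SEBlock(channels)

    def forward(self, x):
        y = self.fuse(torch.cat([self.branch_a(x), self.branch_b(x)], dim=1))
        return x + self.se(y)

block = DualBranchSEResidual(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```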
Affiliation(s)
- Jingbo Sun: School of Electronic Information, Xijing University, Xi’an, China; Shaanxi Key Laboratory of Integrated and Intelligent Navigation, The 20th Research Institute of China Electronics Technology Group Corporation, Xi’an, China; Xi’an Key Laboratory of High Precision Industrial Intelligent Vision Measurement Technology, Xijing University, Xi’an, China
- Baoxi Yuan: School of Electronic Information, Xijing University, Xi’an, China; Shaanxi Key Laboratory of Integrated and Intelligent Navigation, The 20th Research Institute of China Electronics Technology Group Corporation, Xi’an, China; Xi’an Key Laboratory of High Precision Industrial Intelligent Vision Measurement Technology, Xijing University, Xi’an, China
- Zhaocheng Sun: School of Electronic Information, Xijing University, Xi’an, China
- Jiajun Zhu: School of Electronic Information, Xijing University, Xi’an, China
- Yuxin Deng: School of Electronic Information, Xijing University, Xi’an, China
- Yi Gong: School of Electronic Information, Xijing University, Xi’an, China
- Yuhe Chen: School of Electronic Information, Xijing University, Xi’an, China
10
Yan T, Chen G, Zhang H, Wang G, Yan Z, Li Y, Xu S, Zhou Q, Shi R, Tian Z, Wang B. Convolutional neural network with parallel convolution scale attention module and ResCBAM for breast histology image classification. Heliyon 2024; 10:e30889. [PMID: 38770292] [PMCID: PMC11103517] [DOI: 10.1016/j.heliyon.2024.e30889]
Abstract
Breast cancer is the most common cause of female morbidity and death worldwide. Compared with other cancers, early detection of breast cancer is more helpful in improving the prognosis of patients. To achieve early diagnosis and treatment, clinical practice requires rapid and accurate diagnosis; therefore, the development of an automatic breast cancer detection system suitable for patient imaging is of great significance for assisting clinical treatment. Accurate classification of pathological images plays a key role in computer-aided medical diagnosis and prognosis. However, in automatic recognition and classification of breast cancer pathological images, limited scale information, loss of image information caused by insufficient feature fusion, and overly large model structures may lead to inaccurate or inefficient classification. To minimize these effects, we propose a lightweight PCSAM-ResCBAM model based on a two-stage convolutional neural network. The model includes a Parallel Convolution Scale Attention Module network (PCSAM-Net) and a Residual Convolutional Block Attention Module network (ResCBAM-Net). The first-level convolutional network is built from a 4-layer PCSAM module to predict and classify patches extracted from images. To optimize the network's ability to represent global image features, we propose a tiled feature fusion method to fuse patch features from the same image, and propose a residual convolutional attention module. Based on the above, the second-level convolutional network is constructed to achieve predictive classification of whole images. We evaluated the performance of our proposed model on the ICIAR2018 dataset and the BreakHis dataset. Furthermore, through model ablation studies, we found that scale attention and dilated convolution play an important role in improving model performance. Our proposed model outperforms existing state-of-the-art models on the 200× and 400× magnification datasets with a maximum accuracy of 98.74%.
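A simplified sketch of the two-stage patch-then-image pipeline with tiled feature fusion; the PCSAM and ResCBAM modules are replaced by plain stand-in CNNs, and the patch size, feature width, and class count are assumptions (four classes as in ICIAR2018).

```python
import torch
import torch.nn as nn

# Stage 1: a small CNN that turns each patch into a feature vector (stand-in for PCSAM-Net).
patch_encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> 64-D per patch

def tiled_feature_map(image: torch.Tensor, patch: int = 128) -> torch.Tensor:
    """Split an image into non-overlapping patches, encode each, and tile the feature
    vectors back into a (C, rows, cols) map that preserves their spatial arrangement."""
    c, h, w = image.shape
    rows, cols = h // patch, w // patch
    patches = image.unfold(1, patch, patch).unfold(2, patch, patch)       # (C, rows, cols, p, p)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)
    feats = patch_encoder(patches)                                         # (rows*cols, 64)
    return feats.T.reshape(64, rows, cols)

# Stage 2: a second classifier over the tiled map (stand-in for ResCBAM-Net).
image_head = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 4))

fused = tiled_feature_map(torch.randn(3, 512, 512))  # (64, 4, 4) tiled patch features
logits = image_head(fused.unsqueeze(0))              # (1, 4)
print(logits.shape)
```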
Affiliation(s)
- Ting Yan: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Guohui Chen: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Huimin Zhang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Guolan Wang: Computer Information Engineering Institute, Shanxi Technology and Business College, Taiyuan, China
- Zhenpeng Yan: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Ying Li: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Songrui Xu: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Qichao Zhou: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Ruyi Shi: Department of Cell Biology and Genetics, Shanxi Medical University, Taiyuan, Shanxi, 030001, China
- Zhi Tian: Second Clinical Medical College, Shanxi Medical University, 382 Wuyi Road, Taiyuan, Shanxi, 030001, China; Department of Orthopedics, The Second Hospital of Shanxi Medical University, Shanxi Key Laboratory of Bone and Soft Tissue Injury Repair, 382 Wuyi Road, Taiyuan, Shanxi, 030001, China
- Bin Wang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
11
Osanloo M, Ranjbar R, Zarenezhad E. Alginate Nanoparticles Containing Cuminum cyminum and Zataria multiflora Essential Oils with Promising Anticancer and Antibacterial Effects. Int J Biomater 2024; 2024:5556838. [PMID: 38725434] [PMCID: PMC11081758] [DOI: 10.1155/2024/5556838]
Abstract
Cancer and bacterial infections are major global health concerns driving the need for innovative medicines. This study investigated alginate nanoparticles loaded with essential oils (EOs) from Cuminum cyminum and Zataria multiflora as potential drug delivery systems. The nanoparticles were comprehensively characterized using techniques such as gas chromatography-mass spectrometry (GC-MS), dynamic light scattering (DLS), zetasizer, attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR), and ultraviolet-visible spectroscopy (UV-Vis). Their biological properties against two human skin cancer cell lines (A-375 and A-431) and three bacteria (Escherichia coli, Pseudomonas aeruginosa, and Staphylococcus aureus) were also evaluated. Alginate nanoparticles containing C. cyminum and Z. multiflora EOs exhibited sizes of 160 ± 8 nm and 151 ± 10 nm, respectively. Their zeta potentials and encapsulation efficiencies were -18 ± 1 mV and 79 ± 4%, as well as -27 ± 2 mV and 86 ± 5%, respectively. The IC50 values against the tested cell lines and bacteria revealed superior efficacy for nanoparticles containing Z. multiflora EO. Considering the proper efficacy of the proposed nanoparticles, the straightforward preparation method and low cost suggest their potential for further in vivo studies.
Affiliation(s)
- Mahmoud Osanloo: Department of Medical Nanotechnology, School of Advanced Technologies in Medicine, Fasa University of Medical Sciences, Fasa, Iran
- Razieh Ranjbar: Department of Medical Biotechnology, School of Advanced Technologies in Medicine, Fasa University of Medical Sciences, Fasa, Iran
- Elham Zarenezhad: Noncommunicable Disease Research Center, Fasa University of Medical Sciences, Fasa, Iran
12
Buruiană A, Şerbănescu MS, Pop B, Gheban BA, Georgiu C, Crişan D, Crişan M. Automated cutaneous squamous cell carcinoma grading using deep learning with transfer learning. Rom J Morphol Embryol 2024; 65:243-250. [PMID: 39020538] [PMCID: PMC11384044] [DOI: 10.47162/rjme.65.2.10]
Abstract
INTRODUCTION Histological grading of cutaneous squamous cell carcinoma (cSCC) is crucial for prognosis and treatment decisions, but manual grading is subjective and time-consuming. AIM This study aimed to develop and validate a deep learning (DL)-based model for automated cSCC grading, potentially improving diagnostic accuracy (ACC) and efficiency. MATERIALS AND METHODS Three deep neural networks (DNNs) with different architectures (AlexNet, GoogLeNet, ResNet-18) were trained using transfer learning on a dataset of 300 histopathological images of cSCC. The models were evaluated on their ACC, sensitivity (SN), specificity (SP), and area under the curve (AUC). Clinical validation was performed on 60 images, comparing the DNNs' predictions with those of a panel of pathologists. RESULTS The models achieved high performance metrics (ACC>85%, SN>85%, SP>92%, AUC>97%) demonstrating their potential for objective and efficient cSCC grading. The high agreement between the DNNs and pathologists, as well as among different network architectures, further supports the reliability and ACC of the DL models. The top-performing models are publicly available, facilitating further research and potential clinical implementation. CONCLUSIONS This study highlights the promising role of DL in enhancing cSCC diagnosis, ultimately improving patient care.
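A minimal transfer-learning sketch in the spirit of this study, fine-tuning a single pretrained backbone (ResNet-18 here) for cSCC grading; the number of grading classes and the training hyperparameters are assumptions, and the authors' AlexNet/GoogLeNet comparisons and clinical validation are not reproduced.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# ResNet-18 pretrained on ImageNet, with the final layer replaced for the grading classes.
num_grades = 3  # assumption: well / moderately / poorly differentiated cSCC
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():          # freeze the backbone; train only the new head
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_grades)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of histopathological tiles.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, num_grades, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```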
Affiliation(s)
- Alexandra Buruiană: Department of Medical Informatics and Biostatistics, University of Medicine and Pharmacy of Craiova, Romania
13
Myslicka M, Kawala-Sterniuk A, Bryniarska A, Sudol A, Podpora M, Gasz R, Martinek R, Kahankova Vilimkova R, Vilimek D, Pelc M, Mikolajewski D. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch Dermatol Res 2024; 316:99. [PMID: 38446274] [DOI: 10.1007/s00403-024-02828-1]
Abstract
This paper presents the most current and innovative solutions applying modern digital image processing methods for the purpose of skin cancer diagnostics. Skin cancer is one of the most common types of cancer: it is estimated that in the USA alone, one in five people will develop skin cancer, and this trend is constantly increasing. Implementation of new, non-invasive methods plays a crucial role in both the identification and the prevention of skin cancer. Early diagnosis and treatment are needed in order to decrease the number of deaths due to this disease. This paper also contains information regarding the most common skin cancer types and mortality and epidemiological data for Poland, Europe, Canada and the USA. It also covers the most efficient and modern image recognition methods based on artificial intelligence that are currently applied for diagnostic purposes. Both professional, sophisticated solutions and inexpensive ones are presented. This review covers solutions and statistics from the period 2017 to 2022. The authors decided to focus on the latest data, mostly due to rapid technological development and the increasing number of new methods, which positively affect diagnosis and prognosis.
Affiliation(s)
- Maria Myslicka: Faculty of Medicine, Wroclaw Medical University, J. Mikulicza-Radeckiego 5, 50-345, Wroclaw, Poland
- Aleksandra Kawala-Sterniuk: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Anna Bryniarska: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Adam Sudol: Faculty of Natural Sciences and Technology, University of Opole, Dmowskiego 7-9, 45-368, Opole, Poland
- Michal Podpora: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Rafal Gasz: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Radek Martinek: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland; Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Radana Kahankova Vilimkova: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland; Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Dominik Vilimek: Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Mariusz Pelc: Institute of Computer Science, University of Opole, Oleska 48, 45-052, Opole, Poland; School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, SE10 9LS, London, UK
- Dariusz Mikolajewski: Institute of Computer Science, Kazimierz Wielki University in Bydgoszcz, ul. Kopernika 1, 85-074, Bydgoszcz, Poland; Neuropsychological Research Unit, 2nd Clinic of the Psychiatry and Psychiatric Rehabilitation, Medical University in Lublin, Gluska 1, 20-439, Lublin, Poland
14
Desale RP, Patil PS. An efficient multi-class classification of skin cancer using optimized vision transformer. Med Biol Eng Comput 2024; 62:773-789. [PMID: 37996627] [DOI: 10.1007/s11517-023-02969-x]
Abstract
Skin cancer is a pervasive and deadly disease, prompting a surge in research efforts towards utilizing computer-based techniques to analyze skin lesion images and identify malignancies. This paper introduces an optimized vision transformer approach for effectively classifying skin tumors. The methodology begins with a pre-processing step aimed at preserving color constancy, eliminating hair artifacts, and reducing image noise; a combination of techniques such as piecewise linear bottom-hat filtering, adaptive median filtering, Gaussian filtering, and an enhanced gradient intensity method is used here. Afterwards, the segmentation phase is initiated using the self-sparse watershed algorithm on the pre-processed image. Subsequently, the segmented image is passed through a feature extraction stage where the hybrid Walsh-Hadamard Karhunen-Loeve expansion technique is employed. The final step involves the application of an improved vision transformer for skin cancer classification. The entire methodology is implemented using the Python programming language, and the International Skin Imaging Collaboration (ISIC) 2019 database is utilized for experimentation. The experimental results demonstrate remarkable performance across the different metrics: accuracy 99.81%, precision 96.65%, sensitivity 98.21%, F-measure 97.42%, specificity 99.88%, recall 98.21%, Jaccard coefficient 98.54%, and Matthews correlation coefficient (MCC) 98.89%. The proposed methodology outperforms existing methodologies.
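A rough sketch of the hair-removal and denoising portion of the pre-processing stage using standard OpenCV operations (black-hat filtering plus inpainting, then median and Gaussian filtering); kernel sizes and thresholds are assumptions, and the paper's piecewise-linear bottom-hat and enhanced gradient intensity steps, segmentation, and vision transformer are not reproduced. The image path is a placeholder.

```python
import cv2
import numpy as np

def preprocess_lesion(bgr: np.ndarray) -> np.ndarray:
    """Hair removal via morphological black-hat + inpainting, then denoising.
    This approximates the paper's pipeline with standard OpenCV operations."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Black-hat filtering highlights thin dark structures (hairs) against the skin.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

    # Threshold the hair mask and paint the hair pixels from their surroundings.
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    hairless = cv2.inpaint(bgr, hair_mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

    # Noise reduction before segmentation/classification.
    denoised = cv2.medianBlur(hairless, 5)
    return cv2.GaussianBlur(denoised, (5, 5), 0)

image = cv2.imread("ISIC_0000000.jpg")  # hypothetical ISIC 2019 image path
if image is not None:
    cv2.imwrite("ISIC_0000000_clean.png", preprocess_lesion(image))
```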
Affiliation(s)
- R P Desale: E&TC Engineering Department, SSVPS's Bapusaheb Shivajirao Deore College of Engineering, Dhule, Maharashtra, 424005, India
- P S Patil: E&TC Engineering Department, SSVPS's Bapusaheb Shivajirao Deore College of Engineering, Dhule, Maharashtra, 424005, India
15
Veeramani N, Jayaraman P, Krishankumar R, Ravichandran KS, Gandomi AH. DDCNN-F: double decker convolutional neural network 'F' feature fusion as a medical image classification framework. Sci Rep 2024; 14:676. [PMID: 38182607] [PMCID: PMC10770172] [DOI: 10.1038/s41598-023-49721-x]
Abstract
Melanoma is a severe skin cancer that involves abnormal cell development. This study aims to provide a new feature fusion framework for melanoma classification that includes a novel 'F' flag feature for early detection. This novel 'F' indicator efficiently distinguishes benign skin lesions from malignant ones (melanoma). The article proposes an architecture built on a Double Decker Convolutional Neural Network, called DDCNN feature fusion. The network's first deck, a convolutional neural network (CNN), finds difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hirsute image samples are combined to form a Baseline Separated Channel (BSC). By eliminating hair and using data augmentation techniques, the BSC is prepared for analysis. The network's second deck is trained on the pre-processed BSC and generates bottleneck features. The bottleneck features are merged with features generated from the ABCDE clinical bio-indicators to promote classification accuracy. The resulting hybrid fused features, together with the novel 'F' flag feature, are fed to different types of classifiers. The proposed system was trained using the ISIC 2019 and ISIC 2020 datasets to assess its performance. The empirical findings show that the DDCNN feature fusion strategy for detecting malignant melanoma achieved a specificity of 98.4%, accuracy of 93.75%, precision of 98.56%, and Area Under the Curve (AUC) value of 0.98. This study proposes a novel approach that can accurately identify and diagnose fatal skin cancer and outperform other state-of-the-art techniques, which is attributed to the DDCNN 'F' feature fusion framework. The research also ascertained improvements in several classifiers when utilising the 'F' indicator, yielding an improvement in specificity of up to 7.34%.
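A minimal sketch of fusing CNN bottleneck features with handcrafted ABCDE-style clinical scores before a conventional classifier; the 'F' flag, the intra-class variance scoring, and the DDCNN itself are not reproduced, and all arrays here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: 256-D CNN bottleneck features per lesion and 5 handcrafted
# ABCDE-style scores (asymmetry, border, colour, diameter, evolution).
rng = np.random.default_rng(42)
n = 300
cnn_bottleneck = rng.normal(size=(n, 256))
abcde_scores = rng.uniform(size=(n, 5))
labels = rng.integers(0, 2, size=n)  # 0 = benign, 1 = melanoma

# Scale the two feature families separately, then fuse by concatenation.
fused = np.hstack([
    StandardScaler().fit_transform(cnn_bottleneck),
    StandardScaler().fit_transform(abcde_scores),
])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```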
Affiliation(s)
- Nirmala Veeramani: School of Computing, SASTRA Deemed to Be University, Thanjavur, India
- Raghunathan Krishankumar: Information Technology Systems and Analytics Area, Indian Institute of Management Bodh Gaya, Bodh Gaya, Bihar, 824234, India
- Amir H Gandomi: Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW, Australia; University Research and Innovation Center (EKIK), Obuda University, Budapest, Hungary
16
Sanga P, Singh J, Dubey AK, Khanna NN, Laird JR, Faa G, Singh IM, Tsoulfas G, Kalra MK, Teji JS, Al-Maini M, Rathore V, Agarwal V, Ahluwalia P, Fouda MM, Saba L, Suri JS. DermAI 1.0: A Robust, Generalized, and Novel Attention-Enabled Ensemble-Based Transfer Learning Paradigm for Multiclass Classification of Skin Lesion Images. Diagnostics (Basel) 2023; 13:3159. [PMID: 37835902] [PMCID: PMC10573070] [DOI: 10.3390/diagnostics13193159]
Abstract
Skin lesion classification plays a crucial role in dermatology, aiding in the early detection, diagnosis, and management of life-threatening malignant lesions. However, standalone transfer learning (TL) models failed to deliver optimal performance. In this study, we present an attention-enabled ensemble-based deep learning technique, a powerful, novel, and generalized method for extracting features for the classification of skin lesions. This technique holds significant promise in enhancing diagnostic accuracy by using seven pre-trained TL models for classification. Six ensemble-based DL (EBDL) models were created using stacking, softmax voting, and weighted average techniques. Furthermore, we investigated the attention mechanism as an effective paradigm and created seven attention-enabled transfer learning (aeTL) models before branching out to construct three attention-enabled ensemble-based DL (aeEBDL) models to create a reliable, adaptive, and generalized paradigm. The mean accuracy of the TL models is 95.30%, and the use of an ensemble-based paradigm increased it by 4.22%, to 99.52%. The aeTL models' performance was superior to the TL models in accuracy by 3.01%, and aeEBDL models outperformed aeTL models by 1.29%. Statistical tests show significant p-value and Kappa coefficient along with a 99.6% reliability index for the aeEBDL models. The approach is highly effective and generalized for the classification of skin lesions.
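A small sketch of the softmax-voting / weighted-average ensembling the abstract mentions, using three torchvision backbones as stand-ins for the paper's seven transfer-learning models; the member weights and the class count are assumptions, and the attention-enabled and stacking variants are not reproduced.

```python
import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 7  # e.g., the seven HAM10000 lesion categories

# Three pre-trained backbones with their heads replaced for skin-lesion classes.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
densenet.classifier = nn.Linear(densenet.classifier.in_features, num_classes)

mobilenet = models.mobilenet_v3_large(weights=models.MobileNet_V3_Large_Weights.DEFAULT)
mobilenet.classifier[-1] = nn.Linear(mobilenet.classifier[-1].in_features, num_classes)

ensemble, weights = [resnet, densenet, mobilenet], torch.tensor([0.4, 0.35, 0.25])

@torch.no_grad()
def weighted_softmax_vote(batch: torch.Tensor) -> torch.Tensor:
    """Weighted average of the members' softmax outputs (one of the fusion rules described)."""
    probs = torch.stack([torch.softmax(m.eval()(batch), dim=1) for m in ensemble])
    return (weights.view(-1, 1, 1) * probs).sum(dim=0)

predictions = weighted_softmax_vote(torch.randn(2, 3, 224, 224)).argmax(dim=1)
print(predictions)
```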
Affiliation(s)
- Prabhav Sanga: Department of Information Technology, Bharati Vidyapeeth’s College of Engineering, New Delhi 110063, India; Global Biomedical Technologies, Inc., Roseville, CA 95661, USA
- Jaskaran Singh: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Arun Kumar Dubey: Department of Information Technology, Bharati Vidyapeeth’s College of Engineering, New Delhi 110063, India
- Narendra N. Khanna: Department of Cardiology, Indraprastha Apollo Hospitals, New Delhi 110076, India
- John R. Laird: Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Gavino Faa: Department of Pathology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Inder M. Singh: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Georgios Tsoulfas: Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece
- Mannudeep K. Kalra: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Jagjit S. Teji: Department of Pediatrics, Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
- Mustafa Al-Maini: Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
- Vijay Rathore: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Vikas Agarwal: Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Puneet Ahluwalia: Department of Uro Oncology, Medanta the Medicity, Gurugram 122001, India
- Mostafa M. Fouda: Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Jasjit S. Suri: Global Biomedical Technologies, Inc., Roseville, CA 95661, USA; Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA; Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA; Department of Computer Science and Engineering, Graphic Era University (G.E.U.), Dehradun 248002, India
17
Aydin Y. A Comparative Analysis of Skin Cancer Detection Applications Using Histogram-Based Local Descriptors. Diagnostics (Basel) 2023; 13:3142. [PMID: 37835884] [PMCID: PMC10572674] [DOI: 10.3390/diagnostics13193142]
Abstract
Among the most serious types of cancer is skin cancer. Despite the risk of death, when caught early, the rate of survival is greater than 95%. This inspires researchers to explore methods that allow for the early detection of skin cancer that could save millions of lives. The ability to detect the early signs of skin cancer has become more urgent in light of the rising number of illnesses, the high death rate, and costly healthcare treatments. Given the gravity of these issues, experts have created a number of existing approaches for detecting skin cancer. Identifying skin cancer and whether it is benign or malignant involves detecting features of the lesions such as size, form, symmetry, color, etc. The aim of this study is to determine the most successful skin cancer detection methods by comparing the outcomes and effectiveness of the various applications that categorize benign and malignant forms of skin cancer. Descriptors such as the Local Binary Pattern (LBP), the Local Directional Number Pattern (LDN), the Pyramid of Histogram of Oriented Gradients (PHOG), the Local Directional Pattern (LDiP), and Monogenic Binary Coding (MBC) are used to extract the necessary features. Support vector machines (SVM) and XGBoost are used in the classification process. In addition, this study uses colored histogram-based features to classify the various characteristics obtained from the color images. In the experimental results, the applications implemented with the proposed color histogram-based features were observed to be more successful. Under the proposed method (the colored LDN feature obtained using the YCbCr color space with the XGBoost classifier), a 90% accuracy rate was achieved on Dataset 1, which was obtained from the Kaggle website. For the HAM10000 data set, an accuracy rate of 96.50% was achieved under a similar proposed method (the colored MBC feature obtained using the HSV color space with the XGBoost classifier).
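A hedged sketch of the colored histogram-based descriptor idea: uniform LBP histograms are computed per YCbCr channel and classified with XGBoost. The LDN, LDiP, PHOG, and MBC descriptors evaluated in the study are not reproduced, and the data below are random placeholders.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from xgboost import XGBClassifier

def colored_lbp_histogram(bgr: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP histogram computed per channel of the YCbCr image and concatenated,
    mirroring the 'colored descriptor' idea with LBP standing in for LDN/MBC."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    bins = points + 2  # number of uniform LBP codes
    hists = []
    for ch in cv2.split(ycrcb):
        codes = local_binary_pattern(ch, points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)

# Hypothetical dataset of lesion images with benign/malignant labels.
images = [np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8) for _ in range(40)]
labels = np.random.randint(0, 2, size=40)

X = np.stack([colored_lbp_histogram(img) for img in images])
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```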
Affiliation(s)
- Yildiz Aydin: Department of Computer Engineering, Erzincan Binali Yildirim University, Erzincan 24000, Turkey
18
Patel RH, Foltz EA, Witkowski A, Ludzik J. Analysis of Artificial Intelligence-Based Approaches Applied to Non-Invasive Imaging for Early Detection of Melanoma: A Systematic Review. Cancers (Basel) 2023; 15:4694. [PMID: 37835388] [PMCID: PMC10571810] [DOI: 10.3390/cancers15194694]
Abstract
BACKGROUND Melanoma, the deadliest form of skin cancer, poses a significant public health challenge worldwide. Early detection is crucial for improved patient outcomes. Non-invasive skin imaging techniques allow for improved diagnostic accuracy; however, their use is often limited due to the need for skilled practitioners trained to interpret images in a standardized fashion. Recent innovations in artificial intelligence (AI)-based techniques for skin lesion image interpretation show potential for the use of AI in the early detection of melanoma. OBJECTIVE The aim of this study was to evaluate the current state of AI-based techniques used in combination with non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM), optical coherence tomography (OCT), and dermoscopy. We also aimed to determine whether the application of AI-based techniques can lead to improved diagnostic accuracy of melanoma. METHODS A systematic search was conducted via the Medline/PubMed, Cochrane, and Embase databases for eligible publications between 2018 and 2022. Screening methods adhered to the 2020 version of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Included studies utilized AI-based algorithms for melanoma detection and directly addressed the review objectives. RESULTS We retrieved 40 papers amongst the three databases. All studies directly comparing the performance of AI-based techniques with dermatologists reported the superior or equivalent performance of AI-based techniques in improving the detection of melanoma. In studies directly comparing algorithm performance on dermoscopy images to dermatologists, AI-based algorithms achieved a higher ROC (>80%) in the detection of melanoma. In these comparative studies using dermoscopic images, the mean algorithm sensitivity was 83.01% and the mean algorithm specificity was 85.58%. Studies evaluating machine learning in conjunction with OCT boasted accuracy of 95%, while studies evaluating RCM reported a mean accuracy rate of 82.72%. CONCLUSIONS Our results demonstrate the robust potential of AI-based techniques to improve diagnostic accuracy and patient outcomes through the early identification of melanoma. Further studies are needed to assess the generalizability of these AI-based techniques across different populations and skin types, improve standardization in image processing, and further compare the performance of AI-based techniques with board-certified dermatologists to evaluate clinical applicability.
Affiliation(s)
- Raj H. Patel: Edward Via College of Osteopathic Medicine, VCOM-Louisiana, 4408 Bon Aire Dr, Monroe, LA 71203, USA; Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Emilie A. Foltz: Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA; Elson S. Floyd College of Medicine, Washington State University, Spokane, WA 99202, USA
- Alexander Witkowski: Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Joanna Ludzik: Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
19
Mehmood A, Gulzar Y, Ilyas QM, Jabbari A, Ahmad M, Iqbal S. SBXception: A Shallower and Broader Xception Architecture for Efficient Classification of Skin Lesions. Cancers (Basel) 2023; 15:3604. [PMID: 37509267] [PMCID: PMC10377736] [DOI: 10.3390/cancers15143604]
Abstract
Skin cancer is a major public health concern around the world. Skin cancer identification is critical for effective treatment and improved results. Deep learning models have shown considerable promise in assisting dermatologists in skin cancer diagnosis. This study proposes SBXception: a shallower and broader variant of the Xception network. It uses Xception as the base model for skin cancer classification and increases its performance by reducing the depth and expanding the breadth of the architecture. We used the HAM10000 dataset, which contains 10,015 dermatoscopic images of skin lesions classified into seven categories, for training and testing the proposed model. Using the HAM10000 dataset, we fine-tuned the new model and reached an accuracy of 96.97% on a holdout test set. SBXception also achieved significant performance enhancement with 54.27% fewer training parameters and reduced training time compared to the base model. Our findings show that reducing and expanding the Xception model architecture can greatly improve its performance in skin cancer categorization.
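A baseline sketch of Xception-based transfer learning for seven-class lesion classification; the SBXception modifications (reduced depth, expanded breadth) are not reproduced, so this is only the stock Keras Xception with a new classification head, and the training data below are random placeholders.

```python
import tensorflow as tf

num_classes = 7  # HAM10000's seven lesion categories

# Standard Xception backbone used as a fine-tuning baseline.
base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # first stage: train only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Illustrative fit on random data; real use would stream HAM10000 images resized to 299x299.
x = tf.random.uniform((16, 299, 299, 3))
y = tf.random.uniform((16,), maxval=num_classes, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)
model.summary()
```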
Affiliation(s)
- Abid Mehmood
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
| | - Yonis Gulzar
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
| | - Qazi Mudassar Ilyas
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
| | - Abdoh Jabbari
- College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia
| | - Muneer Ahmad
- Department of Human and Digital Interface, Woosong University, Daejeon 34606, Republic of Korea
| | - Sajid Iqbal
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
| |
20
Naqvi M, Gilani SQ, Syed T, Marques O, Kim HC. Skin Cancer Detection Using Deep Learning-A Review. Diagnostics (Basel) 2023; 13:1911. [PMID: 37296763 PMCID: PMC10252190 DOI: 10.3390/diagnostics13111911] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 05/25/2023] [Accepted: 05/26/2023] [Indexed: 06/12/2023] Open
Abstract
Skin cancer is one of the most dangerous types of cancer and one of the primary causes of death worldwide. The number of deaths can be reduced if skin cancer is diagnosed early. Skin cancer is mostly diagnosed using visual inspection, which is less accurate. Deep-learning-based methods have been proposed to assist dermatologists in the early and accurate diagnosis of skin cancers. This survey reviewed the most recent research articles on skin cancer classification using deep learning methods. We also provided an overview of the most common deep-learning models and datasets used for skin cancer classification.
Affiliation(s)
- Maryam Naqvi
- Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
| | - Syed Qasim Gilani
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Tehreem Syed
- Department of Electrical Engineering and Computer Engineering, Technische Universität Dresden, 01069 Dresden, Germany
| | - Oge Marques
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Hee-Cheol Kim
- Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
| |
21
Ren Z, Li X, Pietralla D, Manassi M, Whitney D. Serial Dependence in Dermatological Judgments. Diagnostics (Basel) 2023; 13:1775. [PMID: 37238260 PMCID: PMC10217324 DOI: 10.3390/diagnostics13101775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 05/09/2023] [Accepted: 05/13/2023] [Indexed: 05/28/2023] Open
Abstract
Serial dependence is a ubiquitous visual phenomenon in which sequentially viewed images appear more similar than they actually are, facilitating an efficient and stable perceptual experience in human observers. Although serial dependence is adaptive and beneficial in the naturally autocorrelated visual world, smoothing perceptual experience, it can turn maladaptive in artificial circumstances, such as medical image perception tasks, where visual stimuli are randomly sequenced. Here, we analyzed 758,139 skin cancer diagnostic records from an online app, and we quantified the semantic similarity between sequential dermatology images using a computer vision model as well as human raters. We then tested whether serial dependence occurs in dermatological judgments as a function of image similarity. We found significant serial dependence in perceptual discrimination judgments of lesion malignancy. Moreover, the serial dependence was tuned to the similarity of the images, and it decayed over time. The results indicate that relatively realistic store-and-forward dermatology judgments may be biased by serial dependence. These findings help in understanding one potential source of systematic bias and error in medical image perception tasks and hint at approaches that could alleviate errors due to serial dependence.
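The analysis logic described above can be illustrated with a small sketch: compute the similarity of each image to the one shown just before it, then test whether the current rating's error is pulled toward the previous stimulus, and whether that pull grows with similarity. The embeddings, ground-truth scores, and ratings below are synthetic placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-trial image embeddings, true malignancy scores, and observer ratings.
n_trials, dim = 500, 128
embeddings = rng.normal(size=(n_trials, dim))
truth = rng.uniform(0, 1, n_trials)
ratings = truth + 0.1 * rng.normal(size=n_trials)   # noisy observer responses

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity between each image and the one shown just before it.
similarity = np.array([cosine(embeddings[t], embeddings[t - 1]) for t in range(1, n_trials)])

# Serial-dependence signature: current response error vs. how the previous stimulus differed.
error = ratings[1:] - truth[1:]
prev_offset = truth[:-1] - truth[1:]
slope = np.polyfit(prev_offset, error, 1)[0]   # positive slope -> pull toward the previous trial

# Is the pull stronger when consecutive images are more similar?
high = similarity > np.median(similarity)
slope_high = np.polyfit(prev_offset[high], error[high], 1)[0]
slope_low = np.polyfit(prev_offset[~high], error[~high], 1)[0]
print(f"overall slope={slope:.3f}, similar pairs={slope_high:.3f}, dissimilar pairs={slope_low:.3f}")
```

With the random placeholders the slopes hover near zero; the point is only to show the shape of the similarity-conditioned regression the abstract describes.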
Affiliation(s)
- Zhihang Ren
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA 94720, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
| | - Xinyu Li
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
| | - Dana Pietralla
- Institute of Sociology and Social Psychology, University of Cologne, Albertus-Magnus-Platz, D-50923 Cologne, Germany
| | - Mauro Manassi
- School of Psychology, King’s College, University of Aberdeen, Aberdeen AB24 3FX, UK
| | - David Whitney
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA 94720, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
| |
22
Almufareh MF, Tehsin S, Humayun M, Kausar S. A Transfer Learning Approach for Clinical Detection Support of Monkeypox Skin Lesions. Diagnostics (Basel) 2023; 13:diagnostics13081503. [PMID: 37189603 DOI: 10.3390/diagnostics13081503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Revised: 04/13/2023] [Accepted: 04/18/2023] [Indexed: 05/17/2023] Open
Abstract
Monkeypox (MPX) is a contagious disease caused by the monkeypox virus (MPXV), with associated symptoms of skin lesions, rashes, fever, respiratory distress, and lymph swelling, along with numerous neurological complications. It can be deadly, and the latest outbreak has spread to Europe, Australia, the United States, and Africa. Typically, diagnosis of MPX is performed through PCR by taking a sample of the skin lesion. This procedure is risky for medical staff, as they can be exposed to MPXV during sample collection, transport, and testing, and the infection can thus be transmitted to them. In the current era, cutting-edge technologies such as IoT and artificial intelligence (AI) have made the diagnostic process smart and secure. IoT devices such as wearables and sensors permit seamless data collection, while AI techniques utilize the data in disease diagnosis. In view of the importance of these cutting-edge technologies, this paper presents a non-invasive, non-contact, computer-vision-based method for the diagnosis of MPX by analyzing skin lesion images, an approach that is smarter and more secure than traditional diagnostic methods. The proposed methodology employs deep learning techniques to classify skin lesions as MPXV positive or not. Two datasets, the Kaggle Monkeypox Skin Lesion Dataset (MSLD) and the Monkeypox Skin Image Dataset (MSID), are used to evaluate the proposed methodology. Multiple deep learning models were evaluated using sensitivity, specificity, and balanced accuracy. The proposed method yielded highly promising results, demonstrating its potential for wide-scale deployment in detecting monkeypox. This smart and cost-effective solution can be effectively utilized in underprivileged areas where laboratory infrastructure may be lacking.
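A generic transfer-learning setup of the kind described, replacing the head of an ImageNet-pretrained backbone for a binary MPX-positive/negative decision and scoring it with balanced accuracy, might look roughly like the following PyTorch sketch; the choice of ResNet-18 and the frozen feature extractor are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the head for a
# binary MPX-positive / MPX-negative decision (training loop omitted).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity, one of the metrics the paper reports."""
    y_true, y_pred = torch.as_tensor(y_true), torch.as_tensor(y_pred)
    tp = ((y_pred == 1) & (y_true == 1)).sum().item()
    tn = ((y_pred == 0) & (y_true == 0)).sum().item()
    fp = ((y_pred == 1) & (y_true == 0)).sum().item()
    fn = ((y_pred == 0) & (y_true == 1)).sum().item()
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return 0.5 * (sensitivity + specificity)

print(balanced_accuracy([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))  # 0.833...
```

In practice the frozen layers can later be unfrozen and fine-tuned at a lower learning rate once the new head has converged.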
Affiliation(s)
- Maram Fahaad Almufareh
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah 72388, Saudi Arabia
| | - Samabia Tehsin
- Department of Computer Science, Bahria University, Islamabad 44220, Pakistan
| | - Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah 72388, Saudi Arabia
| | - Sumaira Kausar
- Department of Computer Science, Bahria University, Islamabad 44220, Pakistan
| |
23
Multi-Models of Analyzing Dermoscopy Images for Early Detection of Multi-Class Skin Lesions Based on Fused Features. Processes (Basel) 2023. [DOI: 10.3390/pr11030910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/19/2023] Open
Abstract
Melanoma is a life-threatening cancer that can lead to death. Effective detection of skin lesion types from images is a challenging task. Dermoscopy is an effective technique for detecting skin lesions, and early diagnosis of skin cancer is essential for proper treatment. Skin lesions are similar in their early stages, so manual diagnosis is difficult. Thus, artificial intelligence techniques can analyze images of skin lesions and discover hidden features not seen by the naked eye. This study developed hybrid techniques based on fused features to effectively analyze dermoscopic images and classify two skin lesion datasets, HAM10000 and PH2. The images were optimized for all techniques, and the class imbalance in the two datasets was resolved. The HAM10000 and PH2 datasets were classified by pre-trained MobileNet and ResNet101 models. For effective detection of early-stage skin lesions, the hybrid techniques SVM-MobileNet, SVM-ResNet101 and SVM-MobileNet-ResNet101 were applied; these showed better performance than the pre-trained CNN models owing to the effectiveness of the handcrafted features that capture color, texture and shape. The handcrafted features were then combined with the features of the MobileNet and ResNet101 models to form combined feature vectors with higher discriminative power. Finally, the MobileNet-handcrafted and ResNet101-handcrafted features were fed to an ANN for classification with high accuracy. For the HAM10000 dataset, the ANN with MobileNet and handcrafted features achieved an AUC of 97.53%, accuracy of 98.4%, sensitivity of 94.46%, precision of 93.44% and specificity of 99.43%. Using the same technique, the PH2 dataset achieved 100% for all metrics.
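The feature fusion described here can be sketched as a simple concatenation of deep and handcrafted features followed by an SVM or ANN classifier. The feature arrays below are random placeholders standing in for MobileNet/ResNet101 embeddings and color/texture/shape descriptors; the dimensions and classifier settings are assumptions, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Placeholders for features that would come from a pretrained CNN (e.g. MobileNet)
# and from handcrafted color/texture/shape descriptors extracted from each image.
n = 600
cnn_features = rng.normal(size=(n, 1280))
handcrafted = rng.normal(size=(n, 96))
labels = rng.integers(0, 2, size=n)

# Serial fusion: concatenate the two feature blocks per image.
fused = np.hstack([cnn_features, handcrafted])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)                                        # CNN-features + SVM variant
ann = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(X_tr, y_tr)   # fused-features + ANN variant

print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("ANN accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```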
24
Baig AR, Abbas Q, Almakki R, Ibrahim MEA, AlSuwaidan L, Ahmed AES. Light-Dermo: A Lightweight Pretrained Convolution Neural Network for the Diagnosis of Multiclass Skin Lesions. Diagnostics (Basel) 2023; 13:385. [PMID: 36766490 PMCID: PMC9914027 DOI: 10.3390/diagnostics13030385] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 01/16/2023] [Accepted: 01/18/2023] [Indexed: 01/22/2023] Open
Abstract
Skin cancer develops due to the unusual growth of skin cells. Early detection is critical for the recognition of multiclass pigmented skin lesions (PSLs). At an early stage, manual recognition of PSLs by dermatologists is time-consuming. Therefore, several "computer-aided diagnosis (CAD)" systems have been developed using image processing, machine learning (ML), and deep learning (DL) techniques. Deep-CNN models outperform traditional ML approaches in extracting complex features from PSLs. In this study, a transfer learning (TL)-based CNN model is suggested for the diagnosis of seven classes of PSLs. A novel approach (Light-Dermo) is developed that is based on a lightweight CNN model and applies a channelwise attention (CA) mechanism with a focus on computational efficiency. The ShuffleNet architecture is chosen as the backbone, and squeeze-and-excitation (SE) blocks are incorporated to enhance the original ShuffleNet architecture. Initially, an accessible dataset with 14,000 images of PSLs from seven classes is used to validate the Light-Dermo model. To increase the size of the dataset and control its imbalance, we applied data augmentation techniques to the seven classes of PSLs, collecting 28,000 images from the HAM10000, ISIC-2019, and ISIC-2020 datasets. The experimental outcomes show that the suggested approach outperforms the compared techniques in many cases. The most accurately trained model has an accuracy of 99.14%, a specificity of 98.20%, a sensitivity of 97.45%, and an F1-score of 98.1%, with fewer parameters than state-of-the-art DL models. The experimental results show that Light-Dermo assists dermatologists in the better diagnosis of PSLs. The Light-Dermo code is publicly available on GitHub so that researchers can use and improve it.
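The channelwise attention mechanism named here, a squeeze-and-excitation block, can be sketched as follows in PyTorch. The 232-channel feature map and reduction ratio are illustrative values; in a full Light-Dermo-style model such blocks would sit inside the ShuffleNet stages rather than stand alone.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel-wise attention: squeeze to per-channel statistics, then re-weight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                    # squeeze
        self.fc = nn.Sequential(                               # excite
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                                     # re-weighted feature map

# Example: re-weight a ShuffleNet-like feature map of 232 channels.
feature_map = torch.randn(4, 232, 14, 14)
se = SqueezeExcite(232)
print(se(feature_map).shape)   # torch.Size([4, 232, 14, 14])
```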
Affiliation(s)
- Abdul Rauf Baig
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
| |
25
A Deep-Learning-Based Artificial Intelligence System for the Pathology Diagnosis of Uterine Smooth Muscle Tumor. LIFE (BASEL, SWITZERLAND) 2022; 13:life13010003. [PMID: 36675952 PMCID: PMC9864148 DOI: 10.3390/life13010003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 12/09/2022] [Accepted: 12/15/2022] [Indexed: 12/24/2022]
Abstract
We aimed to develop an artificial intelligence (AI) diagnosis system for uterine smooth muscle tumors (UMTs) by using deep learning. We analyzed the morphological features of UMTs on whole-slide images (233, 108, and 30 digital slides of leiomyosarcomas, leiomyomas, and smooth muscle tumors of uncertain malignant potential stained with hematoxylin and eosin, respectively). Aperio ImageScope software randomly selected ≥10 areas of the total field of view. Pathologists randomly selected a marked region in each section that was no smaller than the total area of 10 high-power fields in which necrotic, vascular, collagenous, and mitotic areas were labeled. We constructed an automatic identification algorithm for cytological atypia and necrosis by using ResNet and constructed an automatic detection algorithm for mitosis by using YOLOv5. A logical evaluation algorithm was then designed to obtain an automatic UMT diagnostic aid that can "study and synthesize" a pathologist's experience. The precision, recall, and F1 index reached more than 0.920. The detection network could accurately detect the mitoses (0.913 precision, 0.893 recall). For the prediction ability, the AI system had a precision of 0.90. An AI-assisted system for diagnosing UMTs in routine practice scenarios is feasible and can improve the accuracy and efficiency of diagnosis.
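The paper's "logical evaluation algorithm" combines classifier and detector outputs into a diagnostic call. A toy rule-based combiner in that spirit might look like the following; the thresholds, labels, and decision rules here are hypothetical illustrations, not the published diagnostic criteria.

```python
from dataclasses import dataclass

@dataclass
class RegionFindings:
    atypia_prob: float      # from a ResNet-style atypia classifier
    necrosis_prob: float    # from a ResNet-style necrosis classifier
    mitoses_per_10hpf: int  # from a YOLO-style mitosis detector

def classify_umt(findings: RegionFindings,
                 atypia_thr: float = 0.5,
                 necrosis_thr: float = 0.5,
                 mitosis_thr: int = 10) -> str:
    """Toy rule-based combiner; thresholds and labels are illustrative only."""
    atypia = findings.atypia_prob >= atypia_thr
    necrosis = findings.necrosis_prob >= necrosis_thr
    high_mitoses = findings.mitoses_per_10hpf >= mitosis_thr
    worrisome = sum([atypia, necrosis, high_mitoses])
    if worrisome >= 2:
        return "suspicious for leiomyosarcoma"
    if worrisome == 1:
        return "uncertain malignant potential (STUMP-like)"
    return "consistent with leiomyoma"

print(classify_umt(RegionFindings(0.9, 0.7, 12)))  # suspicious for leiomyosarcoma
```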
26
Pavlidis ET, Pavlidis TE. Diagnostic biopsy of cutaneous melanoma, sentinel lymph node biopsy and indications for lymphadenectomy. World J Clin Oncol 2022; 13:861-865. [PMID: 36337309 PMCID: PMC9630995 DOI: 10.5306/wjco.v13.i10.861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 09/05/2022] [Accepted: 10/11/2022] [Indexed: 02/06/2023] Open
Abstract
The incidence of cutaneous melanoma appears to be increasing worldwide and this is attributed to solar radiation exposure. Early diagnosis is a challenging task. Any clinically suspected lesion must be assessed by complete diagnostic excision biopsy (margins 1-2 mm); however, there are other biopsy techniques that are less commonly used. Melanomas are characterized by Breslow thickness as thin (< 1 mm), intermediate (1-4 mm) and thick (> 4 mm). This thickness determines their biological behavior, therapy, prognosis and survival. If the biopsy is positive, a wide local excision (margins 1-2 cm) is finally performed. However, metastasis to regional lymph nodes is the most accurate prognostic determinant. Therefore, sentinel lymph node biopsy (SLNB) for diagnosed melanoma plays a pivotal role in the management strategy. Complete lymph node clearance has undoubted advantages and is recommended in all cases of positive SLN biopsy. A PET-CT (positron emission tomography-computed tomography) scan is necessary for staging and follow-up after treatment. Novel targeted therapies and immunotherapies have shown improved outcomes in advanced cases.
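For readers implementing decision support around such reports, the Breslow categories quoted in this abstract reduce to a trivial lookup. The function below encodes only those thresholds and is an illustrative sketch, not clinical guidance.

```python
def breslow_category(thickness_mm: float) -> str:
    """Categories as described in the abstract: thin < 1 mm,
    intermediate 1-4 mm, thick > 4 mm."""
    if thickness_mm < 1.0:
        return "thin"
    if thickness_mm <= 4.0:
        return "intermediate"
    return "thick"

for t in (0.6, 2.5, 5.2):
    print(t, "mm ->", breslow_category(t))
```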
Affiliation(s)
- Efstathios T Pavlidis
- 2nd Propedeutic Department of Surgery, Hippocration Hospital, Aristotle University of Thessaloniki, School of Medicine, Thessaloniki 54642, Greece
| | - Theodoros E Pavlidis
- 2nd Propedeutic Department of Surgery, Hippocration Hospital, Aristotle University of Thessaloniki, School of Medicine, Thessaloniki 54642, Greece
| |
27
El-Baz A, Giridharan GA, Shalaby A, Mahmoud AH, Ghazal M. Special Issue "Computer Aided Diagnosis Sensors". SENSORS (BASEL, SWITZERLAND) 2022; 22:8052. [PMID: 36298403 PMCID: PMC9610085 DOI: 10.3390/s22208052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 10/19/2022] [Indexed: 06/16/2023]
Abstract
Sensors used to diagnose, monitor or treat diseases in the medical domain are known as medical sensors [...].
Affiliation(s)
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
| | | | - Ahmed Shalaby
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
| | - Ali H. Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
| | - Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
| |
28
An Efficient Deep Learning-Based Skin Cancer Classifier for an Imbalanced Dataset. Diagnostics (Basel) 2022; 12:diagnostics12092115. [PMID: 36140516 PMCID: PMC9497837 DOI: 10.3390/diagnostics12092115] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 08/24/2022] [Accepted: 08/29/2022] [Indexed: 12/12/2022] Open
Abstract
Efficient skin cancer detection using images is a challenging task in the healthcare domain. In today's medical practice, skin cancer detection is a time-consuming procedure, and a delayed diagnosis may lead to a patient's death in later stages. The diagnosis of skin cancer at an earlier stage is crucial for the success rate of complete cure. Moreover, the number of skilled dermatologists around the globe is not sufficient to meet today's healthcare demands. Large differences in the number of samples across classes in healthcare datasets lead to data imbalance problems. Due to data imbalance issues, deep learning models are often trained on one class more than others. This study proposes a novel deep learning-based skin cancer detector using an imbalanced dataset. Data augmentation was used to balance the various skin cancer classes and overcome the data imbalance. The Skin Cancer MNIST: HAM10000 dataset was employed, which consists of seven classes of skin lesions. Deep learning models are widely used in disease diagnosis through images. Deep learning-based models (AlexNet, InceptionV3, and RegNetY-320) were employed to classify skin cancer. The proposed framework was also tuned with various combinations of hyperparameters. The results show that RegNetY-320 outperformed InceptionV3 and AlexNet in terms of accuracy, F1-score, and receiver operating characteristic (ROC) curve on both the imbalanced and balanced datasets. The performance of the proposed framework was better than that of conventional methods. The accuracy, F1-score, and ROC curve value obtained with the proposed framework were 91%, 88.1%, and 0.95, which were significantly better than those of the state-of-the-art method, which achieved 85%, 69.3%, and 0.90, respectively. Our proposed framework may assist in disease identification, which could save lives, reduce unnecessary biopsies, and reduce costs for patients, dermatologists, and healthcare professionals.
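A simple way to plan augmentation-based balancing of the kind described is to top every minority class up to the size of the majority class. The sketch below uses the commonly cited HAM10000 class counts as an example; the top-up-to-majority strategy is an assumption for illustration, not necessarily the paper's exact scheme.

```python
from collections import Counter

# Per-class image counts for the seven HAM10000 lesion classes (the distribution
# is heavily skewed toward melanocytic nevi, "nv").
counts = Counter({"nv": 6705, "mel": 1113, "bkl": 1099, "bcc": 514,
                  "akiec": 327, "vasc": 142, "df": 115})

target = max(counts.values())
augment_plan = {cls: target - n for cls, n in counts.items()}   # extra images to synthesize

for cls, extra in sorted(augment_plan.items(), key=lambda kv: -kv[1]):
    print(f"{cls}: have {counts[cls]:5d}, generate {extra:5d} augmented samples")
```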