1. Zia A, Husnain M, Buck S, Richetti J, Hulm E, Ral JP, Rolland V, Sirault X. Unlocking chickpea flour potential: AI-powered prediction for quality assessment and compositional characterisation. Curr Res Food Sci 2025;10:101030. PMID: 40231315; PMCID: PMC11995126; DOI: 10.1016/j.crfs.2025.101030.
Abstract
The growing demand for sustainable, nutritious, and environmentally friendly food sources has positioned chickpea flour as a vital component in the global shift to plant-based diets. However, the inherent variability in the composition of chickpea flour, influenced by genetic diversity, environmental conditions, and processing techniques, poses significant challenges to standardisation and quality control. This study explores the integration of deep learning models with near-infrared (NIR) spectroscopy to improve the accuracy and efficiency of chickpea flour quality assessment. Using a dataset comprising 136 chickpea varieties, the research compares the performance of several state-of-the-art deep learning models, including Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Graph Convolutional Networks (GCNs), and benchmarks the most effective model, a CNN, against the traditional Partial Least Squares Regression (PLSR) method. The results demonstrate that CNN-based models outperform PLSR, providing more accurate predictions for key quality attributes such as protein content, starch, soluble sugars, insoluble fibres, total lipids, and moisture levels. The study highlights the potential of AI-enhanced NIR spectroscopy to revolutionise quality assessment in the food industry by offering a non-destructive, rapid, and reliable method for analysing chickpea flour. Despite the challenges posed by the limited dataset, the deep learning models exhibit capabilities suggesting that further advances could make them industrially applicable. This research paves the way for broader applications of AI-driven quality control in food production, contributing to the development of more consistent and high-quality plant-based food products.
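The intuition for applying CNNs to NIR spectra is that absorption features span short runs of adjacent wavelengths, which a 1D convolution captures directly. A minimal numpy sketch of that core operation (the synthetic spectrum and kernel below are illustrative, not the authors' trained model):

```python
import numpy as np

# Synthetic NIR-like spectrum: smooth baseline plus two absorption bands.
wavelengths = np.linspace(380, 1038, 256)
spectrum = (0.5
            + 0.3 * np.exp(-((wavelengths - 600) ** 2) / 800)
            + 0.2 * np.exp(-((wavelengths - 900) ** 2) / 1200))

# A 1D convolution with a small kernel is the building block of a
# spectral CNN: each output summarises a local window of adjacent bands.
kernel = np.array([-1.0, 0.0, 1.0])  # first-derivative-like filter
features = np.convolve(spectrum, kernel, mode="valid")

print(features.shape)  # one feature per valid window position
```

A real model would stack many such learned kernels with nonlinearities and pooling before a regression head predicting protein, starch, and the other attributes.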
Affiliation(s)
- Ali Zia: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia; College of Science and School of Computing, Australian National University, Australia
- Muhammad Husnain: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia; School of Information & Communication Technology, Griffith University, Australia
- Sally Buck: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
- Jonathan Richetti: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
- Elizabeth Hulm: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
- Jean-Philippe Ral: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
- Vivien Rolland: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
- Xavier Sirault: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
2. Wang HC, Wang SC, Xiao F, Ho UC, Lee CH, Yan JL, Chen YF, Ko LW. Development of a Clinically Applicable Deep Learning System Based on Sparse Training Data to Accurately Detect Acute Intracranial Hemorrhage from Non-enhanced Head Computed Tomography. Neurol Med Chir (Tokyo) 2025;65:103-112. PMID: 39864839; PMCID: PMC11968197; DOI: 10.2176/jns-nmc.2024-0163.
Abstract
Non-enhanced head computed tomography is widely used for patients presenting with head trauma or stroke, given acute intracranial hemorrhage significantly influences clinical decision-making. This study aimed to develop a deep learning algorithm, referred to as DeepCT, to detect acute intracranial hemorrhage on non-enhanced head computed tomography images and evaluate its clinical applicability. We retrospectively collected 1,815 computed tomography image sets from a single center for model training. Additional computed tomography sets from 3 centers were used to construct an independent validation dataset (VAL) and 2 test datasets (GPS-C and DICH). A third test dataset (US-TW) comprised 150 cases, each from 1 hospital in Taiwan and 1 hospital in the United States of America. Our deep learning model, based on U-Net and ResNet architectures, was implemented using PyTorch. The deep learning algorithm exhibited high accuracy across the validation and test datasets, with overall accuracy ranging from 0.9343 to 0.9820. Our findings show that the deep learning algorithm effectively identifies acute intracranial hemorrhage in non-enhanced head computed tomography studies. Clinically, this algorithm can be used for hyperacute triage, reducing reporting times, and enhancing the accuracy of radiologist interpretations. The evaluation of the algorithm on both United States and Taiwan datasets further supports its universal reliability for detecting acute intracranial hemorrhage.
Affiliation(s)
- Huan-Chih Wang: Division of Neurosurgery, Department of Surgery, National Taiwan University Hospital; College of Biological Science and Technology, National Yang Ming Chiao Tung University
- Shao-Chung Wang: Department of Medical Imaging and Intervention, New Taipei Municipal Tucheng Hospital, Chang Gung Medical Foundation
- Furen Xiao: Division of Neurosurgery, Department of Surgery, National Taiwan University Hospital
- Ue-Cheung Ho: Division of Neurosurgery, Department of Surgery, National Taiwan University Hospital
- Chiao-Hua Lee: Department of Radiology, China Medical University Hsinchu Hospital
- Jiun-Lin Yan: Department of Neurosurgery, Keelung Chang Gung Memorial Hospital; College of Medicine, Chang Gung University
- Ya-Fang Chen: Department of Radiology, National Taiwan University Hospital
- Li-Wei Ko: College of Biological Science and Technology, National Yang Ming Chiao Tung University; Institute of Electrical and Control Engineering, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University
3. Wang B, Chen RQ, Li J, Roy K. Interfacing data science with cell therapy manufacturing: where we are and where we need to be. Cytotherapy 2024;26:967-979. PMID: 38842968; DOI: 10.1016/j.jcyt.2024.03.011.
Abstract
Although several cell-based therapies have received FDA approval, and others are showing promising results, scalable, quality-driven, and reproducible manufacturing of therapeutic cells at a lower cost remains challenging. Challenges include starting material and patient variability, limited understanding of manufacturing process parameter effects on quality, complex supply chain logistics, and lack of predictive, well-understood product quality attributes. These issues can manifest as increased production costs, longer production times, greater batch-to-batch variability, and lower overall yield of viable, high-quality cells. The lack of data-driven insights and decision-making in cell manufacturing and delivery is an underlying commonality behind all these problems. Data collection and analytics from discovery, preclinical and clinical research, process development, and product manufacturing have not been sufficiently utilized to develop a "systems" understanding and identify actionable controls. Experience from other industries shows that data science and analytics can drive technological innovations and manufacturing optimization, leading to improved consistency, reduced risk, and lower cost. The cell therapy manufacturing industry will benefit from implementing data science tools, such as data-driven modeling, data management and mining, AI, and machine learning. The integration of data-driven predictive capabilities into cell therapy manufacturing, such as predicting product quality and clinical outcomes based on manufacturing data, or ensuring robustness and reliability using data-driven supply-chain modeling, could enable more precise and efficient production processes and lead to better patient access and outcomes. In this review, we introduce some of the relevant computational and data science tools and how they are being or can be implemented in the cell therapy manufacturing workflow. We also identify areas where innovative approaches are required to address challenges and opportunities specific to the cell therapy industry. We conclude that interfacing data science throughout a cell therapy product lifecycle, developing data-driven manufacturing workflows, designing better data collection tools and algorithms, using data analytics and AI-based methods to better understand critical quality attributes and critical process parameters, and training the appropriate workforce will be critical for overcoming current industry and regulatory barriers and accelerating clinical translation.
Affiliation(s)
- Bryan Wang: Marcus Center for Therapeutic Cell Characterization and Manufacturing, Parker H. Petit Institute of Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; School of Chemical and Biomolecular Engineering, Georgia Institute of Technology, Atlanta, GA, USA; NSF Engineering Research Center (ERC) for Cell Manufacturing Technologies (CMaT), USA
- Rui Qi Chen: H. Milton Stewart School of Industrial and Systems Engineering, Atlanta, GA, USA
- Jing Li: H. Milton Stewart School of Industrial and Systems Engineering, Atlanta, GA, USA
- Krishnendu Roy: NSF Engineering Research Center (ERC) for Cell Manufacturing Technologies (CMaT), USA; Department of Biomedical Engineering, School of Engineering, Vanderbilt University, Nashville, TN, USA; Department of Pathology, Microbiology, and Immunology, Vanderbilt University School of Medicine, Nashville, TN, USA; Department of Chemical and Biomolecular Engineering, School of Engineering, Vanderbilt University, Nashville, TN, USA
4. Zhao R, Wei Y, Wang X, He X, Xu H. Convolutional Neural Network-Assisted Photoresist Formulation Discriminator Design of a Contact Layer for Electron Beam Lithography. J Phys Chem Lett 2024;15:8715-8720. PMID: 39159485; DOI: 10.1021/acs.jpclett.4c01911.
Abstract
The photoresist formulation is closely related to the material properties, and its composition content determines the lithography imaging quality. To satisfy the process requirements, imaging verification of extensive formulations is required through lithography experiments. Identifying photoresist formulations with a high imaging performance has become a challenge. Herein, we develop a formulation discriminator of a metal oxide nanoparticle photoresist for a contact layer applied to electron beam lithography. This discriminator consists of convolutional neural network photoresist imaging and formulation classification models. A photoresist imaging model is adopted to predict the contact width of variable formulations, while a formulation classification model is used to classify formulations according to relative local critical dimension uniformity (RLCDU). The verification results indicate that the discriminator can accurately recognize photoresist formulations that simultaneously meet the conditions of contact width and RLCDU, and its feasibility has been demonstrated, providing a valuable reference for the preparation of photoresist materials.
Affiliation(s)
- Rongbo Zhao: Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084, China
- Yayi Wei: Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xiaolin Wang: Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084, China
- Xiangming He: Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084, China
- Hong Xu: Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084, China
5. Lee HT, Chiu PY, Yen CW, Chou ST, Tseng YC. Application of artificial intelligence in lateral cephalometric analysis. J Dent Sci 2024;19:1157-1164. PMID: 38618076; PMCID: PMC11010784; DOI: 10.1016/j.jds.2023.12.006.
Affiliation(s)
- Huang-Ting Lee: School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Po-Yuan Chiu: School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Orthodontics, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Chen-Wen Yen: Department of Mechanical and Electromechanical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Szu-Ting Chou: School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Orthodontics, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Yu-Chuan Tseng: School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Orthodontics, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
6. Wang Z, Han Y, Zhang Y, Hao J, Zhang Y. Classification and Recognition Method of Non-Cooperative Objects Based on Deep Learning. Sensors (Basel) 2024;24:583. PMID: 38257675; PMCID: PMC10818946; DOI: 10.3390/s24020583.
Abstract
Accurately classifying and identifying non-cooperative targets is paramount for modern space missions. This paper proposes an efficient method for classifying and recognizing non-cooperative targets using deep learning, based on the principles of the micro-Doppler effect and laser coherence detection. The theoretical simulations and experimental verification demonstrate that the accuracy of target classification for different targets can reach 100% after just one round of training. Furthermore, after 10 rounds of training, the accuracy of target recognition for different attitude angles can stabilize at 100%.
Affiliation(s)
- Zhengjia Wang: Institute of Precision Acousto-Optic Instrument, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin 150080, China
- Yi Han: Institute of Precision Acousto-Optic Instrument, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin 150080, China
- Yiwei Zhang: Institute of Precision Acousto-Optic Instrument, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin 150080, China
- Junhua Hao: School of Precision Instruments and Opto-Electronics Engineering, Key Lab of Optoelectronic Information Technology (Ministry of Education), and Key Lab of Micro-Opto-Electro-Mechanical Systems (MOEMS) Technology (Ministry of Education), Tianjin University, Tianjin 300072, China; Department of Physics, Tianjin Renai College, Tianjin 301636, China
- Yong Zhang: National Key Laboratory of Science and Technology on Tunable Laser, Harbin Institute of Technology, Harbin 150080, China; Department of Optoelectronic Information Science and Technology, School of Astronautics, Harbin Institute of Technology, Harbin 150080, China
7. Liyaqat T, Ahmad T, Saxena C. TeM-DTBA: time-efficient drug target binding affinity prediction using multiple modalities with Lasso feature selection. J Comput Aided Mol Des 2023;37:573-584. PMID: 37777631; DOI: 10.1007/s10822-023-00533-1.
Abstract
Drug discovery, especially virtual screening and drug repositioning, can be accelerated through deeper understanding and prediction of Drug Target Interactions (DTIs). The advancement of deep learning as well as the time and financial costs associated with conventional wet-lab experiments have made computational methods for DTI prediction more popular. However, the majority of these computational methods handle the DTI problem as a binary classification task, ignoring the quantitative binding affinity that determines the drug efficacy to their target proteins. Moreover, computational space as well as execution time of the model is often ignored over accuracy. To address these challenges, we introduce a novel method, called Time-efficient Multimodal Drug Target Binding Affinity (TeM-DTBA), which predicts the binding affinity between drugs and targets by fusing different modalities based on compound structures and target sequences. We employ the Lasso feature selection method, which lowers the dimensionality of feature vectors and speeds up the proposed model training time by more than 50%. The results from two benchmark datasets demonstrate that our method outperforms state-of-the-art methods in terms of performance. The mean squared errors of 18.8% and 23.19%, achieved on the KIBA and Davis datasets, respectively, suggest that our method is more accurate in predicting drug-target binding affinity.
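The Lasso step described above can be sketched with scikit-learn: fit an L1-penalised regressor on the concatenated descriptors, keep only features with non-zero coefficients, and train the downstream affinity model on the reduced matrix. The data, alpha, and dimensions below are illustrative, not the paper's actual feature set:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 50))               # 50 candidate descriptors
true_coef = np.zeros(50)
true_coef[:5] = [2.0, -1.5, 1.0, 0.5, -0.8]  # only 5 are informative
y = X @ true_coef + 0.1 * rng.normal(size=200)

# L1 penalty drives uninformative coefficients exactly to zero.
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)  # surviving feature indices
X_reduced = X[:, selected]
print(X_reduced.shape)
```

Shrinking the feature matrix this way is what buys the reported training-time reduction: the downstream model sees far fewer columns.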
Affiliation(s)
- Tanya Liyaqat: Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India
- Tanvir Ahmad: Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India
- Chandni Saxena: The Chinese University of Hong Kong, Sha Tin, SAR, China
8. Xia X, Wang M, Shi Y, Huang Z, Liu J, Men H, Fang H. Identification of white degradable and non-degradable plastics in food field: A dynamic residual network coupled with hyperspectral technology. Spectrochim Acta A Mol Biomol Spectrosc 2023;296:122686. PMID: 37028098; DOI: 10.1016/j.saa.2023.122686.
Abstract
In the food field, with the improvement of people's health and environmental protection awareness, degradable plastics have become a trend to replace non-degradable plastics. However, their appearance is very similar, making it difficult to distinguish them. This work proposed a rapid identification method for white non-degradable and degradable plastics. Firstly, a hyperspectral imaging system was used to collect the hyperspectral images of the plastics in visible and near-infrared bands (380-1038 nm). Secondly, a residual network (ResNet) was designed according to the characteristics of hyperspectral information. Finally, a dynamic convolution module was introduced into the ResNet to establish a dynamic residual network (Dy-ResNet) to adaptively mine the data features and realize the classification of the degradable and non-degradable plastics. Dy-ResNet had better classification performance than the other classical deep learning methods. The classification accuracy of the degradable and non-degradable plastics was 99.06%. In conclusion, hyperspectral imaging technology was combined with Dy-ResNet to identify the white non-degradable and degradable plastics effectively.
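Dynamic convolution, the ingredient that turns ResNet into Dy-ResNet here, replaces a fixed kernel with an input-conditioned mixture of several candidate kernels. A toy 1D numpy sketch of the mechanism (the kernel count, the tiny attention head, and all sizes are invented for illustration):

```python
import numpy as np

def dynamic_conv1d(x, kernels, att_w):
    # Attention over K candidate kernels, computed from the input itself;
    # a single aggregated kernel is then applied as a normal convolution.
    logits = att_w @ np.array([x.mean(), x.std()])  # toy attention head
    att = np.exp(logits - logits.max())
    att /= att.sum()                                # softmax weights
    kernel = (att[:, None] * kernels).sum(axis=0)   # input-dependent mix
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(3)
x = rng.normal(size=64)              # e.g. one pixel's spectral bands
kernels = rng.normal(size=(4, 5))    # K=4 candidate kernels, width 5
att_w = rng.normal(size=(4, 2))
y = dynamic_conv1d(x, kernels, att_w)
print(y.shape)
```

Because the mixing weights depend on the input, two spectra pass through effectively different filters, which is the adaptivity the abstract credits for the improved classification.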
Affiliation(s)
- Xiuxin Xia: School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
- Mingyang Wang: School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
- Yan Shi: School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
- Zhifei Huang: School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
- Jingjing Liu: School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
- Hong Men: School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
- Hairui Fang: School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
9. Guo S, Wang D, Feng Z, Chen J, Guo W. Di-CNN: Domain-Knowledge-Informed Convolutional Neural Network for Manufacturing Quality Prediction. Sensors (Basel) 2023;23:5313. PMID: 37300042; DOI: 10.3390/s23115313.
Abstract
In manufacturing, convolutional neural networks (CNNs) are widely used on image sensor data for data-driven process monitoring and quality prediction. However, as purely data-driven models, CNNs do not integrate physical measures or practical considerations into the model structure or training procedure. Consequently, CNNs' prediction accuracy can be limited, and model outputs may be hard to interpret practically. This study aims to leverage manufacturing domain knowledge to improve the accuracy and interpretability of CNNs in quality prediction. A novel CNN model, named Di-CNN, was developed that learns from both design-stage information (such as working condition and operational mode) and real-time sensor data, and adaptively weighs these data sources during model training. It exploits domain knowledge to guide model training, thus improving prediction accuracy and model interpretability. A case study on resistance spot welding, a popular lightweight metal-joining process for automotive manufacturing, compared the performance of (1) a Di-CNN with adaptive weights (the proposed model), (2) a Di-CNN without adaptive weights, and (3) a conventional CNN. The quality prediction results were measured with the mean squared error (MSE) over sixfold cross-validation. Model (1) achieved a mean MSE of 6.8866 and a median MSE of 6.1916, Model (2) achieved 13.6171 and 13.1343, and Model (3) achieved 27.2935 and 25.6117, demonstrating the superior performance of the proposed model.
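The adaptive-weighting idea can be illustrated in a few lines: each data source (design-stage information, sensor data) yields a branch embedding, and softmax-normalised logits, which the real model would learn by backpropagation, set the mixture. All numbers below are made up for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical branch embeddings for one welding sample.
design_feat = np.array([0.2, 0.9, 0.4])  # e.g. encoded operational mode
sensor_feat = np.array([0.7, 0.1, 0.5])  # e.g. pooled image features

w_logits = np.array([0.3, 1.1])          # learned during training
alpha = softmax(w_logits)                # adaptive weights, sum to 1
fused = alpha[0] * design_feat + alpha[1] * sensor_feat
print(alpha.round(3), fused.round(3))
```

Fixing `alpha` at constants would correspond to the non-adaptive variant (Model 2 in the comparison); letting it train is what distinguishes the proposed model.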
Affiliation(s)
- Shenghan Guo: The School of Manufacturing Systems and Networks, Arizona State University, Mesa, AZ 85212, USA
- Dali Wang: Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Zhili Feng: Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Jian Chen: Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Weihong Guo: The Department of Industrial and Systems Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854, USA
10. Yang K, Lu Y, Xue L, Yang Y, Chang S, Zhou C. URNet: System for recommending referrals for community screening of diabetic retinopathy based on deep learning. Exp Biol Med (Maywood) 2023;248:909-921. PMID: 37466156; PMCID: PMC10525407; DOI: 10.1177/15353702231171898.
Abstract
Diabetic retinopathy (DR) will cause blindness if detection and treatment are not carried out in the early stages. To create an effective treatment strategy, the severity of the disease must first be divided into referral-warranted diabetic retinopathy (RWDR) and non-referral diabetic retinopathy (NRDR). However, sufficient fundus examinations are often unavailable due to a lack of professional services in communities, particularly in developing countries. In this study, we introduce UGAN_Resnet_CBAM (URNet; UGAN is a generative adversarial network that uses Unet for feature extraction), a two-stage end-to-end deep learning technique for the automatic detection of diabetic retinopathy. The characteristics of the DDR fundus dataset were used to design an adaptive image preprocessing module in the first stage. Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed stochastic neighbor embedding (t-SNE) were used as the evaluation indices to analyze the preprocessing results. In the second stage, we enhanced the performance of the Resnet50 network by integrating the convolutional block attention module (CBAM). The outcomes demonstrate that our proposed solution outperformed other current structures, achieving precisions of 94.5% and 94.4%, and recalls of 96.2% and 91.9%, for NRDR and RWDR, respectively.
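CBAM bolts two lightweight attention gates onto a backbone such as ResNet50. The channel-attention half can be sketched in numpy as follows (feature-map sizes and weights are illustrative, and the spatial-attention half is omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # feat: (C, H, W). Squeeze spatial dims by average- and max-pooling,
    # pass both through a shared 2-layer MLP, then gate each channel.
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared MLP with ReLU
    gate = sigmoid(mlp(avg) + mlp(mx))            # (C,), values in (0, 1)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))   # toy feature map, C=8
w1 = rng.normal(size=(2, 8)) * 0.1  # bottleneck: 8 -> 2 channels
w2 = rng.normal(size=(8, 2)) * 0.1  # ...and back: 2 -> 8
out = channel_attention(feat, w1, w2)
print(out.shape)
```

Each channel is merely rescaled by a value in (0, 1), so the module slots into an existing residual block without changing tensor shapes.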
Affiliation(s)
- Kun Yang: College of Quality and Technical Supervision, Hebei University, Baoding 071002, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding 071002, China
- Yufei Lu: College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
- Linyan Xue: College of Quality and Technical Supervision, Hebei University, Baoding 071002, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding 071002, China
- Yueting Yang: College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
- Shilong Chang: College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
- Chuanqing Zhou: College of Medical Instruments, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
11. Hemachandran K, Alasiry A, Marzougui M, Ganie SM, Pise AA, Alouane MTH, Chola C. Performance Analysis of Deep Learning Algorithms in Diagnosis of Malaria Disease. Diagnostics (Basel) 2023;13:534. PMID: 36766640; PMCID: PMC9914762; DOI: 10.3390/diagnostics13030534.
Abstract
Malaria is predominant in many subtropical nations with little health-monitoring infrastructure. To forecast malaria and reduce the disease's impact on the population, time series prediction models are necessary. The conventional technique of detecting malaria disease is for certified technicians to examine blood smears visually for parasite-infected RBC (red blood cells) underneath a microscope. This procedure is ineffective, and the diagnosis depends on the individual performing the test and his/her experience. Automatic image identification systems based on machine learning have previously been used to diagnose malaria blood smears. However, so far, the practical performance has been insufficient. In this paper, we have made a performance analysis of deep learning algorithms in the diagnosis of malaria disease. We have used neural network models like CNN, MobileNetV2, and ResNet50 to perform this analysis. The dataset was extracted from the National Institutes of Health (NIH) website and consisted of 27,558 photos, including 13,780 parasitized cell images and 13,778 uninfected cell images. In conclusion, the MobileNetV2 model outperformed the others by achieving an accuracy rate of 97.06% for better disease detection. Also, other metrics like training and testing loss, precision, recall, F1-score, and ROC curve were calculated to validate the considered models.
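The metrics reported above follow the usual binary-classification definitions; a quick sketch with invented confusion-matrix counts (not the paper's actual numbers) shows how accuracy, precision, recall, and F1 are derived:

```python
# Toy confusion counts for parasitized (positive) vs uninfected cells.
tp, fp, fn, tn = 1340, 40, 38, 1338

accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct calls
precision = tp / (tp + fp)                   # trustworthiness of positives
recall = tp / (tp + fn)                      # sensitivity to true positives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(round(accuracy, 4), round(f1, 4))
```

Reporting all four matters here because accuracy alone can hide an imbalance between missed parasitized cells (fn) and false alarms (fp).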
Affiliation(s)
- K. Hemachandran: Department of Analytics, School of Business, Woxsen University, Hyderabad 502345, Telangana, India
- Areej Alasiry: College of Computer Science, King Khalid University, Abha 62529, Saudi Arabia
- Mehrez Marzougui: College of Computer Science, King Khalid University, Abha 62529, Saudi Arabia
- Shahid Mohammad Ganie: Department of Analytics, School of Business, Woxsen University, Hyderabad 502345, Telangana, India
- Anil Audumbar Pise: Siatik Premier Google Cloud Platform Partner, Johannesburg 2000, South Africa; School of Computer Science and Applied Mathematics, University of the Witwatersrand, Johannesburg 2000, South Africa; Saveetha School of Engineering, Chennai 600124, Tamil Nadu, India
- M. Turki-Hadj Alouane: College of Computer Science, King Khalid University, Abha 62529, Saudi Arabia
- Channabasava Chola: Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, Karnataka, India
12. Prediction method of intelligent building electricity consumption based on deep learning. Evol Intell 2023. DOI: 10.1007/s12065-023-00815-5.
13. Fraser J, Aricibasi H, Tulpan D, Bergeron R. A computer vision image differential approach for automatic detection of aggressive behavior in pigs using deep learning. J Anim Sci 2023;101:skad347. PMID: 37813375; PMCID: PMC10601918; DOI: 10.1093/jas/skad347.
Abstract
Pig aggression is a major problem facing the industry as it negatively affects both the welfare and the productivity of group-housed pigs. This study aimed to use a supervised deep learning (DL) approach based on a convolutional neural network (CNN) and image differential to automatically detect aggressive behaviors in pairs of pigs. Different pairs of unfamiliar piglets (N = 32) were placed into one of the two observation pens for 3 d, where they were video recorded each day for 1 h following mixing, resulting in 16 h of video recordings of which 1.25 h were selected for modeling. Four different approaches based on the number of frames skipped (1, 5, or 10 for Diff1, Diff5, and Diff10, respectively) and the amalgamation of multiple image differences into one (blended) were used to create four different datasets. Two CNN models were tested, with architectures based on the Visual Geometry Group (VGG) VGG-16 model architecture, consisting of convolutional layers, max-pooling layers, dense layers, and dropout layers. While both models had similar architectures, the second CNN model included stacked convolutional layers. Nine different sigmoid activation function thresholds between 0.1 and 1.0 were evaluated and a 0.5 threshold was selected to be used for testing. The stacked CNN model correctly predicted aggressive behaviors with the highest testing accuracy (0.79), precision (0.81), recall (0.77), and area under the curve (0.86) values. When analyzing the model recall for behavior subtypes prediction, mounting and mobile non-aggressive behaviors were the hardest to classify (recall = 0.63 and 0.75), while head biting, immobile, and parallel pressing were easy to classify (recall = 0.95, 0.94, and 0.91). Runtimes were also analyzed with the blended dataset, taking four times less time to train and validate than the Diff1, Diff5, and Diff10 datasets. 
Preprocessing time was reduced by up to 2.3 times in the blended dataset compared to the other datasets and, when combined with testing runtimes, it satisfied the requirements for real-time systems capable of detecting aggressive behavior in pairs of pigs. Overall, these results show that using a CNN and image differential-based deep learning approach can be an effective and computationally efficient technique to automatically detect aggressive behaviors in pigs.
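The image-differential preprocessing described in this abstract is easy to sketch. The following pure-Python illustration (function names are ours, not taken from the paper) builds Diff-k style difference images from a list of grayscale frames and amalgamates several of them into one "blended" image by per-pixel averaging:

```python
def frame_diff(a, b):
    # Absolute per-pixel difference between two grayscale frames
    # (frames are equally sized 2-D lists of intensities).
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def diff_sequence(frames, skip):
    # Differences between frames `skip` apart, mirroring the paper's
    # Diff1/Diff5/Diff10 settings.
    return [frame_diff(frames[i], frames[i + skip])
            for i in range(len(frames) - skip)]

def blend(diffs):
    # Amalgamate several difference images into a single blended image
    # by per-pixel averaging.
    n = len(diffs)
    return [[sum(d[r][c] for d in diffs) / n
             for c in range(len(diffs[0][0]))]
            for r in range(len(diffs[0]))]
```

A real pipeline would operate on NumPy arrays or OpenCV images for speed; the list-based version here only shows the arithmetic behind the four datasets.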
Affiliation(s)
- Jasmine Fraser
- Department of Animal Biosciences, Ontario Agricultural College, University of Guelph, 50 Stone Road East, Guelph, ON, Canada N1G 2W1
- Harry Aricibasi
- Department of Biomedical Sciences, Ontario Veterinary College, University of Guelph, 50 Stone Road East, Guelph, ON, Canada N1G 2W1
- Dan Tulpan
- Department of Animal Biosciences, Ontario Agricultural College, University of Guelph, 50 Stone Road East, Guelph, ON, Canada N1G 2W1
- Renée Bergeron
- Department of Animal Biosciences, Ontario Agricultural College, University of Guelph, 50 Stone Road East, Guelph, ON, Canada N1G 2W1
14
Nicholaus IT, Lee JS, Kang DK. One-Class Convolutional Neural Networks for Water-Level Anomaly Detection. Sensors (Basel) 2022; 22:8764. [PMID: 36433361 PMCID: PMC9698379 DOI: 10.3390/s22228764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 11/07/2022] [Accepted: 11/11/2022] [Indexed: 06/16/2023]
Abstract
Companies that own water systems providing storage and distribution services continually strive to distribute water efficiently to different places for various purposes. However, these systems are prone to problems ranging from leakage to destruction of infrastructure, leading to economic losses and loss of life. Thus, understanding the nature of abnormalities that may interrupt or degrade the service, or cause destruction, is at the core of their business model. Typically, companies use sensor networks to monitor these systems and record operational data, including any fluctuations in water levels considered abnormal. Detecting abnormalities allows water companies to enhance the service's sustainability, quality, and affordability. This study investigates a 2D-CNN-based method for detecting water-level abnormalities, framed as time-series anomaly pattern detection in a One-Class Classification (OCC) setting. Moreover, since abnormal data are usually scarce or unavailable, we explored a cheap method of generating synthetic temporal data and used them as a target class, alongside the normal data, to train the CNN model for feature extraction and classification. These settings allow the model to learn relevant pattern representations of the given classes in a binary classification fashion using cross-entropy loss. The ultimate goal of these investigations is to determine whether a 2D-CNN-based model can be trained from scratch, or whether a pre-trained CNN model can be partially retrained via transfer learning and used as the base network for one-class classification. Evaluation of the proposed One-Class CNN against previous approaches has shown that it outperforms several state-of-the-art methods by a significant margin. Additionally, this paper reports two interesting findings: using synthetic data as the pseudo-class is a promising direction, and transfer learning should be applied with care, since underfitting can occur when the transferred model is too complex for the training data.
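The "cheap" synthetic-data idea lends itself to a short sketch. The paper's actual generator is not detailed here, so the following is a hypothetical minimal variant: copy a normal water-level series and inject a single out-of-range spike to serve as the pseudo-anomalous class.

```python
import random

def make_pseudo_anomaly(normal_series, spike_scale=5.0, seed=None):
    # Hypothetical cheap generator: duplicate a normal water-level
    # series and add one spike far outside its observed range, so the
    # result can stand in for the scarce anomalous class.
    rng = random.Random(seed)
    series = list(normal_series)
    i = rng.randrange(len(series))
    span = max(series) - min(series) + 1.0
    series[i] += spike_scale * span
    return series
```

Pairs of (normal, pseudo-anomalous) series like these could then be fed to a binary classifier trained with cross-entropy loss, as the abstract describes.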
Affiliation(s)
- Isack Thomas Nicholaus
- Department of Computer Engineering, Dongseo University, 47 Jurye-ro, Sasang-gu, Busan 47011, Republic of Korea
- Jun-Seoung Lee
- Infranics R&D Center, 12th Floor, KT Mok-Dong Tower, 201 Mokdongseo-ro, Yangcheon-gu, Seoul 07994, Republic of Korea
- Dae-Ki Kang
- Department of Computer Engineering, Dongseo University, 47 Jurye-ro, Sasang-gu, Busan 47011, Republic of Korea
15
Bhandari N, Walambe R, Kotecha K, Khare SP. A comprehensive survey on computational learning methods for analysis of gene expression data. Front Mol Biosci 2022; 9:907150. [PMID: 36458095 PMCID: PMC9706412 DOI: 10.3389/fmolb.2022.907150] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Accepted: 09/28/2022] [Indexed: 09/19/2023] Open
Abstract
Computational analysis methods including machine learning have a significant impact in the fields of genomics and medicine. High-throughput gene expression analysis methods such as microarray technology and RNA sequencing produce enormous amounts of data. Traditionally, statistical methods are used for comparative analysis of gene expression data. However, more complex analysis for classification of sample observations, or discovery of feature genes requires sophisticated computational approaches. In this review, we compile various statistical and computational tools used in analysis of expression microarray data. Even though the methods are discussed in the context of expression microarrays, they can also be applied for the analysis of RNA sequencing and quantitative proteomics datasets. We discuss the types of missing values, and the methods and approaches usually employed in their imputation. We also discuss methods of data normalization, feature selection, and feature extraction. Lastly, methods of classification and class discovery along with their evaluation parameters are described in detail. We believe that this detailed review will help the users to select appropriate methods for preprocessing and analysis of their data based on the expected outcome.
Affiliation(s)
- Nikita Bhandari
- Computer Science Department, Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
- Rahee Walambe
- Electronics and Telecommunication Department, Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
- Symbiosis Center for Applied AI (SCAAI), Symbiosis International (Deemed University), Pune, India
- Ketan Kotecha
- Computer Science Department, Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
- Symbiosis Center for Applied AI (SCAAI), Symbiosis International (Deemed University), Pune, India
- Satyajeet P. Khare
- Symbiosis School of Biological Sciences, Symbiosis International (Deemed University), Pune, India
16
Chen X, Cheng G, Liu S, Meng S, Jiao Y, Zhang W, Liang J, Zhang W, Wang B, Xu X, Xu J. Probing 1D convolutional neural network adapted to near-infrared spectroscopy for efficient classification of mixed fish. Spectrochim Acta A Mol Biomol Spectrosc 2022; 279:121350. [PMID: 35609391 DOI: 10.1016/j.saa.2022.121350] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 05/02/2022] [Accepted: 05/03/2022] [Indexed: 06/15/2023]
Abstract
Salmon and cod are economically significant fish with high commercial value. It is difficult to sort and process them accurately by appearance during harvest and transportation, and conventional chemical detection methods are time-consuming and costly, which greatly affects the cost and efficiency of fishery production. There is therefore an urgent need for smart fishery methods for classifying mixed fish. In this paper, near-infrared spectroscopy (NIRS) was used to assess salmon and cod samples. This study evaluates the feasibility of a back-propagation neural network (BPNN) and a convolutional neural network (CNN) for identifying different fish species from their spectra, in comparison with the traditional chemometric method Partial Least Squares. After comparing the effects of different batch sizes, numbers of convolutional kernels, numbers of convolutional layers, and numbers of pooling layers on the classification of NIRS spectra across different one-dimensional (1D)-CNN structures, we propose the 1D-CNN-8 model as the most suitable for classifying mixed fish. Compared with the results of traditional chemometric methods and the BPNN, the 1D-CNN prediction model reaches 98.00% accuracy, and its parameters are significantly better than the others. Meanwhile, both the parameter count and the floating-point operations of the optimal model are small. Therefore, the improved CNN model based on NIRS can effectively and quickly identify different kinds of fish samples and simultaneously contributes to realizing edge computing.
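The two core spectral operations of such a 1D-CNN can be stated precisely. A minimal pure-Python sketch (function names ours; the real model stacks many such layers with learned kernels): a "valid" 1-D convolution, implemented as cross-correlation the way deep-learning frameworks do, plus non-overlapping max pooling.

```python
def conv1d(spectrum, kernel, stride=1):
    # "Valid" 1-D convolution (cross-correlation, as in most deep
    # learning frameworks) of a spectrum with a single kernel.
    k = len(kernel)
    return [sum(spectrum[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(spectrum) - k + 1, stride)]

def maxpool1d(x, size):
    # Non-overlapping 1-D max pooling over windows of `size` points.
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]
```

In a trained network the kernels are learned parameters; here a fixed kernel such as `[1, 0, -1]` simply acts as an edge detector over the absorbance values.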
Affiliation(s)
- Xinghao Chen
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Gongyi Cheng
- The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics, Nankai University, Tianjin 300071, China
- Shuhan Liu
- The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics, Nankai University, Tianjin 300071, China
- Sizhuo Meng
- The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics, Nankai University, Tianjin 300071, China
- Yiping Jiao
- The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics, Nankai University, Tianjin 300071, China
- Wenjie Zhang
- The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics, Nankai University, Tianjin 300071, China
- Jing Liang
- The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics, Nankai University, Tianjin 300071, China
- Wang Zhang
- Lianyungang Customs P.R.C, Lianyungang 222042, China
- Bin Wang
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Xiaoxuan Xu
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Jing Xu
- College of Artificial Intelligence, Nankai University, Tianjin 300350, China
17
Awotunde O, Roseboom N, Cai J, Hayes K, Rajane R, Chen R, Yusuf A, Lieberman M. Discrimination of Substandard and Falsified Formulations from Genuine Pharmaceuticals Using NIR Spectra and Machine Learning. Anal Chem 2022; 94:12586-12594. [PMID: 36067409 DOI: 10.1021/acs.analchem.2c00998] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Near-infrared (NIR) spectroscopy is a promising technique for field identification of substandard and falsified drugs because it is portable, rapid, nondestructive, and can differentiate many formulated pharmaceutical products. Portable NIR spectrometers rely heavily on chemometric analyses based on libraries of NIR spectra from authentic pharmaceutical samples. However, it is difficult to build comprehensive product libraries in many low- and middle-income countries due to the large numbers of manufacturers who supply these markets, frequent unreported changes in materials sourcing and product formulation by the manufacturers, and general lack of cooperation in providing authentic samples. In this work, we show that a simple library of lab-formulated binary mixtures of an active pharmaceutical ingredient (API) with two diluents gave good performance on field screening tasks, such as discriminating substandard and falsified formulations of the API. Six data analysis models, including principal component analysis and support-vector machine classification and regression methods and convolutional neural networks, were trained on binary mixtures of acetaminophen with either lactose or ascorbic acid. While the models all performed strongly in cross-validation (on formulations similar to their training set), they individually showed poor robustness for formulations outside the training set. However, a predictive algorithm based on the six models, trained only on binary samples, accurately predicts whether the correct amount of acetaminophen is present in ternary mixtures, genuine acetaminophen formulations, adulterated acetaminophen formulations, and falsified formulations containing substitute APIs. This data analytics approach may extend the utility of NIR spectrometers for analysis of pharmaceuticals in low-resource settings.
Affiliation(s)
- Olatunde Awotunde
- Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, Indiana 46556, United States
- Nicholas Roseboom
- Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, Indiana 46556, United States
- Jin Cai
- Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, Indiana 46556, United States
- Kathleen Hayes
- Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, Indiana 46556, United States
- Revati Rajane
- Precise Software Solutions Inc, Rockville, Maryland 20850, United States
- Ruoyan Chen
- Precise Software Solutions Inc, Rockville, Maryland 20850, United States
- Abdullah Yusuf
- Precise Software Solutions Inc, Rockville, Maryland 20850, United States
- Marya Lieberman
- Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, Indiana 46556, United States
18
Kaur I, Goyal LM, Ghansiyal A, Hemanth DJ. Efficient Approach for Rhopalocera Classification Using Growing Convolutional Neural Network. Int J Uncertain Fuzz 2022. [DOI: 10.1142/s0218488522400189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
At present, artificial-intelligence-based techniques are one of the prominent ways to classify images and can be conveniently leveraged in real-world scenarios. This technology can be extremely beneficial to lepidopterists, assisting them in classifying the diverse species of Rhopalocera, commonly known as butterflies. In this article, image classification is performed on a dataset of various butterfly species, facilitated by the feature extraction of a Convolutional Neural Network (CNN) along with additional, independently computed features used to train the model. The classification models deployed for this purpose include K-Nearest Neighbors (KNN), Random Forest, and Support Vector Machine (SVM). However, each of these methods tends to focus on one specific class of features. Therefore, an ensemble of multiple feature classes is implemented for image classification. This paper discusses the results of classification based on two different classes of features, namely structure and texture. The amalgamation of these two feature classes forms a combined dataset, which is then used to train the Growing Convolutional Neural Network (GCNN), yielding a more accurate classification model. The experiments produced promising outcomes, with TP rate, FP rate, precision, recall, and F-measure values of 0.9690, 0.0034, 0.9889, 0.9692, and 0.9686, respectively. Furthermore, the proposed methodology achieved an accuracy of 96.98%.
Affiliation(s)
- Iqbaldeep Kaur
- Department of Computer Science, CGC, Landran, Mohali, India
- Lalit Mohan Goyal
- Department of Computer Engineering, J C Bose University of Science and Technology, YMCA, Faridabad, India
- D. Jude Hemanth
- Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India
19
Hu RS, Hesham AEL, Zou Q. Machine Learning and Its Applications for Protozoal Pathogens and Protozoal Infectious Diseases. Front Cell Infect Microbiol 2022; 12:882995. [PMID: 35573796 PMCID: PMC9097758 DOI: 10.3389/fcimb.2022.882995] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 03/28/2022] [Indexed: 12/24/2022] Open
Abstract
In recent years, considerable attention has been drawn to the development and application of machine learning (ML) in the field of infectious diseases, not only as a catalyst for academic studies but also as a key means of detecting pathogenic microorganisms, implementing public health surveillance, exploring host-pathogen interactions, and discovering drug and vaccine candidates. These applications also include the management of infectious diseases caused by protozoal pathogens, such as Plasmodium, Trypanosoma, Toxoplasma, Cryptosporidium, and Giardia, a class of fatal or life-threatening causative agents capable of infecting humans and a wide range of animals. With the reduction of computational cost, the availability of effective ML algorithms, the popularization of ML tools, and the accumulation of high-throughput data, it is now possible to integrate ML into a growing body of scientific research on protozoal infection. Here, we present a brief overview of important concepts in ML as background knowledge, with a focus on basic workflows, popular algorithms (e.g., support vector machine, random forest, and neural networks), feature extraction and selection, and model evaluation metrics. We then review current ML applications and major advances concerning protozoal pathogens and protozoal infectious diseases, combined with relevant biological expertise, and provide forward-looking insights into perspectives and opportunities for future advances in ML techniques in this field.
Affiliation(s)
- Rui-Si Hu
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Abd El-Latif Hesham
- Genetics Department, Faculty of Agriculture, Beni-Suef University, Beni-Suef, Egypt
- Quan Zou
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- *Correspondence: Quan Zou
20
Developing a New Model for Drilling Rate of Penetration Prediction Using Convolutional Neural Network. Arab J Sci Eng 2022. [DOI: 10.1007/s13369-022-06765-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
21
System for the Recognizing of Pigmented Skin Lesions with Fusion and Analysis of Heterogeneous Data Based on a Multimodal Neural Network. Cancers (Basel) 2022; 14:cancers14071819. [PMID: 35406591 PMCID: PMC8997449 DOI: 10.3390/cancers14071819] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 03/30/2022] [Accepted: 03/30/2022] [Indexed: 02/07/2023] Open
Abstract
Simple Summary: Skin cancer is one of the most common cancers in humans. This study aims to create a system for recognizing pigmented skin lesions by analyzing heterogeneous data based on a multimodal neural network. Fusing patient statistics and multidimensional visual data allows for finding additional links between dermoscopic images and medical diagnostic results, significantly improving neural network classification accuracy. The use by specialists of the proposed system of neural network recognition of pigmented skin lesions will enhance the efficiency of diagnosis compared to visual diagnostic methods.

Abstract: Today, skin cancer is one of the most common malignant neoplasms in the human body. Diagnosis of pigmented lesions is challenging even for experienced dermatologists due to the wide range of morphological manifestations. Artificial intelligence technologies are capable of equaling and even surpassing the capabilities of a dermatologist in terms of efficiency. The main problem of implementing intellectual analysis systems is low accuracy. One of the possible ways to increase this indicator is using stages of preliminary processing of visual data and the use of heterogeneous data. The article proposes a multimodal neural network system for identifying pigmented skin lesions with a preliminary identification, and removing hair from dermatoscopic images. The novelty of the proposed system lies in the joint use of the stage of preliminary cleaning of hair structures and a multimodal neural network system for the analysis of heterogeneous data. The accuracy of pigmented skin lesions recognition in 10 diagnostically significant categories in the proposed system was 83.6%. The use of the proposed system by dermatologists as an auxiliary diagnostic method will minimize the impact of the human factor, assist in making medical decisions, and expand the possibilities of early detection of skin cancer.
22
Zhang Q, Yang Q, Zhang X, Bao Q, Su J, Liu X. Waste image classification based on transfer learning and convolutional neural network. Waste Manag 2021; 135:150-157. [PMID: 34509053 DOI: 10.1016/j.wasman.2021.08.038] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Revised: 08/24/2021] [Accepted: 08/26/2021] [Indexed: 06/13/2023]
Abstract
Rapid economic and social development has led to a sharp increase in the output of domestic waste. Realizing waste classification through intelligent methods has become a key factor in achieving sustainable development. Traditional waste classification technology has low efficiency and low accuracy. To improve the efficiency and accuracy of waste classification, this paper proposes a DenseNet169 waste image classification model based on transfer learning. Because existing public waste datasets suffer from uneven data distribution, single backgrounds, overly obvious features, and small sample sizes, the waste image dataset NWNU-TRASH is constructed. The dataset has the advantages of balanced distribution, high diversity, and rich backgrounds, which is more in line with real needs. 70% of the dataset is used as the training set and 30% as the test set. Starting from a pre-trained DenseNet169 network, we form a DenseNet169 model suited to this experimental dataset. The experimental results show that classification accuracy exceeds 82% for the DenseNet169 model after transfer learning, which is better than other image classification algorithms.
Affiliation(s)
- Qiang Zhang
- Department of Computer Science and Engineering, Northwest Normal University, Lanzhou, Gansu Province 730070, China
- Qifan Yang
- Department of Computer Science and Engineering, Northwest Normal University, Lanzhou, Gansu Province 730070, China
- Xujuan Zhang
- School of Computer Science and Artificial Intelligence, Lanzhou Institute of Technology, Lanzhou, Gansu Province 730050, China
- Qiang Bao
- College of Computing, Illinois Institute of Technology, Chicago, IL 60616, USA
- Jinqi Su
- Xi'an University of Posts & Telecommunications, Xi'an, Shaanxi Province 710121, China
- Xueyan Liu
- Department of Computer Science and Engineering, Northwest Normal University, Lanzhou, Gansu Province 730070, China
23
Target Detection Method for Low-Resolution Remote Sensing Image Based on ESRGAN and ReDet. Photonics 2021. [DOI: 10.3390/photonics8100431] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
With the widespread use of remote sensing images, low-resolution target detection in remote sensing images has become a hot research topic in the field of computer vision. In this paper, we propose a Target Detection on Super-Resolution Reconstruction (TDoSR) method to solve the problem of low target recognition rates in low-resolution remote sensing images under foggy conditions. The TDoSR method uses the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) to perform defogging and super-resolution reconstruction of foggy low-resolution remote sensing images. In the target detection part, the Rotation Equivariant Detector (ReDet) algorithm, which has a higher recognition rate at this stage, is used to identify and classify various types of targets. While a large number of experiments have been carried out on the remote sensing image dataset DOTA-v1.5, the results of this paper suggest that the proposed method achieves good results in the target detection of low-resolution foggy remote sensing images. The principal result of this paper demonstrates that the recognition rate of the TDoSR method increases by roughly 20% when compared with low-resolution foggy remote sensing images.
24
Yoon HJ, Kim DR, Gwon E, Kim N, Baek SH, Ahn HW, Kim KA, Kim SJ. Fully automated identification of cephalometric landmarks for upper airway assessment using cascaded convolutional neural networks. Eur J Orthod 2021; 44:66-77. [PMID: 34379120 DOI: 10.1093/ejo/cjab054] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVES The aim of the study was to evaluate the accuracy of a cascaded two-stage convolutional neural network (CNN) model in detecting upper airway (UA) soft tissue landmarks, in comparison with skeletal landmarks, on lateral cephalometric images. MATERIALS AND METHODS The dataset contained 600 lateral cephalograms of adult orthodontic patients, and the ground-truth positions of 16 landmarks (7 skeletal and 9 UA landmarks) were obtained from a learning dataset of 500 images. We trained a UNet with an EfficientNetB0 backbone through a region-of-interest-centred circular segmentation labelling process. Mean distance errors (MDEs, mm) of the CNN algorithm were compared with those from human examiners. Successful detection rates (SDRs, per cent) assessed within 1-4 mm precision ranges were compared between skeletal and UA landmarks. RESULTS The proposed model achieved MDEs of 0.80 ± 0.55 mm for skeletal landmarks and 1.78 ± 1.21 mm for UA landmarks. The mean SDRs for UA landmarks were 72.22 per cent within the 2 mm range and 92.78 per cent within the 4 mm range, contrasted with 93.43 and 98.71 per cent for skeletal landmarks, respectively. Compared with the mean interexaminer difference, however, this model showed higher detection accuracies for geometrically constructed UA landmarks on the nasopharynx (AD2 and Ss), but lower accuracies for anatomically located UA landmarks on the tongue (Td) and soft palate (Sb and St). CONCLUSION The proposed CNN model demonstrates the feasibility of automated cephalometric UA assessment integrated with dentoskeletal and facial analysis.
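The SDR metric reported above has a simple definition that is worth making explicit. A minimal sketch (helper name ours): the percentage of landmarks whose distance error falls within a given precision range.

```python
def successful_detection_rate(errors_mm, threshold_mm):
    # Percentage of landmarks whose distance error is within the given
    # precision range (e.g. 2 mm or 4 mm), as used in cephalometric
    # landmark-detection studies.
    within = sum(1 for e in errors_mm if e <= threshold_mm)
    return 100.0 * within / len(errors_mm)
```

Evaluating the same error list at several thresholds (1-4 mm) yields the SDR curve the abstract summarises.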
Affiliation(s)
- Hyun-Joo Yoon
- Department of Dentistry, Graduate School, Kyung Hee University, Seoul, Republic of Korea
- Dong-Ryul Kim
- Department of Dentistry, Graduate School, Kyung Hee University, Seoul, Republic of Korea
- Eunseo Gwon
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Radiology, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung-Hak Baek
- Department of Orthodontics, School of Dentistry, Seoul National University, Seoul, Republic of Korea
- Hyo-Won Ahn
- Department of Orthodontics, School of Dentistry, Kyung Hee University, Seoul, Republic of Korea
- Kyung-A Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, Seoul, Republic of Korea
- Su-Jung Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, Seoul, Republic of Korea
25
Khullar V, Singh HP, Bala M. Deep Neural Network-based Handheld Diagnosis System for Autism Spectrum Disorder. Neurol India 2021; 69:66-74. [PMID: 33642273 DOI: 10.4103/0028-3886.310069] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Objective The aim of the present work was to propose and implement a deep neural network (DNN)-based handheld diagnosis system for more accurate diagnosis and severity assessment of individuals with autism spectrum disorder (ASD). Methods Initially, the proposed system was trained for ASD diagnosis by implementing DNN algorithms, namely a convolutional neural network (CNN), long short-term memory (LSTM), and a multilayer perceptron (MLP), on a DSM-V-based acquired dataset. The performance of the DNN algorithms was analyzed in terms of accuracy, loss, mean squared error (MSE), precision, recall, and area under the curve (AUC) during training and validation. The optimum DNN algorithm among those tested was then implemented on a handheld diagnosis system (HDS) and its performance was analyzed. The stability of the proposed DNN-based HDS was validated with a dataset group of 20 ASD and 20 typically developed (TD) individuals. Results Comparative analysis showed that LSTM performed better in ASD diagnosis than the other artificial intelligence (AI) algorithms (CNN and MLP), as it produced stable results and achieved maximum accuracy in fewer epochs with minimum MSE and loss. Further, the LSTM-based HDS for ASD achieved optimum results, with 100% accuracy with reference to DSM-V, which was validated statistically on a group of ASD and TD individuals. Conclusion Advanced AI algorithms could play an important role in the diagnosis of ASD in today's era. Since the proposed LSTM-based HDS for ASD diagnosis and severity assessment provided accurate results with reference to DSM-V criteria, it could be a strong alternative to manual systems for ASD diagnosis.
Affiliation(s)
- Vikas Khullar
- I.K.G. Punjab Technical University, Kapurthala; CT Institute of Engineering, Management and Technology, Jalandhar, Punjab, India
- Harjit Pal Singh
- I.K.G. Punjab Technical University, Kapurthala; CT Institute of Engineering, Management and Technology, Jalandhar, Punjab, India
- Manju Bala
- I.K.G. Punjab Technical University, Kapurthala; Khalsa College of Engineering and Technology, Amritsar, Punjab, India
26
27
Li C, Qiu Z, Cao X, Chen Z, Gao H, Hua Z. Hybrid Dilated Convolution with Multi-Scale Residual Fusion Network for Hyperspectral Image Classification. Micromachines (Basel) 2021; 12:545. [PMID: 34068823 PMCID: PMC8151123 DOI: 10.3390/mi12050545] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Revised: 04/20/2021] [Accepted: 05/06/2021] [Indexed: 11/16/2022]
Abstract
The convolutional neural network (CNN) has been proven to perform better in hyperspectral image (HSI) classification than traditional methods. However, traditional CNNs for HSI classification tend to emphasize spectral features while ignoring spatial information. In this paper, a new HSI model called the local and hybrid dilated convolution fusion network (LDFN) is proposed, which fuses fine local details with rich spatial features by expanding the receptive field. The method works as follows. First, standard operations are selected, such as standard convolution, average pooling, dropout and batch normalization. Then, fusion operations of local and hybrid dilated convolution are included to extract rich spatial-spectral information. Finally, the different convolution layers are gathered into residual fusion networks and fed into a softmax layer for classification. Three widely used hyperspectral datasets (Salinas, Pavia University and Indian Pines) were used in the experiments, which show that LDFN outperforms state-of-the-art classifiers.
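The receptive-field arithmetic behind hybrid dilated convolution can be illustrated with a minimal NumPy sketch (my illustration, not the authors' implementation): stacking kernel-size-3 layers with mixed dilation rates such as [1, 2, 5] expands the receptive field quickly while avoiding the "gridding" gaps left by a single repeated rate.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D cross-correlation with a dilated kernel."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # footprint of the dilated kernel
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated convolutional layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Three 3-tap layers with hybrid rates [1, 2, 5] already cover a
# 17-sample receptive field, without the gaps a repeated rate would leave.
print(receptive_field(3, [1, 2, 5]))                       # 17
y = dilated_conv1d(np.arange(10.0), np.ones(3), dilation=2)
print(y)  # each output sums x[i], x[i+2], x[i+4]
```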
28
Hu J, Liu J, Liang P, Li B. A novel method based on convolutional neural network for malaria diagnosis. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2020. [DOI: 10.3233/jifs-201427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Malaria is one of the three major diseases with the highest mortality worldwide and can turn fatal if not taken seriously. The key to surviving this disease is early diagnosis. However, manual diagnosis is time-consuming and tedious due to the large amount of image data. Computer-aided diagnosis can effectively improve doctors' perception and accuracy. This paper presents a medical diagnosis method powered by a convolutional neural network (CNN) to extract features from images and improve early detection of malaria. Image sharpening and histogram equalization are used to enlarge the difference between parasitized regions and other areas. Dropout is employed in every convolutional layer to reduce overfitting in the network, which is shown to be effective. The proposed CNN model achieves a best classification accuracy of 99.98%. Moreover, the paper compares the proposed model with pretrained CNNs and other traditional algorithms; the results indicate that the proposed model achieves state-of-the-art performance on multiple metrics. The novelty of this work is the reduction of the CNN structure to only five layers, which greatly reduces the running time and the number of parameters, as demonstrated in the experiments. The proposed model can thus assist clinicians in accurately diagnosing malaria.
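The histogram-equalization preprocessing step the abstract mentions is a standard technique and can be sketched in a few lines of NumPy (an illustration of the classic algorithm, not the paper's code):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Classic histogram equalization for an integer grayscale image:
    remap grey levels so the cumulative histogram becomes roughly uniform,
    stretching low-contrast regions such as parasitized cell areas."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    lut = np.clip(
        np.round((cdf - cdf_min) / (img.size - cdf_min) * (levels - 1)),
        0, levels - 1,
    )
    return lut.astype(img.dtype)[img]

img = np.array([[50, 50], [51, 52]], dtype=np.uint8)  # narrow grey range
print(hist_equalize(img))  # contrast stretched to the full 0..255 range
```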
Affiliation(s)
- Junhua Hu: School of Business, Central South University, Changsha, China
- Jie Liu: School of Business, Central South University, Changsha, China
- Pei Liang: School of Business, Central South University, Changsha, China
- Bo Li: School of Business, Central South University, Changsha, China
29
Lathuiliere S, Mesejo P, Alameda-Pineda X, Horaud R. A Comprehensive Analysis of Deep Regression. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2020; 42:2065-2081. [PMID: 30990175 DOI: 10.1109/tpami.2019.2910523] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Deep learning revolutionized data science, and recently its popularity has grown exponentially, as has the number of papers employing deep networks. Vision tasks, such as human pose estimation, have not escaped this trend. Among the large number of deep models, small changes in the network architecture or in the data pre-processing, together with the stochastic nature of the optimization procedures, produce notably different results, making it extremely difficult to identify methods that significantly outperform others. This situation motivates the current study, in which we perform a systematic evaluation and statistical analysis of vanilla deep regression, i.e., convolutional neural networks with a linear regression top layer. This is the first comprehensive analysis of deep regression techniques. We perform experiments on four vision problems, and report confidence intervals for the median performance as well as the statistical significance of the results, if any. Surprisingly, the variability due to different data pre-processing procedures generally eclipses the variability due to modifications in the network architecture. Our results reinforce the hypothesis that, in general, an adequately tuned general-purpose network (e.g., VGG-16 or ResNet-50) can yield results close to the state-of-the-art without resorting to more complex and ad-hoc regression models.
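Confidence intervals for median performance across repeated stochastic training runs, as used in this evaluation protocol, are commonly obtained by bootstrapping; a hedged sketch with invented run scores (not the authors' data or code):

```python
import numpy as np

def median_ci(runs, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap a (1 - alpha) confidence interval for the median of a set
    of per-run scores, e.g. errors from re-training the same architecture."""
    rng = np.random.default_rng(seed)
    boot_medians = np.array([
        np.median(rng.choice(runs, size=len(runs), replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(boot_medians, [alpha / 2, 1 - alpha / 2])

# e.g. mean absolute error of 8 re-trainings of one architecture (made up)
runs = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3])
lo, hi = median_ci(runs)
print(lo, hi)  # interval around the observed median of 5.0
```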
30
Cao Y, Xiao X, Liu Z, Yang M, Sun D, Guo W, Cui L, Zhang P. Detecting vulnerable plaque with vulnerability index based on convolutional neural networks. Comput Med Imaging Graph 2020; 81:101711. [PMID: 32155412 DOI: 10.1016/j.compmedimag.2020.101711] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2019] [Revised: 01/29/2020] [Accepted: 02/16/2020] [Indexed: 10/25/2022]
Abstract
Plaque rupture and subsequent thrombosis are major processes in acute cardiovascular events. The vulnerability index is an important indicator of whether a plaque will rupture, so easily ruptured or fragile plaques can be detected early: the higher the vulnerability index, the higher the instability of the plaque. Determining a clear vulnerability index classification point can therefore effectively reduce unnecessary interventional therapy, but the critical value of the vulnerability index has not been well defined. In this study, we propose a neural network-based method to determine the critical point of the vulnerability index that distinguishes vulnerable plaques from stable ones. First, based on MatConvNet, intravascular ultrasound images are classified under different vulnerability index cut-points, with each candidate cut-point yielding a different classification accuracy; the resulting data points are then fitted to identify the cut-point corresponding to the highest classification accuracy. The same experiment is repeated on aortic artery component data with an artificial neural network, again yielding the vulnerability index with the highest classification accuracy. The results show that the best vulnerability index point is 1.716 when the experiment is based on intravascular ultrasound images and 1.607 when it is based on aortic artery component data. Moreover, the vulnerability index and the classification accuracy have a periodic relationship within a certain range, and the highest AUC on the verification set based on the obtained vulnerability index point is 0.7143. In this paper, a convolutional neural network is used to find the best vulnerability index classification points. The experimental results show that this method has guiding significance for the classification and diagnosis of vulnerable plaques and can further reduce unnecessary interventional treatment of cardiovascular disease.
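The search the study describes, trying candidate vulnerability-index cut-points and keeping the one with the highest classification accuracy, reduces to a simple threshold sweep; a schematic version with made-up numbers (not the paper's data):

```python
import numpy as np

def best_cutpoint(index_values, labels):
    """Try each observed index value as a cut-point and return the one whose
    induced vulnerable/stable split agrees best with the reference labels."""
    best_t, best_acc = None, -1.0
    for t in np.unique(index_values):
        pred = (index_values >= t).astype(int)  # 1 = vulnerable
        acc = float((pred == labels).mean())
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

vi = np.array([1.2, 1.5, 1.7, 1.9, 2.1])  # hypothetical index values
y = np.array([0, 0, 1, 1, 1])             # hypothetical reference labels
print(best_cutpoint(vi, y))               # best split at 1.7, accuracy 1.0
```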
Affiliation(s)
- Yankun Cao: The Research Center of Intelligent Medical Information Processing, School of Information Science and Engineering, Shandong University, Qingdao 266237, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan 250101, China
- Xiaoyan Xiao: Department of Nephrology, Qilu Hospital of Shandong University, No. 107 Wenhuaxi Road, Jinan 250012, China
- Zhi Liu: The Research Center of Intelligent Medical Information Processing, School of Information Science and Engineering, Shandong University, Qingdao 266237, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan 250101, China
- Meijun Yang: The Research Center of Intelligent Medical Information Processing, School of Information Science and Engineering, Shandong University, Qingdao 266237, China
- Dianmin Sun: Department of Thoracic Surgery, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, Shandong, China
- Wei Guo: Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan 250101, China
- Lizhen Cui: Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan 250101, China
- Pengfei Zhang: Key Laboratory of Cardiovascular Remodeling and Function Research, Chinese Ministry of Education and Chinese National Health Commission, Department of Cardiology, Qilu Hospital of Shandong University, No. 107 Wenhuaxi Road, Jinan, Shandong Province, China
31
Zhang L, Ding X, Hou R. Classification Modeling Method for Near-Infrared Spectroscopy of Tobacco Based on Multimodal Convolution Neural Networks. JOURNAL OF ANALYTICAL METHODS IN CHEMISTRY 2020; 2020:9652470. [PMID: 32104610 PMCID: PMC7037502 DOI: 10.1155/2020/9652470] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 01/11/2020] [Indexed: 05/14/2023]
Abstract
The origin of tobacco is the most important factor determining the style characteristics and intrinsic quality of tobacco, and near-infrared spectroscopy is widely applied to identify tobacco origin. To improve the accuracy of tobacco origin classification, a near-infrared spectrum (NIRS) identification method based on multimodal convolutional neural networks (CNNs) is proposed, taking advantage of the strong feature extraction ability of the CNN. First, a one-dimensional convolutional neural network (1-D CNN) is used to extract and combine pattern features from the one-dimensional NIRS data, and the extracted features are used for classification. Second, the one-dimensional NIRS data are converted into two-dimensional spectral images, and structure features are extracted from these images by a two-dimensional convolutional neural network (2-D CNN); classification is performed by combining global and local training features. Finally, the influence of different network structure parameters on model identification performance is studied, and the optimal CNN models are selected and compared. Multimodal NIR-CNN identification models of tobacco origin were established using the NIRS of 5,200 tobacco samples from 10 major tobacco-producing provinces in China and 3 foreign countries. The classification accuracy of the 1-D CNN and 2-D CNN models was 93.15% and 93.05%, respectively, better than that of the traditional PLS-DA method. The experimental results show that 1-D and 2-D CNNs can accurately and reliably distinguish the NIRS data and can be developed into a new rapid identification method for tobacco origin with significant practical value.
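The 2-D branch requires each one-dimensional spectrum to be recast as an image. One simple mapping (an assumption for illustration only; the abstract does not specify the exact conversion) is zero-padding the spectrum to a square length and reshaping:

```python
import numpy as np

def spectrum_to_image(spectrum):
    """Zero-pad a 1-D NIR spectrum to the next perfect-square length and
    reshape it into a 2-D matrix that a 2-D CNN can consume."""
    spectrum = np.asarray(spectrum, dtype=float)
    n = int(np.ceil(np.sqrt(spectrum.size)))
    padded = np.zeros(n * n)
    padded[:spectrum.size] = spectrum
    return padded.reshape(n, n)

img = spectrum_to_image(np.linspace(0.1, 1.0, 10))  # 10 bands -> 4x4 image
print(img.shape)
```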
Affiliation(s)
- Lei Zhang: College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
- Xiangqian Ding: College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
- Ruichun Hou: College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
32
Hamamoto R, Komatsu M, Takasawa K, Asada K, Kaneko S. Epigenetics Analysis and Integrated Analysis of Multiomics Data, Including Epigenetic Data, Using Artificial Intelligence in the Era of Precision Medicine. Biomolecules 2019; 10:62. [PMID: 31905969 PMCID: PMC7023005 DOI: 10.3390/biom10010062] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Revised: 12/20/2019] [Accepted: 12/27/2019] [Indexed: 12/14/2022] Open
Abstract
To clarify the mechanisms of diseases such as cancer, studies analyzing genetic mutations have been actively conducted for a long time, and a large number of achievements have already been reported. Indeed, genomic medicine is considered the core discipline of precision medicine, and the clinical application of cutting-edge genomic medicine aimed at improving the prevention, diagnosis and treatment of a wide range of diseases is currently being promoted. However, although the Human Genome Project was completed in 2003 and large-scale genetic analyses have since been accomplished worldwide with the development of next-generation sequencing (NGS), explaining the mechanism of disease onset using genetic variation alone has proved difficult. Meanwhile, the importance of epigenetics, which describes inheritance by mechanisms other than changes in the genomic DNA sequence, has recently attracted attention, and many studies have reported the involvement of epigenetic deregulation in human cancer. Because genetic and epigenetic studies have tended to be conducted independently, the physiological relationships between genetics and epigenetics in disease remain largely unknown. Since this situation may hinder the development of precision medicine, an integrated understanding of genetic variation and epigenetic deregulation now appears critical. Importantly, current progress in artificial intelligence (AI) technologies, such as machine learning and deep learning, is remarkable and enables multimodal analyses of big omics data. In this regard, it is important to develop a platform that can conduct multimodal analysis of medical big data using AI, as this may accelerate the realization of precision medicine. In this review, we discuss the importance of genome-wide epigenetic and multiomics analyses using AI in the era of precision medicine.
Affiliation(s)
- Ryuji Hamamoto: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Masaaki Komatsu: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Takasawa: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Syuzo Kaneko: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
33
Kunz F, Stellzig-Eisenhauer A, Zeman F, Boldt J. Artificial intelligence in orthodontics : Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J Orofac Orthop 2019; 81:52-68. [PMID: 31853586 DOI: 10.1007/s00056-019-00203-8] [Citation(s) in RCA: 108] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Accepted: 10/20/2019] [Indexed: 11/27/2022]
Abstract
PURPOSE The aim of this investigation was to create an automated cephalometric X‑ray analysis using a specialized artificial intelligence (AI) algorithm, and to compare its accuracy to the current gold standard (analyses performed by human experts) in order to evaluate the precision and clinical applicability of such an approach in routine orthodontics. METHODS For training of the network, 12 experienced examiners identified 18 landmarks on a total of 1792 cephalometric X‑rays. To evaluate the quality of the AI's predictions, both the AI and each examiner analyzed 12 commonly used orthodontic parameters on the basis of 50 cephalometric X‑rays that were not part of the AI's training data. The median values of the 12 examiners for each parameter were defined as the humans' gold standard and compared to the AI's predictions. RESULTS There were almost no statistically significant differences between the humans' gold standard and the AI's predictions, and the differences between the two analyses do not appear to be clinically relevant. CONCLUSIONS We created an AI algorithm able to analyze unknown cephalometric X‑rays at almost the same quality level as experienced human examiners (the current gold standard). This study is among the first to successfully implement AI in dentistry, in particular orthodontics, while satisfying medical requirements.
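The humans' gold standard in this design is simply the per-parameter median over the examiners; a toy sketch with invented values for three parameters (not the study's data):

```python
import numpy as np

# rows: examiners, columns: cephalometric parameters (values are invented)
examiners = np.array([
    [82.1, 80.0, 2.1],
    [81.8, 79.7, 2.1],
    [82.4, 80.2, 2.2],
])
gold = np.median(examiners, axis=0)  # per-parameter human gold standard
ai = np.array([82.0, 79.9, 2.3])     # hypothetical AI predictions
print(gold, ai - gold)               # compare AI against the median
```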
Affiliation(s)
- Felix Kunz: Poliklinik für Kieferorthopädie, Universitätsklinikum Würzburg, Pleicherwall 2, 97070, Würzburg, Germany
- Florian Zeman: Zentrum für Klinische Studien, Universitätsklinikum Regensburg, Franz-Josef-Strauß-Allee 11, 93053, Regensburg, Germany
- Julian Boldt: Poliklinik für Zahnärztliche Prothetik, Universitätsklinikum Würzburg, Pleicherwall 2, 97070, Würzburg, Germany
34
Interactive video-player to improve social smile in individuals with autism spectrum disorder. ADVANCES IN AUTISM 2019. [DOI: 10.1108/aia-05-2019-0014] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose
The purpose of this paper is to propose and develop a live interaction-based video player system named LIV4Smile for the improvement of the social smile in individuals with autism spectrum disorder (ASD).
Design/methodology/approach
The proposed LIV4Smile intervention was a video player that operated by detecting smiles using a convolutional neural network (CNN)-based algorithm. To maintain live interaction, a CNN-based smile detector was configured and used in the system. A statistical test was also conducted to validate the performance of the system.
Findings
A significant improvement was observed in the smile responses of individuals with ASD when the proposed LIV4Smile system was used in a real-time environment.
Research limitations/implications
The small sample size, and the need for clinical validation and for initial training of individuals with ASD on LIV4Smile, should be considered as limitations.
Originality/value
The main aim of this study was to address the inclusive practices for children with autism. The proposed CNN algorithm-based LIV4Smile intervention resulted in high accuracy in facial smile detection.
35
Cheng S, Zhou G. Facial Expression Recognition Method Based on Improved VGG Convolutional Neural Network. INT J PATTERN RECOGN 2019. [DOI: 10.1142/s0218001420560030] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Because a shallow neural network has limited ability to represent complex functions with limited samples and computation units, its generalization ability is limited on complex classification problems. The essence of deep learning is to learn a nonlinear network structure that forms distributed representations of the input data, demonstrating a powerful ability to learn deep features from a small set of samples. To realize accurate classification of expression images under normal conditions, this paper proposes an expression recognition model based on an improved Visual Geometry Group (VGG) deep convolutional neural network (CNN). Based on VGG-19, the model optimizes the network structure and network parameters. Most expression databases cannot support training an entire network from scratch due to the lack of sufficient data, so this paper uses transfer learning techniques to overcome the shortage of training images. A shallow CNN, AlexNet, and the improved VGG-19 deep CNN are trained and analyzed on the Extended Cohn–Kanade expression database, and the experimental results are compared. The results indicate that the improved VGG-19 network model achieves 96% accuracy in facial expression recognition, clearly superior to the other network models.
Affiliation(s)
- Shuo Cheng: Harbin Normal University, Harbin 150000, P. R. China
- Guohui Zhou: Harbin Normal University, Harbin 150000, P. R. China
36
FMnet: Iris Segmentation and Recognition by Using Fully and Multi-Scale CNN for Biometric Security. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9102042] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Recent work in deep learning shows that neural networks have high potential in the field of biometric security. The advantage of this type of architecture, besides its robustness, is that the network learns feature vectors by building intelligent filters automatically, thanks to the convolution layers. In this paper, we propose an algorithm, "FMnet", for iris recognition using a Fully Convolutional Network (FCN) and a Multi-scale Convolutional Neural Network (MCNN). By exploiting the ability of convolutional neural networks to learn and operate at different resolutions, the proposed iris recognition method performs feature extraction and classification jointly, overcoming the limitations of classical methods that rely only on handcrafted feature extraction. The proposed algorithm shows better classification results than other state-of-the-art iris recognition approaches.
37
Zhou LQ, Wang JY, Yu SY, Wu GG, Wei Q, Deng YB, Wu XL, Cui XW, Dietrich CF. Artificial intelligence in medical imaging of the liver. World J Gastroenterol 2019; 25:672-682. [PMID: 30783371 PMCID: PMC6378542 DOI: 10.3748/wjg.v25.i6.672] [Citation(s) in RCA: 129] [Impact Index Per Article: 21.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/25/2018] [Revised: 12/24/2018] [Accepted: 01/09/2019] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI), particularly deep learning algorithms, is gaining extensive attention for its excellent performance in image-recognition tasks. These methods can automatically make quantitative assessments of complex medical image characteristics and achieve higher diagnostic accuracy with greater efficiency. AI is widely used and increasingly popular in medical imaging of the liver, including radiology, ultrasound, and nuclear medicine. AI can assist physicians in making more accurate and reproducible imaging diagnoses and can also reduce physicians' workload. This article covers basic technical knowledge about AI, including traditional machine learning and deep learning algorithms, especially convolutional neural networks, and their clinical application in the medical imaging of liver diseases, such as detecting and evaluating focal liver lesions, facilitating treatment, and predicting treatment response. We conclude that machine-assisted medical services will be a promising solution for future liver care. Lastly, we discuss the challenges and future directions of the clinical application of deep learning techniques.
Affiliation(s)
- Li-Qiang Zhou: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Jia-Yu Wang: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Song-Yuan Yu: Department of Ultrasound, Tianyou Hospital Affiliated to Wuhan University of Technology, Wuhan 430030, Hubei Province, China
- Ge-Ge Wu: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Qi Wei: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- You-Bin Deng: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Xing-Long Wu: School of Mathematics and Computer Science, Wuhan Textile University, Wuhan 430200, Hubei Province, China
- Xin-Wu Cui: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Christoph F Dietrich: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China; Medical Clinic 2, Caritas-Krankenhaus Bad Mergentheim, Academic Teaching Hospital of the University of Würzburg, Würzburg 97980, Germany
38
Liu R, Miao Q, Song J, Quan Y, Li Y, Xu P, Dai J. Multiscale road centerlines extraction from high-resolution aerial imagery. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.036] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
39
Raman R, Srinivasan S, Virmani S, Sivaprasad S, Rao C, Rajalakshmi R. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye (Lond) 2019; 33:97-109. [PMID: 30401899 PMCID: PMC6328553 DOI: 10.1038/s41433-018-0269-y] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Accepted: 10/07/2018] [Indexed: 02/05/2023] Open
Abstract
Remarkable advances in biomedical research have led to the generation of large amounts of data. Using artificial intelligence, it has become possible to extract meaningful information from large volumes of data in a shorter time frame with minimal human intervention. In particular, convolutional neural networks (a deep learning method) have been taught to recognize pathological lesions in images. Diabetes has high morbidity, with millions of people who need to be screened for diabetic retinopathy (DR). Deep neural networks offer great advantages in screening for DR from retinal images, improving the identification of DR lesions and disease risk factors with high accuracy and reliability. This review compares the current evidence on various deep learning models for the diagnosis of DR.
Affiliation(s)
- Rajiv Raman: Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, 600006, India
- Sunny Virmani: Verily Life Sciences LLC, South San Francisco, California, USA
- Sobha Sivaprasad: NIHR Moorfields Biomedical Research Centre, London, EC1V 2PD, UK
- Chetan Rao: Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, 600006, India
- Ramachandran Rajalakshmi: Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, 600086, India
40
SDAE-BP Based Octane Number Soft Sensor Using Near-infrared Spectroscopy in Gasoline Blending Process. Symmetry (Basel) 2018. [DOI: 10.3390/sym10120770] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Octane number, one of the most important properties in the gasoline blending process, is difficult to measure in real time. To address this problem, a novel deep learning-based soft sensor strategy using the near-infrared (NIR) spectra obtained in the gasoline blending process is proposed. First, the denoising auto-encoder (DAE), a network whose input and output layers are symmetric about the hidden layer, learns a high-level representation of the input. A stacked DAE (SDAE) is trained on unlabeled NIR data and the weights of each DAE are recorded. These recorded weights are then used as the initial parameters of a back-propagation (BP) network, because SDAE-pretrained initial weights help avoid local minima and accelerate convergence, and the soft sensor model is trained with labeled NIR data. Finally, the resulting soft sensor model is used to estimate the octane number in real time. The performance of the method is demonstrated on an NIR dataset of gasoline collected from a real gasoline blending process. Compared with a PCA-BP soft sensor model (BP with the dataset dimension reduced by principal component analysis), the prediction accuracy improved from 86.4% to 94.8%, and the training time decreased from 20.1 s to 16.9 s. SDAE-BP is therefore proposed as a novel method for rapid and efficient determination of octane number in the gasoline blending process.
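One denoising auto-encoder layer of the kind stacked here can be sketched in NumPy (tied weights, no biases, and a toy random dataset for brevity; a schematic of the technique, not the paper's network):

```python
import numpy as np

def train_dae_layer(X, hidden, noise=0.1, lr=0.05, epochs=300, seed=0):
    """Train one denoising auto-encoder layer: corrupt the input, encode
    through a tanh bottleneck, decode with tied weights, and minimize the
    reconstruction error against the clean input."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    losses = []
    for _ in range(epochs):
        Xn = X + rng.normal(0.0, noise, X.shape)  # corrupt the input
        H = np.tanh(Xn @ W)                       # encode
        R = H @ W.T                               # decode (tied weights)
        err = R - X
        losses.append(float((err ** 2).mean()))
        # gradient of the tied-weight reconstruction loss w.r.t. W
        gW = Xn.T @ ((err @ W) * (1.0 - H ** 2)) + err.T @ H
        W -= lr * gW / n
    return W, losses

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))
W, losses = train_dae_layer(X, hidden=4)
features = np.tanh(X @ W)  # would initialise / feed the BP network
print(losses[0], losses[-1])  # compare initial vs final reconstruction error
```

Stacking repeats this step, each new layer trained on the previous layer's features; the learned weights then initialise the supervised BP network.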
41
Wang S, Zhang R, Deng Y, Chen K, Xiao D, Peng P, Jiang T. Discrimination of smoking status by MRI based on deep learning method. Quant Imaging Med Surg 2018; 8:1113-1120. [PMID: 30701165 DOI: 10.21037/qims.2018.12.04] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
Background This study aimed to assess the feasibility of deep learning-based magnetic resonance imaging (MRI) in the prediction of smoking status. Methods Head MRI 3D-T1WI images of 127 subjects (61 smokers and 66 non-smokers) were collected, with 176 image slices obtained per subject. Subjects were 23-45 years old, and the smokers had at least 5 years of smoking experience. Approximately 25% of the subjects were randomly selected as the test set (15 smokers and 16 non-smokers), and the remaining subjects formed the training set. Two deep learning models were developed: a deep 3D convolutional neural network (Conv3D) and a convolutional neural network combined with a recurrent neural network with long short-term memory architecture (ConvLSTM). Results In predicting smoking status, the Conv3D model achieved an accuracy of 80.6% (25/31), a sensitivity of 80.0% and a specificity of 81.3%, while the ConvLSTM model achieved an accuracy of 93.5% (29/31), a sensitivity of 93.33% and a specificity of 93.75%. The accuracy obtained by these methods was significantly higher than that (<70%) obtained with support vector machine (SVM) methods. Conclusions Deep learning-based MRI can accurately predict smoking status. Studies with larger sample sizes are needed to improve accuracy and to predict the level of nicotine dependence.
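The reported figures can be reproduced from confusion-matrix counts; on the ConvLSTM test split of 15 smokers and 16 non-smokers, 29/31 correct corresponds to 14 smokers and 15 non-smokers identified:

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "sensitivity": tp / (tp + fn),  # recall on the positive class
        "specificity": tn / (tn + fp),  # recall on the negative class
    }

# ConvLSTM test split: 15 smokers (14 detected), 16 non-smokers (15 detected)
m = binary_metrics(tp=14, fn=1, tn=15, fp=1)
print({k: round(v * 100, 2) for k, v in m.items()})
# accuracy 93.55, sensitivity 93.33, specificity 93.75
```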
Affiliation(s)
- Shuangkun Wang
- Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 10020, China
- Dan Xiao
- Tobacco Medicine and Tobacco Cessation Center, China-Japan Friendship Hospital, Beijing 100029, China; WHO Collaborating Center for Tobacco Cessation and Respiratory Diseases Prevention, China-Japan Friendship Hospital, Beijing 100029, China
- Peng Peng
- Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 10020, China
- Tao Jiang
- Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 10020, China
|
42
|
|
43
|
An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox. SENSORS 2017; 17:s17020414. [PMID: 28230767 PMCID: PMC5335931 DOI: 10.3390/s17020414] [Citation(s) in RCA: 80] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/30/2016] [Revised: 02/11/2017] [Accepted: 02/16/2017] [Indexed: 11/28/2022]
Abstract
A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach faces two challenges: (1) feature extraction from various types of sensory data and (2) selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are required for these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and adaptively optimize a combination of different fusion levels to satisfy the requirements of any fault diagnosis task. The proposed method is tested on a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method detects the conditions of the planetary gearbox effectively, with the best diagnosis accuracy among all comparative methods in the experiment.
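The adaptive weighting of fusion levels might be pictured as a softmax-weighted combination of per-level fault scores. The scores and raw weights below are invented for illustration; in the paper the combination would be learned jointly with the DCNN rather than fixed by hand:

```python
import math

def softmax(scores):
    # numerically stable softmax: weights are positive and sum to 1
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical fault scores for one sample from three fusion branches,
# e.g. data-level, feature-level, and decision-level fusion.
level_scores = [0.62, 0.80, 0.71]

# Hypothetical learnable combination parameters (fixed here).
raw_weights = [0.2, 1.1, 0.5]
w = softmax(raw_weights)

# Fused score: a convex combination, so it always stays within the
# range spanned by the individual fusion levels.
fused = sum(wi * si for wi, si in zip(w, level_scores))
```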
|
44
|
Arık SÖ, Ibragimov B, Xing L. Fully automated quantitative cephalometry using convolutional neural networks. J Med Imaging (Bellingham) 2017; 4:014501. [PMID: 28097213 PMCID: PMC5220585 DOI: 10.1117/1.jmi.4.1.014501] [Citation(s) in RCA: 128] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2016] [Accepted: 12/12/2016] [Indexed: 11/14/2022] Open
Abstract
Quantitative cephalometry plays an essential role in clinical diagnosis, treatment, and surgery. Development of fully automated techniques for these procedures is important to enable consistently accurate computerized analyses. We study the application of deep convolutional neural networks (CNNs) to fully automated quantitative cephalometry for the first time. The proposed framework utilizes CNNs to detect landmarks that describe the anatomy of the depicted patient and yield quantitative estimation of pathologies in the jaws and skull base regions. We use a publicly available cephalometric x-ray image dataset to train CNNs to recognize landmark appearance patterns. The CNNs are trained to output probabilistic estimations of different landmark locations, which are combined using a shape-based model. We evaluate the overall framework on the test set and compare it with other proposed techniques. We use the estimated landmark locations to assess anatomically relevant measurements and classify them into different anatomical types. Overall, our results demonstrate high anatomical landmark detection accuracy ([Formula: see text] to 2% higher success detection rate for a 2-mm range compared with the top benchmarks in the literature) and high anatomical type classification accuracy ([Formula: see text] average classification accuracy for the test set). We demonstrate that CNNs, which take only raw image patches as input, are promising for accurate quantitative cephalometry.
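The landmark-detection step can be illustrated as taking the argmax of a CNN's probability map and then blending it with a shape-based prior. The heatmap values, prior position, and blend factor below are all hypothetical:

```python
# A per-landmark CNN would output a probability map over image
# positions; the landmark estimate is the argmax, optionally pulled
# toward a mean-shape prior by a shape-based model.
heatmap = [
    [0.01, 0.02, 0.01, 0.00],
    [0.03, 0.10, 0.40, 0.05],
    [0.02, 0.08, 0.20, 0.04],
    [0.00, 0.01, 0.02, 0.01],
]

def argmax_2d(grid):
    best, best_pos = float("-inf"), (0, 0)
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v > best:
                best, best_pos = v, (r, c)
    return best_pos

cnn_estimate = argmax_2d(heatmap)   # peak at row 1, column 2
shape_prior = (2.0, 2.0)            # hypothetical mean landmark position
alpha = 0.8                         # trust placed in the CNN estimate

# Convex blend of the CNN peak and the shape prior.
landmark = tuple(alpha * c + (1 - alpha) * p
                 for c, p in zip(cnn_estimate, shape_prior))
```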
Affiliation(s)
- Sercan Ö. Arık
- Baidu USA, 1195 Bordeaux Drive, Sunnyvale, California 94089, United States
- Bulat Ibragimov
- Stanford University, Department of Radiation Oncology, School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305, United States
- Lei Xing
- Stanford University, Department of Radiation Oncology, School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305, United States
|
45
|
|
46
|
Hyperspectral Imagery Classification Using Sparse Representations of Convolutional Neural Network Features. REMOTE SENSING 2016. [DOI: 10.3390/rs8020099] [Citation(s) in RCA: 156] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
47
|
Tang A, Lu K, Wang Y, Huang J, Li H. A Real-Time Hand Posture Recognition System Using Deep Neural Networks. ACM T INTEL SYST TEC 2015. [DOI: 10.1145/2735952] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Hand posture recognition (HPR) is a challenging task, due both to the difficulty of detecting and tracking hands with normal cameras and to the limitations of traditional manually selected features. In this article, we propose a two-stage HPR system for sign language recognition using a Kinect sensor. In the first stage, we propose an effective algorithm for hand detection and tracking. The algorithm incorporates both color and depth information, without requiring a uniform-colored or stable background. It can handle situations in which hands are very close to other parts of the body or are not the nearest objects to the camera, and it tolerates occlusion of hands by faces or other hands. In the second stage, we apply deep neural networks (DNNs) to automatically learn features from hand posture images that are insensitive to movement, scaling, and rotation. Experiments verify that the proposed system works quickly and accurately and achieves a recognition accuracy as high as 98.12%.
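The first-stage combination of color and depth cues might be sketched as a simple per-pixel gate: keep pixels that are skin-colored and lie within a depth band around the tracked hand. The thresholds and toy pixels below are invented; the actual algorithm is considerably more involved:

```python
# Toy pixels as (depth_mm, is_skin_color) pairs.
pixels = [
    (650, True),    # right depth, skin-colored: keep
    (660, True),    # keep
    (900, True),    # skin-colored but far outside the depth band: drop
    (655, False),   # right depth but not skin-colored: drop
    (640, True),    # keep
]

def hand_mask(pixels, hand_depth=650, band=30):
    """Gate each pixel on color AND depth around the tracked hand."""
    return [is_skin and abs(depth - hand_depth) <= band
            for depth, is_skin in pixels]

mask = hand_mask(pixels)
```

Requiring both cues to agree is what lets such a scheme reject skin-colored background (wrong depth) and near objects that are not skin (wrong color).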
Affiliation(s)
- Ao Tang
- University of Science and Technology of China, Hefei, China
- Ke Lu
- University of the Chinese Academy of Sciences, Beijing, China
- Yufei Wang
- University of Science and Technology of China, Hefei, China
- Jie Huang
- University of Science and Technology of China, Hefei, China
- Houqiang Li
- University of Science and Technology of China, Hefei, China
|
48
|
Zhang Y, Zhao D, Sun J, Zou G, Li W. Adaptive Convolutional Neural Network and Its Application in Face Recognition. Neural Process Lett 2015. [DOI: 10.1007/s11063-015-9420-y] [Citation(s) in RCA: 61] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
49
|
Pixel-based Machine Learning in Computer-Aided Diagnosis of Lung and Colon Cancer. INTELLIGENT SYSTEMS REFERENCE LIBRARY 2014. [DOI: 10.1007/978-3-642-40017-9_5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
50
|
Multiscale Convolutional Neural Networks for Vision–Based Classification of Cells. COMPUTER VISION – ACCV 2012 2013. [DOI: 10.1007/978-3-642-37444-9_27] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
|