Basic Study Open Access
Copyright ©The Author(s) 2020. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Med Imaging. Sep 28, 2020; 1(3): 94-107
Published online Sep 28, 2020. doi: 10.35711/aimi.v1.i3.94
Predicting a live birth by artificial intelligence incorporating both the blastocyst image and conventional embryo evaluation parameters
Yasunari Miyagi, Department of Artificial Intelligence, Medical Data Labo, Okayama 703-8267, Japan
Yasunari Miyagi, Department of Gynecologic Oncology, Saitama Medical University International Medical Center, Hidaka 350-1298, Saitama, Japan
Toshihiro Habara, Rei Hirata, Nobuyoshi Hayashi, Department of Reproduction, Okayama Couples' Clinic, Okayama 701-1152, Japan
ORCID number: Yasunari Miyagi (0000-0003-0962-033X); Toshihiro Habara (0000-0003-3853-8044); Rei Hirata (0000-0002-2248-4224); Nobuyoshi Hayashi (0000-0001-6576-3066).
Author contributions: Miyagi Y, Habara T, Hirata R, and Hayashi N designed and coordinated the study; Miyagi Y and Hayashi N supervised the project; Habara T and Hirata R acquired and validated the data; Miyagi Y developed the artificial intelligence software, analyzed and interpreted the data, and wrote the draft; Hayashi N set up project administration; Miyagi Y, Habara T, Hirata R, and Hayashi N wrote the manuscript; and all authors approved the final version of the article.
Institutional review board statement: The study was reviewed and approved by the Institutional Review Board at Okayama Couples’ Clinic.
Conflict-of-interest statement: The authors declare no conflict of interest.
Data sharing statement: Informed consent for data sharing was not obtained. No additional data are available.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Yasunari Miyagi, MD, PhD, Director, Professor, Surgeon, Department of Artificial Intelligence, Medical Data Labo, 289-48 Yamasaki, Naka ward, Okayama 703-8267, Japan. ymiyagi@mac.com
Received: August 24, 2020
Peer-review started: August 24, 2020
First decision: September 13, 2020
Revised: September 19, 2020
Accepted: September 19, 2020
Article in press: September 19, 2020
Published online: September 28, 2020
Processing time: 34 Days and 15.3 Hours

Abstract
BACKGROUND

The achievement of live birth is the goal of assisted reproductive technology in reproductive medicine. Although the blastocyst selected for transfer to the uterus is evaluated by microscopic inspection, the resulting live birth rate is only about 30%-40%, and no method of predicting live birth from the blastocyst image is known. Live birth correlates with several conventional embryo evaluation parameters (CEE), such as maternal age. Therefore, it is necessary to develop artificial intelligence (AI) that combines the blastocyst image and CEE to predict live birth.

AIM

To develop an AI classifier for blastocyst images and CEE to predict the probability of achieving a live birth.

METHODS

A total of 5691 images of blastocysts on the fifth day after oocyte retrieval, obtained from consecutive patients between January 2009 and April 2017 with fully deidentified data, were retrospectively enrolled; patients received explanations, and a website containing additional information offered an opt-out option. We developed a system in which an original deep learning neural network architecture predicts the probability of live birth from a blastocyst image and CEE.

RESULTS

The live birth rate was 0.387 (= 1587/4104 cases). Ten items of independent clinical information were selected for predicting live birth, which significantly avoided multicollinearity. A single AI classifier, composed of ten layers of convolutional neural networks for the image and an elementwise layer for each of the ten factors, was developed and obtained with 42792 training data points and an L2 regularization value of 0.001. The accuracy, sensitivity, specificity, negative predictive value, positive predictive value, Youden J index, and area under the curve values for predicting live birth were 0.743, 0.638, 0.789, 0.831, 0.573, 0.427, and 0.740, respectively. The optimal cut-off point of the receiver operating characteristic curve was 0.207.

CONCLUSION

AI classifiers have the potential to predict live births in a way that humans cannot. Artificial intelligence may bring progress to assisted reproductive technology.

Key Words: Artificial intelligence; Blastocyst; Deep learning; Live birth; Machine learning; Neural network

Core Tip: The feasibility of predicting live birth by artificial intelligence (AI) combining blastocyst images and conventional embryo evaluation parameters (CEE) is investigated, because there is no human method of predicting live birth from a blastocyst image. Deep learning of blastocyst images was performed using an original convolutional neural network, and an elementwise layer network was used for the independent CEE factors to develop a single AI classifier. The accuracy, sensitivity, specificity, and area under the curve values for predicting live birth by the AI were 0.743, 0.638, 0.789, and 0.740, respectively.



INTRODUCTION

The achievement of live birth is the goal of assisted reproductive technology in reproductive medicine. Miscarriage or embryo developmental failure causes loss of cost and time and brings negative psychological outcomes to the patient. Although the morphological structures of oocytes have been studied, a prognostic indicator of their developmental ability has not yet been found[1]. Time-lapse microscopy and conventional morphological evaluations, both recently studied, are not sufficient to ensure the thriving of the embryo after transfer[2]. The clinical feasibility of time-lapse imaging has not yet been established. Preimplantation genetic testing for aneuploidy[3,4], an invasive procedure for embryos, is the subject of ethical considerations. Since embryos are genetically heterogeneous, the chromosomal profile of a biopsy sample does not always reflect that of the rest of the embryo[5]. In short, no method using morphological and/or non-morphological analysis has been established in practice to predict the live birth of a blastocyst.

Recently, artificial intelligence (AI) has been developed[6] and investigated as a diagnostic tool in reproductive medicine; for example, prediction of embryo viability achieved a sensitivity of 70.1% for viable embryos and a specificity of 60.5% for non-viable embryos[7]. Another report showed that an AI classifier applied to images of mature blastocysts, the final stage prior to freezing or transfer and the most important embryo stage for evaluating assisted reproductive technology, demonstrated the potential for predicting the probability of live birth[8]. In our 2019 report, deep learning with convolutional neural networks (CNN)[9-12] was applied to blastocyst images classified by maternal age to predict live birth, and the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve (AUC) were 0.732, 0.673, 0.753, 0.404, 0.862 and 0.726, respectively[13]. To the best of our knowledge, these are the only published values for predicting live birth through image recognition of blastocyst images. We have also reported live birth prediction using a multivariate logistic regression function that combines conventional embryo evaluation (CEE) parameters (e.g., maternal age, body mass index, etc.) with deep learning applied to blastocyst images classified by age; this method was defined as the combination method[13,14], and its accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and AUC values for all ages were 0.721, 0.779, 0.704, 0.400, 0.885 and 0.773, respectively (2019). The combination method seemed to be better than CEE alone.

AI can be trained on images and non-image data (such as numbers[15]) simultaneously. A classifier made by AI can convert data composed of image and non-image data into a confidence score, an estimated probability of belonging to a target category (such as the live birth category). Therefore, when given images and non-images as input, an AI classifier trained on image and non-image data can generate confidence scores. This ability of AI to convert images into probabilities seems to be an outstanding advantage. Compared with AI classifiers trained only on images, AI classifiers trained with more information (including image and non-image data) may show better results. It was therefore worth investigating whether a single deep learning AI classifier applied to both the blastocyst image and the independent CEE factors, not classified by age but including age, might demonstrate better predictability of live birth than the combination method. Whereas the combination method requires multiple AI classifiers, a single AI classifier need not be classified by age, because age is included as an independent CEE factor[16,17]. Therefore, we constructed an original neural network architecture for the AI classifier as a pilot study and assessed the feasibility of the classifier in comparison with the combination method.

MATERIALS AND METHODS
Patients and data preparation

Images of blastocysts with morphological features and clinical information, obtained from consecutive patients at the Okayama Couples’ Clinic from January 1, 2009, to April 30, 2017, were enrolled with completely deidentified data. Only elective single embryo transfers were performed. All blastocysts were tracked to confirm whether the outcome was a live birth or a non-live birth. This retrospective study was approved by the Institutional Review Board (IRB) of Okayama Couples' Clinic (IRB number 18000128-5). This non-interventional study provided patients with an opt-out option along with additional information on the clinic’s website.

CEE

All blastocysts with clinical information and morphological features, such as maternal age, body mass index, the number of past embryo transfers, the number of in vitro fertilization procedures, anti-Müllerian hormone value, follicle-stimulating hormone value, blastocyst grade on day 3, embryo cryopreservation day, trophectoderm grade, inner cell mass grade, the number of blastomeres on day 3 after insemination, the average diameter of the blastocyst, antral follicle count, the presence of immunological infertility, the presence of oviductal infertility, the presence of endometriosis, the insemination procedure, the ovarian stimulation method, the grade of smooth endoplasmic reticulum clusters, the degree of blastocyst expansion, the presence of vacuoles, the presence of refractile bodies, male age, and male body mass index, were collected to evaluate the outcome of live birth vs non-live birth. This information was provided by doctors and embryologists engaged in clinical practice for over twenty years who implemented standard laboratory practices for embryo morphological evaluation according to the 2011 international consensus meeting[18].

The relationship between each CEE factor and live birth was assessed, and univariate regression functions were obtained. Significant factors without multicollinearity, a state of strong correlation between variables, were selected as independent factors for predicting live birth.
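
As an illustration of this selection step, a minimal Wolfram Language sketch (the language of this study's development environment) might fit one univariate logistic function and screen the candidate factors pairwise for strong correlation. The variable names ageData and ceeMatrix and the 0.7 correlation threshold are assumptions for illustration, not values reported in this study.

```
(* Sketch: fit a univariate logistic function for one factor and screen
   candidate CEE factors for multicollinearity via pairwise correlations.
   ageData ({age, outcome} pairs), ceeMatrix (cases x factors), and the
   0.7 threshold are illustrative assumptions. *)
fit = NonlinearModelFit[ageData, k/(1 + Exp[b0 + b1 x]), {k, b0, b1}, x];

corr = Correlation[ceeMatrix];          (* correlation matrix of the factors *)
n = Length[corr];
collinearPairs = Select[Subsets[Range[n], {2}],
   Abs[corr[[#[[1]], #[[2]]]]] > 0.7 &] (* flag strongly correlated pairs *)
```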

Blastocyst images

During routine conventional microscopic observation at a magnification of 400 times, a single clear image of each blastocyst was captured at about 115 h after insemination, or at about 139 h if the blastocyst was less than approximately 120 µm in diameter. According to a published report[14], each image was cropped to a square and then saved at a size of 50 × 50 pixels, which provided the best accuracy. The images were deidentified so that no person could be identified.
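
A minimal sketch of this preprocessing in the Wolfram Language is shown below; the centered square crop is an assumption, since the text does not state how the square region was chosen.

```
(* Crop each micrograph to its largest centered square and resize it to
   50 x 50 pixels, as described above; the centered crop is an assumption. *)
prepare[img_Image] := ImageResize[
   ImageCrop[img, {1, 1} Min[ImageDimensions[img]]],
   {50, 50}]
```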

Preparation for AI

The deidentified data set, which included all the CEE factors and the images of blastocysts that resulted in miscarriage, non-live birth, or live birth, was transferred to the AI system offline.

AI classifier

AI classification programs were developed as shown in Figure 1. The AI classifier was made up of both a CNN[19-24] with L2 regularization[25,26] for the image and elementwise functions, which apply a function to each element of a tensor, for each factor of the CEE, to obtain the probability of live birth or non-live birth (Figure 1). We introduced deep learning for the images with a published CNN architecture, excluding its softmax layer[14]. The CNN for the image consisted of 10 layers combining convolutional layers with multiple kernel sizes[27-29], pooling layers[30-33], a flatten layer[34], linear layers[35,36], and rectified linear unit layers[37,38]. For the CEE factors, we applied the elementwise functions that we previously reported[14].

Figure 1
Figure 1 Flowchart for generating the artificial intelligence classifiers. The artificial intelligence classifier consisted of a combination of 10 layers of a convolutional neural network for the image and an elementwise layer for each of the ten significant factors of the conventional embryo evaluation (CEE) that we had reported[14]. The ten factors chosen as independent factors to predict live birth were age, the number of embryo transfers, anti-Müllerian hormone concentration, day-3 blastomere number, grade on day 3, embryo cryopreservation day, inner cell mass, trophectoderm, average diameter, and body mass index. The functions in the elementwise layer for each factor of the CEE are shown as formulas in Table 1. The image tensor and the tensors of the ten CEE factors were combined at the catenate layer. AI: Artificial intelligence.

Then, the tensor from the convolutional network for the image and the scalars produced by the elementwise functions for the CEE factors were catenated and input into a batch normalization layer[39]. The data then passed through a linear layer and a softmax layer[40,41], which output the probability of live birth or non-live birth.
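
A minimal Wolfram Language sketch of such a network follows (the study was implemented in Wolfram Language 12.0; see Development environment). The kernel sizes, channel counts, and the single illustrated CEE branch are assumptions; the actual classifier used 10 CNN layers and one elementwise branch per CEE factor.

```
(* Hedged sketch of the combined classifier: a CNN branch for the 50 x 50
   blastocyst image and an elementwise branch per CEE factor (only the age
   branch is shown), catenated and passed through batch normalization, a
   linear layer, and a softmax layer. Layer widths are assumptions. *)
cnnBranch = NetChain[{
    ConvolutionLayer[32, {3, 3}], Ramp, PoolingLayer[{2, 2}],
    ConvolutionLayer[64, {3, 3}], Ramp, PoolingLayer[{2, 2}],
    FlattenLayer[], LinearLayer[64], Ramp},
   "Input" -> NetEncoder[{"Image", {50, 50}}]];

(* the fitted univariate logistic for age (Table 1), applied elementwise *)
ageBranch = ElementwiseLayer[0.451/(1 + Exp[-10.742 + 0.284 #]) &, "Input" -> {1}];

net = NetGraph[
   <|"cnn" -> cnnBranch,
     "fAge" -> ageBranch, (* ...one branch per remaining CEE factor... *)
     "cat" -> CatenateLayer[],
     "bn" -> BatchNormalizationLayer[],
     "lin" -> LinearLayer[2],
     "out" -> SoftmaxLayer[]|>,
   {NetPort["Image"] -> "cnn", NetPort["Age"] -> "fAge",
    {"cnn", "fAge"} -> "cat" -> "bn" -> "lin" -> "out"}]
```

Training then determines the weights, and the probability of the live birth category is read from the softmax output, which is the confidence score described above.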

The appropriate number of training data points was investigated by evaluating the accuracies using the ten-fold cross-validation method[42-44]. First, all data were randomly divided into test and training datasets at a ratio of one to nine. Four-fifths of the training dataset was used as the AI training dataset, and the remaining one-fifth was defined as the validation dataset. The AI training dataset, validation dataset, and non-overlapping test dataset were created in this fashion. The AI classifier was trained on the AI training dataset with concurrent validation on the validation dataset, and the AI classifier was then evaluated with the test dataset. The training dataset was augmented by rotating images, a common practice known as data augmentation, because rotating a blastocyst image by any angle produces different vector data of the same category[14]. This procedure was repeated ten times so that all the data were incorporated. The number of training data points was increased until the accuracy value was as large as possible while the variance of the accuracy value was kept as small as possible; this process tentatively indicates an appropriate amount of training data for verifying the prediction more accurately. Then, by varying the hyperparameters and the number of training data points, the best AI classifier, showing the best accuracy, was finally selected using an early stopping procedure. The feasibility of the new method was evaluated by comparison with the combination method.
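
One fold of this procedure might be sketched as follows; the association-style examples, the 90-degree rotation steps, and the stopping patience are illustrative assumptions rather than the study's reported settings.

```
(* One fold: split the data 1:9 into test and training sets, hold out
   one-fifth of the training set for validation, augment the training
   images by rotation, and train with L2 regularization and early
   stopping. allData (a list of associations with "Image", the CEE
   ports, and "Output"), the angles, and the patience are assumptions. *)
augment[ex_Association] := Table[
   Append[ex, "Image" -> ImageRotate[ex["Image"], theta]],
   {theta, 0, 3 Pi/2, Pi/2}];

{test, rest} = TakeDrop[RandomSample[allData], Floor[Length[allData]/10]];
{valid, train} = TakeDrop[rest, Floor[Length[rest]/5]];

trained = NetTrain[net, Flatten[augment /@ train],
   ValidationSet -> valid,
   Method -> {"ADAM", "L2Regularization" -> 0.001},
   TrainingStoppingCriterion -> <|"Criterion" -> "Loss", "Patience" -> 10|>];
```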

Development environment

The tools and conditions used for development were as follows: an Intel Core i5 with 32 GB of RAM (Intel, Santa Clara, CA, United States) running Windows 10 (Microsoft, Redmond, WA, United States), an NVIDIA GeForce GTX 1080 Ti (NVIDIA, Santa Clara, CA, United States), and Wolfram Language 12.0 (Wolfram Research, Champaign, IL, United States).

Statistical analysis

Wolfram Language 12.0 was used for all statistical analyses. One-way analysis of variance and univariate regression analysis were used. P < 0.05 was considered to indicate statistical significance.

RESULTS
Clinical information and morphological features

There were 5691 blastocysts, for which the outcomes of live birth vs non-live birth were 1587/4104, and images, morphological feature data, and clinical information were obtained. The live birth rate was 0.387. Table 1 shows the independent clinical information and morphological features for predicting live birth. The mean ± SD/median/range values for age, the number of embryo transfers, anti-Müllerian hormone concentration (ng/mL), day-3 blastomere number, grade on day 3 (class A = 1, B = 2, C = 3, D = 4), embryo cryopreservation day (day 5 = 1, day 6 = 2), inner cell mass (A = 1, B = 2, C = 3), trophectoderm (A = 1, B = 2, C = 3), average diameter (µm), and body mass index (kg/m2) were 35.75 ± 4.78/36/20-48; 2.75 ± 2.36/2/1-30; 3.91 ± 3.54/2.94/0.0-32.2; 8.03 ± 1.74/8/2-17; 1.87 ± 0.57/2/1-4; 1.20 ± 0.40/1/1-2; 1.58 ± 0.55/2/1-3; 2.01 ± 0.75/2/1-3; 154.77 ± 24.12/153.8/81.3-242.5; and 21.30 ± 3.16/20.6/13.9-43.3, respectively.

Table 1 The morphological features and clinical information of 5691 blastocysts and the univariate regression formulas[13] of the independent factors for predicting the probability of live birth

| Independent factors | mean ± SD | Median | Minimum | Maximum | Formulas | Coefficients |
|---|---|---|---|---|---|---|
| Age | 35.75 ± 4.78 | 36 | 20 | 48 | k/[1 + Exp (β0 + β1x)] | β0 = -10.742 ± 4.106 (P = 0.0089); β1 = 0.284 ± 0.109 (P = 0.0088); k = 0.451 |
| Number of embryo transfer procedures in the past | 2.75 ± 2.36 | 2 | 1 | 30 | 1/[1 + Exp (β0 + β1x)] | β0 = 0.635 ± 1.158 (P = 0.584); β1 = 0.156 ± 0.123 (P = 0.204) |
| Anti-Müllerian hormone concentration (ng/mL) | 3.91 ± 3.54 | 2.94 | 0.0 | 32.2 | 1/[1 + Exp (β0 + β1x)] | β0 = 1.282 ± 2.640 (P = 0.627); β1 = 0.062 ± 0.139 (P = 0.678) |
| Day-3 blastomere number | 8.03 ± 1.74 | 8 | 2 | 17 | k/(2πσ^2)^(1/2) Exp (-(x - m)^2/(2σ^2)) | σ = 4.668 ± 0.773 (P = 4.179 × 10^-5); m = 11.624 ± 0.663 (P = 1.969 × 10^-10); k = 4.643 ± 0.611 (P = 3.91 × 10^-6) |
| Grade on day 3 (Class A = 1, B = 2, C = 3, D = 4) | 1.87 ± 0.57 | 2 | 1 | 4 | k/[1 + Exp (β0 + β1x)] | β0 = -7.967 ± 8.012 (P = 0.320); β1 = 2.584 ± 2.582 (P = 0.317); k = 0.319 |
| Embryo cryopreservation day (Day 5 = 1, Day 6 = 2) | 1.20 ± 0.40 | 1 | 1 | 2 | β0 + β1x | β0 = 0.435; β1 = -0.131 |
| Inner cell mass (A = 1, B = 2, C = 3) | 1.58 ± 0.55 | 2 | 1 | 3 | β0 + β1x | β0 = 0.479 ± 0.037 (P = 0.049); β1 = -0.131 ± 0.017 (P = 0.083) |
| Trophectoderm (A = 1, B = 2, C = 3) | 2.01 ± 0.75 | 2 | 1 | 3 | β0 + β1x | β0 = 0.526 ± 0.002 (P = 0.0026); β1 = -0.124 ± 0.001 (P = 0.005) |
| Average diameter (µm) | 154.77 ± 24.12 | 153.8 | 81.3 | 242.5 | 1/[1 + Exp (β0 + β1x)] | β0 = 2.623 ± 5.312 (P = 0.621); β1 = -0.011 ± 0.030 (P = 0.723) |
| Body mass index (kg/m2) | 21.30 ± 3.16 | 20.6 | 13.9 | 43.3 | 1/[1 + Exp (β0 + β1x)] | β0 = -0.631 ± 0.844 (P = 0.454); β1 = 0.079 ± 0.035 (P = 0.026) |
Univariate regression functions

The univariate regression functions used at the elementwise layers of the neural network were as follows: Age, k/[1 + Exp (β0 + β1x)], β0 = -10.742, β1 = 0.284, k = 0.451; number of embryo transfers, 1/[1 + Exp (β0 + β1x)], β0 = 0.635, β1 = 0.156; anti-Müllerian hormone concentration (ng/mL), 1/[1 + Exp (β0 + β1x)], β0 = 1.282, β1 = 0.062; day-3 blastomere number, k/(2πσ^2)^(1/2) Exp [-(x - m)^2/(2σ^2)], σ = 4.668, m = 11.624, k = 4.643; grade on day 3 (class A = 1, B = 2, C = 3, D = 4), k/[1 + Exp (β0 + β1x)], β0 = -7.967, β1 = 2.584, k = 0.319; embryo cryopreservation day (day 5 = 1, day 6 = 2), β0 + β1x, β0 = 0.435, β1 = -0.131; inner cell mass (A = 1, B = 2, C = 3), β0 + β1x, β0 = 0.479, β1 = -0.131; trophectoderm (A = 1, B = 2, C = 3), β0 + β1x, β0 = 0.526, β1 = -0.124; average diameter (µm), 1/[1 + Exp (β0 + β1x)], β0 = 2.623, β1 = -0.011; and body mass index (kg/m2), 1/[1 + Exp (β0 + β1x)], β0 = -0.631, β1 = 0.079.
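
Transcribed into Wolfram Language pure functions for the elementwise layers, these fits read as follows; the function names are ours, chosen for readability.

```
(* The ten fitted univariate functions above, written as pure functions
   for use in ElementwiseLayer nodes; the names are ours. *)
fAge        = 0.451/(1 + Exp[-10.742 + 0.284 #]) &;
fTransfers  = 1/(1 + Exp[0.635 + 0.156 #]) &;
fAMH        = 1/(1 + Exp[1.282 + 0.062 #]) &;
fBlastomere = 4.643/Sqrt[2 Pi 4.668^2] Exp[-(# - 11.624)^2/(2 4.668^2)] &;
fGradeDay3  = 0.319/(1 + Exp[-7.967 + 2.584 #]) &;
fCryoDay    = 0.435 - 0.131 # &;
fICM        = 0.479 - 0.131 # &;
fTE         = 0.526 - 0.124 # &;
fDiameter   = 1/(1 + Exp[2.623 - 0.011 #]) &;
fBMI        = 1/(1 + Exp[-0.631 + 0.079 #]) &;
```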

The approximate number of training data points

An overview of the accuracy profile as a function of the number of training data points, used to study the appropriate amount of training data, is shown in the left panel of Figure 2. The accuracy values were classified by L2 regularization value. High accuracies were obtained when the number of training data points was between 25605 and 45468. The mean standard deviations of accuracy for the eight L2 regularization values were 0.0163, 0.0090, 0.0082, 0.0075, 0.009, 0.0121, 0.0086, and 0.0071, respectively, with no significant difference among them (P = 0.223 by one-way analysis of variance). The same data were converted into a two-dimensional contour plot of accuracy as a function of the number of training data points and the L2 regularization value (right panel in Figure 2). The brighter area, indicating higher accuracy, was observed when the number of training data points was between 25605 and 45468 and the L2 regularization value was less than 0.1.

Figure 2
Figure 2 The accuracy value (mean ± SD) as a function of the number of training data points. The accuracy values were classified by L2 regularization value: 0.0001, 0.001, 0.005, 0.01, 0.025, 0.05, 0.1, and 0.2. High accuracy values were obtained when the number of training data points was between 25605 and 45468 (left panel). The two-dimensional contour plot shows the accuracy value as a function of the number of training data points and the L2 regularization value (right panel). The brighter area indicates higher accuracy. High accuracy values were observed when the number of training data points was between 25605 and 45468 and when the L2 regularization value was less than 0.1.
AI classifier

The best AI classifier was therefore sought where the number of training data points was between 25605 and 45468 and the L2 regularization value was less than 0.1. Finally, the best AI classifier was obtained with 42792 training data points and an L2 regularization value of 0.001. The accuracy, sensitivity, specificity, positive predictive value, negative predictive value, Youden J index[45], and area under the curve (mean ± SE) obtained by the AI classifier were 0.743, 0.638, 0.789, 0.573, 0.831, 0.427, and 0.740 ± 0.031, respectively, as shown in Table 2. The optimal cut-off point of the receiver operating characteristic (ROC) curve[46] was 0.207. The classification time per case was less than 0.2 s.
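
For illustration, such a cut-off can be located by scanning candidate thresholds and maximizing Youden's J index over the test-set predictions; in the sketch below, scores (predicted live birth probabilities) and labels (1 = live birth, 0 = non-live birth) are assumed variables.

```
(* Find the ROC cut-off maximizing Youden's J = sensitivity + specificity
   - 1; scores and labels are assumed test-set variables. *)
sens[t_] := N[Count[Pick[scores, labels, 1], s_ /; s >= t]/Count[labels, 1]];
spec[t_] := N[Count[Pick[scores, labels, 0], s_ /; s < t]/Count[labels, 0]];
bestCut = First[MaximalBy[Range[0.01, 0.99, 0.01], sens[#] + spec[#] - 1 &]]
```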

Table 2 Discrimination ability of the best classifier of the original neural network architecture compared with the combination method[13]

| Patient age (yr) | Accuracy | Sensitivity | Specificity | PPV | NPV | AUC | 95%CI of the AUC | Cut-point |
|---|---|---|---|---|---|---|---|---|
| AI in this study |  |  |  |  |  |  |  |  |
| All ages | 0.743 | 0.638 | 0.789 | 0.573 | 0.831 | 0.740 | 0.681-0.801 | 0.207 |
| The combination method[13] |  |  |  |  |  |  |  |  |
| All ages | 0.721 | 0.779 | 0.704 | 0.400 | 0.885 | 0.773 | 0.655-0.888 | 0.213 |
| < 35 | 0.616 | 0.652 | 0.592 | 0.515 | 0.719 | 0.655 | 0.600-0.707 | 0.388 |
| 35-37 | 0.671 | 0.786 | 0.612 | 0.508 | 0.849 | 0.723 | 0.653-0.793 | 0.281 |
| 38-39 | 0.732 | 0.758 | 0.725 | 0.455 | 0.908 | 0.791 | 0.693-0.889 | 0.219 |
| 40-41 | 0.801 | 0.700 | 0.816 | 0.350 | 0.950 | 0.806 | 0.687-0.925 | 0.142 |
| ≥ 42 | 0.784 | 1.000 | 0.773 | 0.171 | 1.000 | 0.888 | 0.713-1.063 | 0.037 |

PPV: Positive predictive value; NPV: Negative predictive value; AUC: Area under the curve; CI: Confidence interval; AI: Artificial intelligence.
DISCUSSION

Here, a single AI classifier was developed by deep learning, with a CNN for the blastocyst image and elementwise layers for the independent factors of the morphological features and clinical information of the CEE. For patients younger than 39 years, this integrated AI classifier was superior to the combination method in terms of accuracy in predicting which embryo to transfer to obtain a live birth.

The accuracy value for predicting live birth was 0.743. We previously reported that the accuracy values for predicting live birth by the CEE/AI/combination methods were 0.631/0.647/0.616, 0.687/0.675/0.671, 0.725/0.697/0.732, 0.714/0.776/0.801, and 0.910/0.866/0.784 for the age categories of < 35, 35-37, 38-39, 40-41, and ≥ 42 years, respectively[13]. That report gave an overall accuracy of 0.721 ± 0.077 (mean ± SD) for the combination method, in which a multivariate logistic function combines the CEE probability generated by multivariate logistic regression with the confidence score generated independently by deep learning with a CNN. The AI classifier in this study, by contrast, used both the blastocyst images and the CEE factors, including age, in a single network. The two methods used the same dataset composed of blastocyst images and CEE factors, and there was no significant difference in accuracy (P = 0.52); the accuracy value of 0.743 for predicting live birth in this study therefore seems close to the average accuracy of the combination method. The accuracy in this study was superior to that of the combination method classified by maternal age when the patient's age was less than 39 years and inferior when the patient's age was greater than 39 years. Thus, regarding accuracy, the AI classifier in this study is preferable for patients younger than 39 years, as shown in Table 2.

Although there is no other way to predict live birth, this single AI classifier does not seem good enough compared with AI classifiers in other areas of medicine. Published accuracy values of AI classifiers include 0.997 for breast cancer diagnosis[47]; 0.83-0.90 for the early diagnosis of Alzheimer's disease[48]; 0.83 for urological dysfunctions[49]; 0.72[50], 0.50[51], 0.823[52], and 0.941[53] for colposcopy diagnosis; 0.83 for orthopedic trauma diagnosis[54]; and 0.98 for the morphological quality of blastocysts evaluated against embryologists[55]. In one report, images of embryos classified as poor and good had live birth probabilities of 0.509 and 0.614, respectively[55]. An AI classifier cannot detect clinical obstacles to achieving delivery, such as uterine factors[56] (e.g., uterine leiomyoma[57] and endometrial polyps[58]), endometriosis[59], ovarian function[60], oviduct obstruction[61,62], immune disorders[63,64], and the uterine microbiota[65,66], so the prediction of blastocyst outcome from images can never reach 100%. Therefore, the accuracy value of 0.743 for predicting live birth in this study seems to be a moderately good result for an application of AI in medicine.

The AUC, sensitivity, and specificity values are the most important statistics for evaluating binary classification methods because they are independent of the patient distribution. The AUC in this study was 0.740 ± 0.031 (mean ± SE). In our previous report, the AUC values for live birth prediction by the CEE/AI/combination methods were 0.651/0.634/0.655, 0.697/0.688/0.723, 0.771/0.728/0.791, 0.788/0.743/0.806, and 0.820/0.837/0.888 for the age categories of < 35, 35-37, 38-39, 40-41, and ≥ 42 years, respectively[13].

The reported AUC of the combination method was 0.773 ± 0.088 (mean ± SD). There was no significant difference between the AUC value of the AI classifier and the average AUC value of the combination method (P = 0.41). However, the AUC in this study may be superior to that of the combination method for patients younger than 37 years and inferior for patients older than 37 years; regarding AUC, the AI classifier in this study is thus preferable for patients younger than 37 years. Published AUC values of deep learning AI classifiers include 0.66 for predicting live birth[13]; 0.65 for predicting live birth without aneuploidy[8]; 0.74 for classifying embryos into three categories[64]; 0.826 for colposcopy by image[52]; and 0.941 for colposcopy by image combined with HPV type[53]. Therefore, as an AI application in medicine, the AUC value of 0.740 in this study seems to be a moderately good result.

The sensitivity in this study was 0.638. In our previous report, the sensitivities of the CEE/AI/combination methods were 0.580/0.530/0.652, 0.714/0.655/0.786, 0.727/0.697/0.758, 0.700/0.650/0.700, and 0.667/0.833/1.000 for the age categories of < 35, 35-37, 38-39, 40-41, and ≥ 42 years, respectively[13]. The sensitivity of the combination method was 0.779 ± 0.134 (mean ± SD). The sensitivity in this study was inferior to that of the combination method (P < 0.019) and lower than that of the combination method in every age category.

The specificity in this study was 0.789. In our previous report, the specificities of the CEE/AI/combination methods were 0.665/0.724/0.592, 0.673/0.685/0.612, 0.725/0.697/0.725, 0.716/0.794/0.816, and 0.922/0.867/0.773 for the age categories of < 35, 35-37, 38-39, 40-41, and ≥ 42 years, respectively[13]. The specificity of the combination method in that report was 0.704 ± 0.098 (mean ± SD). Although the difference was not significant, the specificity in this study tended to be superior to that of the combination method (P = 0.052) and was higher than that of the combination method in every age category except 40-41 years.

Youden's J index[45] in this study was 0.427. Youden's J index (sensitivity plus specificity minus 1) is a statistic of great value for dichotomous diagnostic tests and can be used in ROC analysis. In our previous report, the Youden's J index values of the CEE/AI/combination methods were 0.245/0.254/0.244, 0.387/0.340/0.398, 0.452/0.394/0.483, 0.416/0.444/0.516, and 0.589/0.700/0.773 for the age categories of < 35, 35-37, 38-39, 40-41, and ≥ 42 years, respectively[13]. The combination method in that report yielded 0.483 ± 0.193 (mean ± SD), and the difference in Youden's J index was not significant (P = 0.519). However, the Youden's J index in this study may be superior to that of the combination method for patients younger than 37 years and inferior for patients older than 37 years; regarding Youden's J index, the AI classifier in this study is thus preferable for patients younger than 37 years. Youden's J index values of medical AI classifiers have been published: the index for LSIL/HSIL diagnosis by deep learning colposcopy is 0.682[52] and 0.789[53], while the index for predicting live birth without aneuploidy is 0.30[8].
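
As a worked check of the definition, the classifier in this study gives J = 0.638 + 0.789 - 1 = 0.427, the value reported above.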

Incidentally, comparing AI applied to blastocyst images only[13], AI as the combination method, and the AI in this study, the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, Youden's J index, and AUC values were 0.732/0.721/0.743, 0.673/0.779/0.638, 0.753/0.704/0.789, 0.404/0.400/0.573, 0.862/0.885/0.831, 0.426/0.482/0.427, and 0.726/0.773/0.740, respectively. We hoped that the AI or combination method in this study would be better than the other methods, but no classifier stands out. The AI in this study tended to be superior in specificity and positive predictive value and inferior in sensitivity and negative predictive value, and the AI for the blastocyst image only tended to be superior to the AI in this study in negative predictive value and AUC, but there was no significant superiority among the three types of AI classifiers. This result might suggest that the medical image carrying the morphological features of the blastocyst is the most important of the significant parameters in the dataset and that the non-image parameters may not contribute much to predicting live birth. Therefore, multi-image datasets, such as time-lapse photography, may be good candidates for AI-based prediction of live birth in the future. Only a single image per blastocyst was evaluated in this study, but time-lapse image analysis has shown that morphology is not a static parameter, so evaluation with multiple images is a future task. In further research, AI applications consisting of different neural network architectures that can simultaneously process images and non-image data in multiple time series may be better suited to time-lapse evaluation, for which no established method of predicting live birth has yet been shown. Without more complex statistical methods, or preferably AI applications, it may be difficult to analyze multiple data composed of images and non-images.

There is no gold standard method for constructing neural network architectures, either for general targets or for images. However, the following neural networks for general image recognition have marked progress: LeNet[68] (in 1998), AlexNet[40] (in 2012), GoogLeNet[36] (in 2014), ResNet[69] (in 2015), and Squeeze-and-Excitation networks[70] (in 2017). In this study, the image of the blastocyst was analyzed using a CNN, and the scalar data of the CEE were converted at the elementwise layers, which have a function for each factor. After these outputs were catenated and passed through several network layers, the probability of live birth or non-live birth was generated at the end of the neural network through the softmax function. A neural network AI can thus evaluate not only images but also non-image data; compared with traditional statistics, this capability seems advanced. To evaluate images with traditional statistics, humans must define image features (such as morphological shape and hue) before analysis and then extract and quantitatively convert them into tensor data. Although criteria for extracting certain features from images are indispensable for traditional statistics, the universality of their definition cannot be proven. In contrast, artificial intelligence can evaluate images without any predefined criteria for feature extraction. Therefore, AI can be expected to predict live birth from blastocyst images and CEE factors. As far as we know, there are no other reports on the simultaneous use of image and non-image data for live birth prediction.

The ability of the AI classifier neural network in this study, which consists of the CNN for the image and elementwise functions for the scalar values of the CEE factors, was almost similar to that of the published combination method[13]. The AI classifier in this study demonstrated insignificant superiority in specificity, significant inferiority in sensitivity, and similarity in accuracy, Youden J index, and AUC. To validate the difference in accuracy between the AI in this study and the combination method in patients aged 35 to 37 years, however, the required sample size would be 9497000 with alpha and beta errors of 0.05 and 0.20, respectively. For an AI classifier composed of a CNN for the image and other networks for the non-image data, modifications of the network architecture and hyperparameters and an increase in the number of datasets are also expected. Although further prospective studies may be required, this AI model appeared to have potential for clinical applicability. Moreover, this AI classifier is a single classifier that could be easier to improve in the future, whereas the combination method with five AI classifiers would be more difficult to improve because it would require a dataset for each age category, resulting in a larger number of required datasets. The AI classifier can display a ranking of blastocysts by predicted live birth probability with decimal precision, which helps embryologists and clinicians select the blastocyst for embryo transfer. A prediction can also be obtained quickly at a distance without expensive equipment when the image and CEE parameters are transmitted over the internet.

Since there is theoretically an infinite number of possibilities for the construction of the neural network architecture and numerous combinations of statistical functions, further investigation for the benefit of patients is worthwhile. By selecting the hyperparameters and setting the random seed value within the program in various ways, the result can be changed; e.g., the prediction accuracy can become a little better or a little worse. Similar statements can be made about the dataset: if one uses the same deep neural network architecture with a different training dataset, for example one provided by a different institute, the prediction accuracy differs. This is one aspect of current AI technology. The AI in this study has not been tested on external data in joint institutional research to validate its generalization ability. In the field of AI technology, a critical statistical method for evaluating the relative superiority of predictive ability between two classifiers is not well established. Therefore, at least a clear superiority in prediction accuracy, the advantages of the network architecture, or a wider variety of datasets included in the analysis should be demonstrated before conventional practical use. To improve the AI classifier in the future and to examine not only conventional static values such as accuracy but also robustness, stability, and reliability, it may be necessary to use multiple-image data of blastocysts obtained by time-lapse methods, data from other institutes for evaluating prediction accuracy, an increased amount of data, novel significant non-image data not yet discovered (such as genetic information or patient biomarkers), and a vastly improved neural network architecture.

CONCLUSION

Deep learning with a CNN for the blastocyst image and networks of elementwise layers for the independent CEE factors was used to develop a single AI classifier for predicting the probability of live birth. Because this AI does not harm the embryo, the embryo can be transferred after the prediction is made. AI could bring benefits to the advancement of assisted reproductive technology.

ARTICLE HIGHLIGHTS
Research background

Achieving a live birth is the goal of assisted reproductive technology. No method using non-morphological analysis and/or morphological analysis, such as conventional morphological evaluations and time-lapse microscopy, has been established in practice to predict the live birth of a blastocyst.

Research motivation

Artificial intelligence (AI) classifiers for blastocyst images to predict live birth have recently been introduced in reproductive medicine.

Research objectives

The present study aimed to develop an AI classifier that combines blastocyst images with the morphological features and clinical information of the conventional embryo evaluation parameters, such as maternal age, to predict the probability of achieving a live birth.

Research methods

A total of 5691 images of blastocysts combined with conventional embryo evaluation parameters were used. A system with an original deep learning neural network architecture was developed to predict the probability of live birth.

Research results

Ten items of independent clinical information were used for predicting live birth. The best single AI classifier, composed of ten layers of convolutional neural networks and an elementwise layer for each of the ten factors, was developed and obtained with 42792 training data points and an L2 regularization value of 0.001. The accuracy, sensitivity, specificity, negative predictive value, positive predictive value, Youden J index, and area under the curve values for predicting live birth were 0.743, 0.638, 0.789, 0.831, 0.573, 0.427, and 0.740, respectively.

Research conclusions

AI classifiers have the potential to predict live births in a way that humans cannot. AI that can be trained on both morphological and non-morphological information may bring progress to assisted reproductive technology.

Research perspectives

Because the AI does not harm the embryo, the embryo can be transferred after the prediction is made. AI could bring benefits to the advancement of assisted reproductive technology.

ACKNOWLEDGEMENTS

The authors would like to express their sincere gratitude to the anonymous reviewers for their useful comments and suggestions on how to improve the quality of this paper.

Footnotes

Manuscript source: Invited manuscript

Corresponding Author's Membership in Professional Societies: Japanese Association of Medical Artificial Intelligence; Japan Society of Obstetrics and Gynecology; Japan Society of Gynecologic Oncology; Japan Society of Clinical Oncology; Japan Society for Endoscopic Surgery; Japan Society of Gynecologic and Obstetric Endoscopy and Minimally Invasive Therapy; Japan Society for Reproductive Medicine.

Specialty type: Reproductive biology

Country/Territory of origin: Japan

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): B, B, B

Grade C (Good): 0

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Boon CS, Hou Y, Karmazanovsky GG S-Editor: Wang JL L-Editor: A P-Editor: Li JH

References
1. Rienzi L, Vajta G, Ubaldi F. Predictive value of oocyte morphology in human IVF: a systematic review of the literature. Hum Reprod Update. 2011;17:34-45.
2. Kirkegaard K, Ahlström A, Ingerslev HJ, Hardarson T. Choosing the best embryo by time lapse versus standard morphology. Fertil Steril. 2015;103:323-332.
3. Dahdouh EM, Balayla J, Audibert F; Genetics Committee; Wilson RD; Audibert F; Brock JA; Campagnolo C; Carroll J; Chong K; Gagnon A; Johnson JA; MacDonald W; Okun N; Pastuck M; Vallée-Pouliot K. Technical Update: Preimplantation Genetic Diagnosis and Screening. J Obstet Gynaecol Can. 2015;37:451-463.
4. Brezina PR, Kutteh WH. Clinical applications of preimplantation genetic testing. BMJ. 2015;350:g7611.
5. Gleicher N, Metzger J, Croft G, Kushnir VA, Albertini DF, Barad DH. A single trophectoderm biopsy at blastocyst stage is mathematically unable to determine embryo ploidy accurately enough for clinical use. Reprod Biol Endocrinol. 2017;15:33.
6. Miyagi Y, Fujiwara K, Oda T, Miyake T, Coleman RL. Development of New Method for the Prediction of Clinical Trial Results Using Compressive Sensing of Artificial Intelligence. J Biostat Biometric App. 2018;3:203.
7. VerMilyea M, Hall JMM, Diakiw SM, Johnston A, Nguyen T, Perugini D, Miller A, Picou A, Murphy AP, Perugini M. Development of an artificial intelligence-based assessment model for prediction of embryo viability using static images captured by optical light microscopy during IVF. Hum Reprod. 2020;35:770-784.
8. Miyagi Y, Habara T, Hirata R, Hayashi N. Feasibility of artificial intelligence for predicting live birth without aneuploidy from a blastocyst image. Reprod Med Biol. 2019;18:204-211.
9. Fukushima K. Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern. 1980;36:193-202.
10. Hubel DH, Wiesel TN. Receptive fields and functional architecture of monkey striate cortex. J Physiol. 1968;195:215-243.
11. Hubel DH, Wiesel TN. Receptive fields of single neurones in the cat's striate cortex. J Physiol. 1959;148:574-591.
12. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85-117.
13. Miyagi Y, Habara T, Hirata R, Hayashi N. Feasibility of predicting live birth by combining conventional embryo evaluation with artificial intelligence applied to a blastocyst image in patients classified by age. Reprod Med Biol. 2019;18:344-356.
14. Miyagi Y, Habara T, Hirata R, Hayashi N. Feasibility of deep learning for predicting live birth from a blastocyst image in patients classified by age. Reprod Med Biol. 2019;18:190-203.
15. Miyagi Y, Miyake T. Potential of deep learning for predicting fetal weight of Japanese. Acta Med Okayama. 2020;In press.
16. Weiss RV, Clapauch R. Female infertility of endocrine origin. Arq Bras Endocrinol Metabol. 2014;58:144-152.
17. Shirasuna K, Iwata H. Effect of aging on the female reproductive function. Contracept Reprod Med. 2017;2:23.
18. Alpha Scientists in Reproductive Medicine and ESHRE Special Interest Group of Embryology. The Istanbul consensus workshop on embryo assessment: proceedings of an expert meeting. Hum Reprod. 2011;26:1270-1283.
19. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35:1798-1828.
20. LeCun YA, Bottou L, Orr GB, Müller KR. Efficient Backprop. In: Montavon G, Orr GB, Müller KR, editors. Neural networks: Tricks of the trade. Berlin, Heidelberg: Springer, 2012: 9-48.
21. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-Based Learning Applied to Document Recognition. Proc IEEE. 1998;86:2278-2324.
22. LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, Jackel LD. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989;1:541-551.
23. Serre T, Wolf L, Bileschi S, Riesenhuber M, Poggio T. Robust object recognition with cortex-like mechanisms. IEEE Trans Pattern Anal Mach Intell. 2007;29:411-426.
24. Wiatowski T, Bölcskei H. A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction. IEEE Trans Inf Theory. 2017;64:1845-1866.
25. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J Mach Learn Res. 2014;15:1929-1958.
26. Nowlan SJ, Hinton GE. Simplifying Neural Networks by Soft Weight-Sharing. Neural Comput. 1992;4:473-493.
27. Bengio Y. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning. Boston: Now Publishers Inc, 2009: 1-127.
28. Mutch J, Lowe DG. Object Class Recognition and Localization Using Sparse Features with Limited Receptive Fields. Int J Comput Vis. 2008;80:45-57.
29. Neal RM. Connectionist Learning of Belief Networks. Artif Intell. 1992;56:71-113.
30. Ciresan DC, Meier U, Masci J, Gambardella LM, Schmidhuber J. Flexible, High Performance Convolutional Neural Networks for Image Classification. In: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence; 2011 Jul 16-22; Barcelona, Spain. Menlo Park: AAAI Press, 2011: 1237-1242.
31. Scherer D, Müller A, Behnke S. Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition. In: Diamantaras K, Duch W, Iliadis LS, editors. Artificial Neural Networks – ICANN 2010. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 2010: 92-101.
32. Huang FJ, LeCun Y. Large-Scale Learning with SVM and Convolutional Nets for Generic Object Categorization. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2006 Jun 17-22; New York, USA. IEEE, 2006: 284-291.
33. Jarrett K, Kavukcuoglu K, Ranzato M, LeCun Y. What Is the Best Multi-Stage Architecture for Object Recognition? In: 2009 IEEE 12th International Conference on Computer Vision; 2009 Sep 29-Oct 2; Kyoto, Japan. IEEE, 2009: 2146-2153.
34. Zheng Y, Liu Q, Chen E, Ge Y, Zhao JL. Time Series Classification Using Multi-Channels Deep Convolutional Neural Networks. In: Li F, Li G, Hwang S, Yao B, Zhang Z, editors. Web-Age Information Management. WAIM 2014. Lecture Notes in Computer Science. Cham: Springer, 2014: 298-310.
35. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S, Beattie C, Sadik A, Antonoglou I, King H, Kumaran D, Wierstra D, Legg S, Hassabis D. Human-level control through deep reinforcement learning. Nature. 2015;518:529-533.
36. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going Deeper with Convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2015 Jun 7-12; Boston, USA. Computer Vision Foundation, 2015: 1-9.
37. Glorot X, Bordes A, Bengio Y. Deep Sparse Rectifier Neural Networks. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS) 2011; 2011 Apr 11-13; Fort Lauderdale, USA. AISTATS, 2011: 315-323.
38. Nair V, Hinton GE. Rectified Linear Units Improve Restricted Boltzmann Machines. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10); 2010 Jun 21-24; Haifa, Israel. Omnipress, 2010: 807-814.
39. Ioffe S, Szegedy C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Available from: https://arxiv.org/abs/1502.03167v3.
40. Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ, editors. Proceedings of the 25th International Conference on Neural Information Processing Systems; 2012 Dec 3-8; Lake Tahoe, USA. Red Hook: Curran Associates Inc., 2012: 1097-1105.
41. Bridle JS. Probabilistic Interpretation of Feedforward Classification Network Outputs, with Relationships to Statistical Pattern Recognition. In: Soulié FF, Hérault J, editors. Neurocomputing. Berlin, Heidelberg: Springer, 1990: 227-236.
42. Kohavi R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. In: Proceedings of the 14th International Joint Conference on Artificial Intelligence; 1995 Aug 20-25; Montreal, Canada. San Francisco: Morgan Kaufmann Publishers Inc, 1995: 1137-1145.
43. Schaffer C. Selecting a Classification Method by Cross-Validation. Mach Learn. 1993;13:135-143.
44. Refaeilzadeh P, Tang L, Liu H. Cross-validation. In: Liu L, Özsu MT, editors. Encyclopedia of Database Systems. New York: Springer, 2009: 532-538.
45. Youden WJ. Index for rating diagnostic tests. Cancer. 1950;3:32-35.
46. Unal I. Defining an Optimal Cut-Point Value in ROC Analysis: An Alternative Approach. Comput Math Methods Med. 2017;2017:3762651.
47. Litjens G, Sánchez CI, Timofeeva N, Hermsen M, Nagtegaal I, Kovacs I, Hulsbergen-van de Kaa C, Bult P, van Ginneken B, van der Laak J. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci Rep. 2016;6:26286.
48. Ortiz A, Munilla J, Górriz JM, Ramírez J. Ensembles of Deep Learning Architectures for the Early Diagnosis of the Alzheimer's Disease. Int J Neural Syst. 2016;26:1650025.
49. Gil D, Johnsson M, Chamizo JMG, Paya AS, Fernandez DR. Application of Artificial Neural Networks in the Diagnosis of Urological Dysfunctions. Expert Syst Appl. 2009;36:5754-5760.
50. Simões PW, Izumi NB, Casagrande RS, Venson R, Veronezi CD, Moretti GP, da Rocha EL, Cechinel C, Ceretta LB, Comunello E, Martins PJ, Casagrande RA, Snoeyer ML, Manenti SA. Classification of images acquired with colposcopy using artificial neural networks. Cancer Inform. 2014;13:119-124.
51. Sato M, Horie K, Hara A, Miyamoto Y, Kurihara K, Tomio K, Yokota H. Application of deep learning to the classification of images from colposcopy. Oncol Lett. 2018;15:3518-3523.
52. Miyagi Y, Takehara K, Miyake T. Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images. Mol Clin Oncol. 2019;11:583-589.
53. Miyagi Y, Takehara K, Nagayasu Y, Miyake T. Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images combined with HPV types. Oncol Lett. 2020;19:1602-1610.
54. Olczak J, Fahlberg N, Maki A, Razavian AS, Jilert A, Stark A, Sköldenberg O, Gordon M. Artificial intelligence for analyzing orthopedic trauma radiographs. Acta Orthop. 2017;88:581-586.
55. Khosravi P, Kazemi E, Zhan Q, Malmsten JE, Toschi M, Zisimopoulos P, Sigaras A, Lavery S, Cooper LAD, Hickman C, Meseguer M, Rosenwaks Z, Elemento O, Zaninovic N, Hajirasouliha I. Deep learning enables robust assessment and selection of human blastocysts after in vitro fertilization. NPJ Digit Med. 2019;2:21.
56. Sanders B. Uterine factors and infertility. J Reprod Med. 2006;51:169-176.
57. Ikhena DE, Bulun SE. Literature Review on the Role of Uterine Fibroids in Endometrial Function. Reprod Sci. 2018;25:635-643.
58. Taylor E, Gomel V. The uterus and fertility. Fertil Steril. 2008;89:1-16.
59. Tomassetti C, D'Hooghe T. Endometriosis and infertility: Insights into the causal link and management strategies. Best Pract Res Clin Obstet Gynaecol. 2018;51:25-33.
60. Christ JP, Gunning MN, Palla G, Eijkemans MJC, Lambalk CB, Laven JSE, Fauser BCJM. Estrogen deprivation and cardiovascular disease risk in primary ovarian insufficiency. Fertil Steril. 2018;109:594-600.e1.
61.  Arronet GH, Eduljee SY, O'Brien JR. A nine-year survey of Fallopian tube dysfunction in human infertility. Fertil Steril. 1969;20:903-918.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 26]  [Cited by in F6Publishing: 27]  [Article Influence: 0.5]  [Reference Citation Analysis (0)]
62.  Segars JH, Herbert CM 3rd, Moore DE, Hill GA, Wentz AC, Winfield AC. Selective fallopian tube cannulation: initial experience in an infertile population. Fertil Steril. 1990;53:357-359.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 19]  [Cited by in F6Publishing: 19]  [Article Influence: 0.6]  [Reference Citation Analysis (0)]
63.  Practice Committee of the American Society for Reproductive Medicine. The role of immunotherapy in in vitro fertilization: a guideline. Fertil Steril. 2018;110:387-400.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 24]  [Cited by in F6Publishing: 24]  [Article Influence: 4.8]  [Reference Citation Analysis (0)]
64.  Hong YH, Kim SJ, Moon KY, Kim SK, Jee BC, Lee WD, Kim SH. Impact of presence of antiphospholipid antibodies on in vitro fertilization outcome. Obstet Gynecol Sci. 2018;61:359-366.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 15]  [Cited by in F6Publishing: 15]  [Article Influence: 2.5]  [Reference Citation Analysis (0)]
65.  Moreno I, Simon C. Relevance of assessing the uterine microbiota in infertility. Fertil Steril. 2018;110:337-343.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 70]  [Cited by in F6Publishing: 95]  [Article Influence: 19.0]  [Reference Citation Analysis (0)]
66.  Kroon SJ, Ravel J, Huston WM. Cervicovaginal microbiota, women's health, and reproductive outcomes. Fertil Steril. 2018;110:327-336.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 106]  [Cited by in F6Publishing: 134]  [Article Influence: 26.8]  [Reference Citation Analysis (0)]
67.  Campbell A, Fishel S, Bowman N, Duffy S, Sedler M, Thornton S. Retrospective analysis of outcomes after IVF using an aneuploidy risk model derived from time-lapse imaging without PGS. Reprod Biomed Online. 2013;27:140-146.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 155]  [Cited by in F6Publishing: 158]  [Article Influence: 14.4]  [Reference Citation Analysis (0)]
68.  LeCun Y, Haffner P, Bottou L, Bengio Y.   Object Recognition with Gradient-Based Learning. In: Forsyth DA, Mundy JL, di Gesú V, Cipolla R. Shape, contour and grouping in computer vision. Lecture Notes in Computer Science 1681. Berlin, Heidelberg: Springer, 1999: 319-345.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 343]  [Cited by in F6Publishing: 22]  [Article Influence: 0.9]  [Reference Citation Analysis (0)]
69.  He K, Zhang X, Ren S, Sun J.   Deep Residual Learning for Image Recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016 June 27-30; Las Vegas, USA. IEEE, 2016: 770–778.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 72655]  [Cited by in F6Publishing: 19215]  [Article Influence: 2401.9]  [Reference Citation Analysis (0)]
70.  Hu J, Shen L, Sun G.   Squeeze-and-Excitation Networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2018 June 18-23; Salt Lake City, USA. IEEE, 2018: 7132–7141.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 8497]  [Cited by in F6Publishing: 2848]  [Article Influence: 474.7]  [Reference Citation Analysis (0)]