Published online Mar 25, 2026. doi: 10.5527/wjn.v15.i1.116879
Revised: January 12, 2026
Accepted: February 9, 2026
Published online: March 25, 2026
Processing time: 111 Days and 7.3 Hours
Over the last decade, the use of machine learning (ML) techniques in problem modeling and solving has increased significantly, including in kidney transplantation.
To compare various ML models with logistic regression (LR) in predicting delayed graft function (DGF), focusing on donor characteristics.
We analyzed 523 deceased-donor kidney transplants performed between 2010 and 2020 across three transplant centers. The dataset included 14 donor, 3 transplant, and 64 recipient variables.
The best-performing model for each problem type achieved accuracies of 70% (RF), 70% (RF), 58% (RF), and 61% (XGB) for donor-only, donor + transplant, donor + recipient, and donor + transplant + recipient variables, respectively. LR achieved accuracies of 57%, 66%, 52%, and 66%; however, these models generally showed low sensitivity and high specificity. Across most models, significant predictors included donor creatinine, age, and mean blood pressure, cold ischemia time (transplant variable), and recipient smoking status.
While most ML models outperformed LR, the differences were not substantial. This may be attributed to the small dataset size, which likely contributed to the overall poor performance. We recommend applying these complex models to high-quality datasets with a sufficient number of variables and observations to fully leverage their potential. The key question for future research is determining the dataset size required for ML to become the primary analytic tool for predicting kidney transplant outcomes.
Core Tip: Machine learning (ML) is increasingly used in kidney transplantation research, including for predicting delayed graft function. This study compares six ML models with logistic regression across four combinations of donor, transplant, and recipient variables. The dataset comprises 44.7% delayed graft function-positive cases. All methods performed similarly, with accuracies between 58% and 70%. Important predictors included donor creatinine, age, and mean blood pressure, cold ischemia time, and recipient smoking status. Although the ML approaches slightly outperformed logistic regression, overall performance remained modest, likely due to the limited sample size. Further research should define the dataset scale and quality required for ML to become a primary analytic tool for predicting kidney transplant outcomes.
- Citation: Salgado C, Gonzalez Cohens F, Vera FA, Ruiz R, Velasquez JD, Gonzalez FM. Prediction of graft outcomes after kidney transplantation: When standard statistics compare to machine learning techniques. World J Nephrol 2026; 15(1): 116879
- URL: https://www.wjgnet.com/2220-6124/full/v15/i1/116879.htm
- DOI: https://dx.doi.org/10.5527/wjn.v15.i1.116879
Kidney transplantation is the standard treatment for end-stage renal disease. Its superior outcomes compared with hemodialysis and peritoneal dialysis have led to growing waiting lists worldwide, where new patients are added faster than they can receive a donor kidney[1]. As waiting lists grow and donor organs remain limited, selecting the most appropriate recipients and optimizing graft survival become increasingly important[2]. Accurately predicting key clinical outcomes following kidney transplantation is therefore a central goal for treating physicians. Such predictive capabilities are essential for improving patient prognosis, optimizing kidney allocation (particularly for organs from extended-criteria donors), and supporting clinical decision-making[1]. To meet this need, statistical models have become essential tools for predicting post-transplant outcomes. However, many commonly used models rely on linear assumptions and may fail to capture the complex, nonlinear interactions between clinical variables[3]. In the past decade, advances in computational power and data availability have enabled the development of more sophisticated analytical tools capable of analyzing large datasets and uncovering patterns often missed by traditional statistical models. These techniques, broadly referred to as machine learning (ML), can handle high-dimensional data, model nonlinear relationships, and improve predictive accuracy, making them particularly suitable for clinical outcome prediction and survival analysis[3]. In this study, we evaluate the ability of several ML algorithms to predict delayed graft function (DGF) after deceased-donor kidney transplantation. Specifically, we compare their performance with that of traditional logistic regression (LR) using donor, transplant, and recipient characteristics as predictor variables.
This study draws on registry data from three kidney transplant centers in Chile, covering 765 deceased-donor kidney transplants performed between 1989 and 2020. However, because clinical variables differed significantly across decades (Kruskal-Wallis tests at 95% confidence), we restricted the analysis to the 523 observations from the 2010-2020 decade. This study was approved by the Institutional Review Board (Comité Ético Científico del Servicio de Salud Metropolitano Oriente). Written informed consent was obtained from all kidney transplant recipients upon their inclusion on the transplant centers’ waiting lists, authorizing the use of their anonymized clinical data.
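The decade-level comparison described above can be sketched with a Kruskal-Wallis test. This is an illustrative example only: the variable, group sizes, and values are hypothetical stand-ins for the registry data.

```python
# Sketch: testing whether a clinical variable differs across transplant
# decades, as done to justify restricting the cohort to 2010-2020.
# All values below are illustrative, not the study's actual data.
from scipy.stats import kruskal

# Hypothetical donor-age samples grouped by decade of transplant
ages_1990s = [38, 41, 35, 44, 29, 33]
ages_2000s = [42, 45, 39, 47, 51, 40]
ages_2010s = [48, 52, 46, 55, 50, 49]

stat, p_value = kruskal(ages_1990s, ages_2000s, ages_2010s)

# At a 95% confidence level, p < 0.05 suggests the decades differ,
# supporting analysis of the most recent decade only
if p_value < 0.05:
    print("Significant difference across decades; restrict cohort")
else:
    print("No significant decade effect detected")
```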
The dataset included 14 donor, 3 transplant, and 64 recipient features (predictors, or independent variables).
The dataset comprised 43.5% DGF-positive and 56.5% DGF-negative patients and was split into 80% for training and 20% for validation/testing. We used LR[5] together with six ML models to predict DGF after kidney transplantation: support vector machines (SVM), decision trees (DET), random forest (RF), gradient boosting (GB), extreme gradient boosting (XGB), and a multilayer perceptron (MLP).
For all models, hyperparameters were optimized using random search with 10-fold cross-validation. Model performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC), accuracy, sensitivity, and specificity.
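A minimal sketch of this protocol (80/20 split, LR plus a tree-ensemble model, random search with 10-fold cross-validation) is shown below. The data are synthetic and the hyperparameter grid is illustrative; the study's actual feature set and search spaces are not reproduced here.

```python
# Sketch of the modeling protocol: 80/20 split, logistic regression plus a
# random forest, hyperparameters tuned by random search with 10-fold CV.
# Synthetic data stand in for the donor/transplant/recipient variables.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_classification(n_samples=523, n_features=17, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Random search over a small, illustrative grid, scored with 10-fold CV
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100], "max_depth": [3, None]},
    n_iter=4, cv=10, scoring="roc_auc", random_state=0,
)
search.fit(X_tr, y_tr)

for name, model in [("LR", LogisticRegression(max_iter=1000).fit(X_tr, y_tr)),
                    ("RF", search.best_estimator_)]:
    proba = model.predict_proba(X_te)[:, 1]
    print(name, round(roc_auc_score(y_te, proba), 2),
          round(accuracy_score(y_te, model.predict(X_te)), 2))
```

The same search-and-evaluate loop extends naturally to the other five model families.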
The following sections describe the clinical characteristics of each predictor set and present the results of the LR and ML models.
The donors were mostly male (59.1%) and 42.6 ± 14.6 years old; they died mainly of stroke (39.2%) or trauma (33.4%), 16.8% were considered extended-criteria donors, and most did not have hypertension (77.1%). Table 1 summarizes all available donor characteristics, together with their completeness in the dataset (i.e., the proportion of observations with non-missing values for each feature).
| Donor feature | Summary statistics, mean ± SD (range) | Completeness (%) |
| Sex | Female: 40.9%; male: 59.1% | 94 |
| Age (years) | 42.6 ± 14.6 (4-73) | 98 |
| Weight (kg) | 74.5 ± 17.3 (15-180) | 19 |
| Height (cm) | 166.7 ± 10.8 (97-190) | 19 |
| Body mass index (kg/m2) | 26.6 ± 4.8 (16-56) | 19 |
| Cause of death | Hemorrhagic stroke: 39.2%, trauma: 33.4%, ischemic stroke: 11.7%, other: 11.0%, anoxia: 4.7% | 100 |
| Extended criteria donor (ECD) | Yes: 16.8%, no: 83.2% | 100 |
| Blood type | O: 55.8%, A: 31.8%, B: 11.4%, AB: 1.1% | 98 |
| Hypertension | Yes: 22.9%, no: 77.1% | 83 |
| Diabetes mellitus | Type 1 0.8%, type 2 4.4%, no: 94.8% | 80 |
| Serum creatinine (mg/dL) | 0.88 ± 0.37 (0.20-2.82) | 89 |
| Mean blood pressure (mmHg) | 83.9 ± 13.1 (50-120) | 57 |
| Diuresis (mL/hour) | 154.6 ± 123.6 (0-700) | 56 |
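The completeness percentages reported in these tables (the share of non-missing values per feature) can be computed directly from the registry data; a minimal sketch with illustrative column names:

```python
# Sketch: computing a "completeness" column like those in Tables 1-4,
# i.e. the percentage of non-missing values per feature.
# The column names and values are illustrative, not the study's data.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [42, 51, np.nan, 38],
    "mean_blood_pressure": [85, np.nan, np.nan, 90],
})

# notna() marks non-missing cells; the column mean is the completeness share
completeness = df.notna().mean().mul(100).round(0)
print(completeness)  # age 75.0, mean_blood_pressure 50.0
```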
Table 2 shows the characteristics related to the transplant procedure itself, with their corresponding statistical metrics and data completeness percentages.
| Transplant feature | Summary statistics, mean ± SD (range) | Completeness (%) |
| Origin of the kidney | Local: 38.6%, national: 61.4% | 83 |
| Cold ischemia time (hours) | 18.99 ± 6.05 (3.50, 40.15) | 93 |
| Warm ischemia time (minutes) | 39.49 ± 11.57 (10, 90) | 71 |
The recipients were also mostly male (54.3%) and 46.4 ± 12.9 years old; most had no or few comorbidities (average Charlson score below 3), although most had hypertension (84.9%) and showed laboratory values consistent with end-stage renal disease.
| Recipient feature | Summary statistics, mean ± SD (range) | Completeness (%) |
| Recipient characteristics | | |
| Sex | Female: 45.7%, male: 54.3% | 100 |
| Age (years) | 46.4 ± 12.9 (2-76) | 98 |
| Weight (kg) | 66.0 ± 11.8 (28.2-115) | 92 |
| Height (m) | 1.63 ± 0.09 (1.00-1.87) | 74 |
| Body mass index (kg/m2) | 24.6 ± 3.1 (16.6-38.9) | 74 |
| Blood type | O: 53.0%, A: 32.1%, B: 12.3%, AB: 2.51% | 99 |
| Number of transplants (n) | 1: 83.5%, 2: 15%, 3: 1.5% | 51 |
| Time on the waiting list (months) | 40.2 ± 31.7 (1-191) | 57 |
| Pre-transplant dialysis time (months) | 69.3 ± 44.2 (0-384) | 95 |
| Residual diuresis (mL/day) | 324.3 ± 491.5 (0-2500) | 63 |
| Comorbidities | | |
| Hypertension | Yes: 84.9%, no: 15.1% | 94 |
| Coronary artery disease | Yes: 5.9%, no: 94.1% | 90 |
| Congestive heart failure | Yes: 3.2%, no: 96.8% | 90 |
| Arrhythmias | Yes: 2.6%, no: 97.4% | 90 |
| Peripheral vascular disease | Symptomatic: 2.8%, asymptomatic: 0.6%, no: 96.6% | 89 |
| DM | Yes: 9.7%, no: 90.3% | 91 |
| DM type | Type 1: 16.2%, type 2: 82.4% | 90 |
| Cancer | Yes: 2.3%, no: 97.7% | 90 |
| Uropathy | Yes: 2.6%, no: 97.4% | 90 |
| HIV | Yes: 1.7%, no: 98.3% | 90 |
| Other physical | Yes: 5.0%, no: 95.0% | 90 |
| Other psychiatric | Yes: 2.6%, no: 97.4% | 90 |
| Charlson score | 2.92 ± 1.23 (2-10) | 42 |
| Clinical history | | |
| Transfusions | Yes: 30.3%, no: 69.7% | 64 |
| Previous organ transplant | Yes: 0.56%, no: 99.44% | 70 |
| Tobacco use | Yes: 47.5%, no: 52.5% | 69 |
| Alcohol use | Yes: 26.1%, no: 73.9% | 72 |
| Other drugs | Yes: 0.2%, no: 99.8% | 72 |
| Cause of chronic kidney disease | Unknown: 44.5%, non-diabetes mellitus glomerulopathy: 27.1%, congenital and cystic: 8.0%, diabetic kidney disease: 6.9%, other: 6.8%, hypertensive or vascular: 3.8%, tubulointerstitial: 3.0% | 99 |
| Dialysis | Yes: 99.7%, no: 0.3% | 97 |
| Dialysis type | HD: 91.0%, PD: 5.1%, combination: 3.9% | 90 |
| Laboratory feature of the recipient | Summary statistics, mean ± SD (range) | Completeness (%) |
| Serum creatinine (mg/dL) | 8.5 ± 2.6 (1.04-18.7) | 55 |
| Proteinuria (g) | 12.5 ± 36.6 (0-148) | 4 |
| Cholesterol (mg/dL) | 182.0 ± 44.0 (100-320) | 11 |
| Phosphorus (mg/dL) | 5.0 ± 1.6 (1.4-9.9) | 48 |
| Calcium (mg/dL) | 9.1 ± 1.0 (5.1-12.0) | 48 |
| PTH (pg/mL) | 387.1 ± 371.5 (2.5-2292) | 43 |
| Albumin (g/dL) | 4.3 ± 0.3 (3.1-5.5) | 38 |
| Hb (g/dL) | 10.9 ± 1.6 (5.9-17.0) | 37 |
| CMV | Positive: 78.5%, negative: 21.5% | 60 |
| Chagas | Positive: 2.8%, negative: 97.2% | 61 |
| Toxoplasma | Positive: 31.9%, negative: 68.1% | 61 |
| HTLV-1 | Positive: 23.1%, negative: 76.9% | 2 |
| PPD | Positive: 43.2%, negative: 56.8% | 5 |
Both model types showed variable but broadly similar performance. LR obtained AUC-ROCs ranging from 0.49 to 0.68 and accuracies between 0.51 and 0.62, while the six ML models obtained AUC-ROCs ranging from 0.35 to 0.81 and accuracies between 0.48 and 0.70. The best-performing model for each dataset achieved AUC-ROCs of 0.81 (GB), 0.71 (RF), 0.67 (GB), and 0.62 (XGB and GB), and accuracies of 0.70 (RF), 0.70 (RF), 0.58 (decision trees, RF, and XGB), and 0.61 (XGB), respectively, for D, DT, DR, and DTR. Nevertheless, as shown in Tables 5 and 6, none of the ML models produced consistently and substantially better results than LR, and some performed only slightly better (or even worse) than chance. Indeed, Kruskal-Wallis tests showed no significant differences between the models (P = 0.14 and P = 0.22 for AUC-ROC and accuracy, respectively), nor between LR and each model separately, as shown in Table 7.
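The Kruskal-Wallis comparison across models can be sketched using the AUC-ROC values reported in Table 5 (one value per predictor set D, DT, DR, DTR):

```python
# Sketch: Kruskal-Wallis test across all models' AUC-ROC values,
# using the figures from Table 5 (four predictor sets per model).
from scipy.stats import kruskal

auc = {
    "LR":  [0.49, 0.68, 0.53, 0.67],
    "SVM": [0.35, 0.62, 0.51, 0.51],
    "DET": [0.67, 0.45, 0.58, 0.51],
    "RF":  [0.78, 0.71, 0.57, 0.52],
    "GB":  [0.81, 0.70, 0.67, 0.62],
    "XGB": [0.75, 0.66, 0.60, 0.62],
    "MLP": [0.68, 0.70, 0.50, 0.47],
}

stat, p = kruskal(*auc.values())
# The paper reports P = 0.14 for this comparison; the exact value
# depends on tie handling and the software implementation used.
print(f"Kruskal-Wallis p = {p:.2f}")
```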
| Metric | AUC-ROC | | | | Accuracy | | | |
| Model and data | D | DT | DR | DTR | D | DT | DR | DTR |
| LR | 0.49 | 0.68 | 0.53 | 0.67 | 0.51 | 0.58 | 0.58 | 0.62 |
| SVM | 0.35 | 0.62 | 0.51 | 0.51 | 0.57 | 0.57 | 0.53 | 0.53 |
| DET | 0.67 | 0.45 | 0.58 | 0.51 | 0.58 | 0.49 | 0.58 | 0.48 |
| RF | 0.78 | 0.71 | 0.57 | 0.52 | 0.70 | 0.70 | 0.58 | 0.50 |
| GB | 0.81 | 0.70 | 0.67 | 0.62 | 0.63 | 0.63 | 0.56 | 0.60 |
| XGB | 0.75 | 0.66 | 0.60 | 0.62 | 0.60 | 0.63 | 0.58 | 0.61 |
| MLP | 0.68 | 0.70 | 0.50 | 0.47 | 0.61 | 0.61 | 0.53 | 0.49 |
| Metric | Sensitivity | | | | Specificity | | | |
| Model and data | D | DT | DR | DTR | D | DT | DR | DTR |
| LR | 0.33 | 0.44 | 0.18 | 0.56 | 0.67 | 0.71 | 0.95 | 0.67 |
| SVM | 0 | 0 | 0 | 0 | 1 | 1 | 0.93 | 0.93 |
| DET | 0.44 | 0.29 | 0.38 | 0.50 | 0.69 | 0.64 | 0.73 | 0.47 |
| RF | 0.44 | 0.50 | 0.32 | 0.52 | 0.89 | 0.84 | 0.78 | 0.49 |
| GB | 0.21 | 0.50 | 0 | 0.47 | 0.96 | 0.73 | 0.98 | 0.69 |
| XGB | 0.09 | 0.50 | 0.41 | 0.61 | 0.98 | 0.73 | 0.71 | 0.64 |
| MLP | 0.53 | 0.56 | 0.56 | 0.44 | 0.67 | 0.64 | 0.51 | 0.53 |
Table 6 shows the sensitivity and specificity of the corresponding models. The results are highly heterogeneous across datasets and models; sensitivity values are consistently low, indicating an overall poor ability to predict DGF (the positive class of the target variable). The only models achieving sensitivities above 0.5 were the multilayer perceptron models trained on the D, DT, and DR datasets, with scores ranging from 0.53 to 0.56. For the DTR dataset, LR, RF, and XGB also exceeded this threshold, with values of 0.56, 0.52, and 0.61, respectively. As shown in Table 7, there were no statistically significant differences in these metrics between LR and the ML models, except for sensitivity between LR and SVM (P = 0.0139) and specificity between LR and MLP (P = 0.0384). In terms of interpretability, Table 8 lists the significant variables in the final multivariate LR models, while Table 9 presents permutation feature importance and Shapley (SHAP) values for some of the top-performing ML models (GB with D, RF with DT, and GB with DR data). Across most models, LR and ML alike, significant predictors included donor creatinine, donor age, cold ischemia time, and recipient smoking status. For LR, the predictor found relevant in most models was donor age; in Table 9, donor age was relevant in all models, and donor creatinine and mean blood pressure in most.
| Comparison | AUC-ROC P value | Accuracy P value | Sensitivity P value | Specificity P value |
| LR vs SVM | 0.2454 | 0.2396 | 0.0139 | 0.0778 |
| LR vs DET | 0.4678 | 0.2186 | 0.8845 | 0.3836 |
| LR vs RF | 0.3865 | 0.5516 | 0.6631 | 0.7715 |
| LR vs GB | 0.1913 | 0.2425 | 0.7728 | 0.1465 |
| LR vs XGB | 0.5637 | 0.2367 | 0.7728 | 0.6612 |
| LR vs MLP | 0.8845 | 0.7702 | 0.1804 | 0.0384 |
| Data | Variable | Odds ratio | P value |
| DTR | |||
| Donor (D) | Age | 82.4 | < 0.01 |
| Transplant (T) | Cold ischemia time (hours) | 30.8 | 0.001 |
| Recipient (R) | Residual diuresis (mL/day) | 0.1 | 0.006 |
| Smoking (BV1 = no smoking) | 15.5 | 0.02 | |
| DR | |||
| Recipient (R) | Residual diuresis (mL/day) | 0.1 | 0.008 |
| Smoking (BV1 = no smoking) | 10.4 | 0.02 | |
| DT | |||
| Transplant (T) | Cold ischemia time (hours) | 28.9 | 0.001 |
| Donor (D) | Age | 60.8 | < 0.01 |
| D | |||
| Donor (D) | Age | 35.9 | < 0.01 |
| Model and data | Relevant predictors and the data set from which they come | Mean of error increase (from PFI) | Standard deviation of error increase (from PFI) | Mean and direction of SHAP values |
| GB with D | Creatinine (D) | 0.05 | 0.02 | 0.07 (+/-) |
| GB with D | Age (D) | 0.03 | 0.01 | 0.09 (+) |
| GB with D | Stroke death (D) | 0.02 | 0.01 | 0.03 (+) |
| GB with D | ECD (D) | 0.02 | 0.02 | 0.03 (+) |
| GB with D | MBP (D) | 0.01 | 0.01 | 0.01 (-) |
| RF with DT | Age (D) | 0.06 | 0.02 | 0.03 (+) |
| RF with DT | MBP (D) | 0.04 | 0.01 | 0.02 (-) |
| RF with DT | Cold ischemia time (T) | 0.03 | 0.04 | 0.05 (+) |
| RF with DT | Creatinine (D) | 0.01 | 0.01 | 0.02 (+/-) |
| RF with DT | Warm ischemia time (T) | 0.01 | 0.02 | 0.05 (+) |
| GB with DR | Age (D) | 0.04 | 0.01 | 0.03 (+) |
| GB with DR | Smoking (R) | 0.03 | 0.02 | 0.02 (+) |
| GB with DR | MBP (D) | 0.02 | 0.01 | 0.01 (-) |
| GB with DR | Creatinine (D) | 0.02 | 0.01 | 0.01 (+/-) |
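The permutation feature importance (PFI) columns of Table 9 can be reproduced with scikit-learn: each feature is shuffled and the resulting drop in performance is recorded as its importance. The sketch below uses synthetic data and a gradient boosting model as stand-ins for the study's cohort and fitted models (SHAP values would additionally require the `shap` package, not shown here):

```python
# Sketch of permutation feature importance (PFI): shuffle each feature and
# measure the performance drop. Synthetic data stand in for the real cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=523, n_features=8, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# n_repeats shuffles per feature yield a mean and SD of the error increase,
# matching the two PFI columns reported in Table 9
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: mean {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")
```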
DGF is a critical adverse event following kidney transplantation, with implications for both early and long-term post-transplant outcomes.
At first glance, it may seem paradoxical that ML models did not outperform LR, given the prevailing trend toward artificial intelligence tools. However, this finding is not novel. Prior studies have shown that the AUC values of LR and ML models for clinical risk prediction are often comparable when the comparisons are conducted with low risk of bias. Interestingly, ML models tend to demonstrate superior performance in studies where the risk of bias is higher[3]. Another critical consideration before implementing ML models in clinical medicine is the quality and completeness of the data being analyzed. As shown in Tables 1, 2, 3, and 4, many predictors exhibited low or insufficient levels of completeness. Moreover, several clinically relevant features related to graft quality, such as the administration of vasoactive and vasopressor drugs, were not available in the dataset.
ML models are designed to learn directly and automatically from data[15]. In contrast, regression models rely on predefined theoretical frameworks and assumptions, and their performance can be enhanced through human intervention.
In our case, we believe the similar results between LR and ML models are driven by a “small” dataset with a large amount of missing data in some potentially relevant variables. The ML models could not leverage their full potential, since the explanatory variables included were those that traditional models had already shown to be relevant, and no new nonlinear interactions could be extracted. Nonetheless, despite the overall moderate performance, the models revealed several interesting patterns among the predictor variables. As expected from the literature, most LR and ML models identified donor age and donor serum creatinine as important predictors. More interestingly, the donor's mean blood pressure and whether the recipient was an active smoker stood out as consistently relevant variables; these are interactions worth exploring in future work.
Although the predictive performance was lower than expected, and the dataset lacked some of the quality typically required for ML models to excel, our results achieved performance comparable to the Kidney Donor Risk Index while using only about one-tenth of the original sample size. Moreover, the identification of a potentially novel association suggests a promising avenue for future investigation.
| 1. | Hariharan S, Israni AK, Danovitch G. Long-Term Survival after Kidney Transplantation. N Engl J Med. 2021;385:729-743. |
| 2. | Lentine KL, Smith JM, Lyden GR, Miller JM, Booker SE, Dolan TG, Temple KR, Weiss S, Handarova D, Israni AK, Snyder JJ. OPTN/SRTR 2023 Annual Data Report: Kidney. Am J Transplant. 2025;25:S22-S137. |
| 3. | Christodoulou E, Ma J, Collins GS, Steyerberg EW, Verbakel JY, Van Calster B. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J Clin Epidemiol. 2019;110:12-22. |
| 4. | Widaman KF. III. Missing data: What to do with or without them. Monogr Soc Res Child Dev. 2006;71:42-64. |
| 5. | Lai TL, Robbins H, Wei CZ. Strong consistency of least squares estimates in multiple regression. Proc Natl Acad Sci U S A. 1978;75:3034-3036. |
| 6. | Cortes C, Vapnik VN. Support-vector networks. Mach Learn. 1995;20:273-297. |
| 7. | von Winterfeldt D, Edwards W. Decision Analysis and Behavioral Research. Cambridge University Press, 1986. |
| 8. | Rebala G, Ravi A, Churiwala S. An Introduction to Machine Learning. Springer, 2019. |
| 9. | Friedman JH. Greedy function approximation: A gradient boosting machine. Ann Statist. 2001;29. |
| 10. | Chen TQ, Guestrin C. XGBoost: A Scalable Tree Boosting System. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016:785-794. |
| 11. | Popescu MC, Balas VE, Perescu-Popescu L, Mastorakis N. Multilayer perceptron and neural networks. WSEAS Trans Cir Sys. 2009;8:579-588. |
| 12. | Zens TJ, Danobeitia JS, Leverson G, Chlebeck PJ, Zitur LJ, Redfield RR, D'Alessandro AM, Odorico S, Kaufman DB, Fernandez LA. The impact of kidney donor profile index on delayed graft function and transplant outcomes: A single-center analysis. Clin Transplant. 2018;32:e13190. |
| 13. | Rao PS, Schaubel DE, Guidinger MK, Andreoni KA, Wolfe RA, Merion RM, Port FK, Sung RS. A comprehensive risk quantification score for deceased donor kidneys: the kidney donor risk index. Transplantation. 2009;88:231-236. |
| 14. | Almasri J, Tello M, Benkhadra R, Morrow AS, Hasan B, Farah W, Alvarez Villalobos N, Mohammed K, Allen JP, Prokop LJ, Wang Z, Kasiske BL, Israni AK, Murad MH. A Systematic Review for Variables to Be Collected in a Transplant Database for Improving Risk Prediction. Transplantation. 2019;103:2591-2601. |
| 15. | Mitchell JB. Machine learning methods in chemoinformatics. Wiley Interdiscip Rev Comput Mol Sci. 2014;4:468-481. |
| 16. | Boulesteix AL, Schmid M. Machine learning versus statistical modeling. Biom J. 2014;56:588-593. |
