Copyright
©The Author(s) 2025.
World J Clin Cases. Apr 16, 2025; 13(11): 100966
Published online Apr 16, 2025. doi: 10.12998/wjcc.v13.i11.100966
Table 1 Neural network models discussed in the manuscript
| Model/scoring system | Primary use case | Strengths | Limitations |
|---|---|---|---|
| Convolutional neural networks | Image-based tasks (e.g., computed tomography scans and X-rays) | High accuracy in spatial feature extraction | Computationally expensive |
| Recurrent neural networks | Time-series predictions (e.g., sepsis progression) | Captures temporal dependencies effectively | Potentially high computational cost |
| Multilayer perceptron | Nonlinear relationship modeling (e.g., ICU mortality) | Flexible, integrates with hybrid systems | Prone to overfitting if not regularized |
| Balanced random forests | Handling imbalanced datasets | Interpretable, robust to class imbalance | Requires careful tuning of hyperparameters |
| Sequential Organ Failure Assessment | Assessing organ failure severity | Widely validated, clinically interpretable | Limited to scoring; no predictive modeling |
| Acute Physiology and Chronic Health Evaluation | Evaluating ICU patient mortality risk | Comprehensive, includes chronic health factors | Limited in real-time adaptability |
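The multilayer perceptron row above describes nonlinear modeling of ICU mortality. As a minimal sketch of what such a model computes, the following pure-Python forward pass runs a one-hidden-layer MLP (ReLU hidden units, sigmoid output) over hypothetical standardized inputs; all weights here are illustrative placeholders, not trained parameters:

```python
import math

def mlp_risk(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer MLP: ReLU hidden layer,
    sigmoid output yielding a risk score in (0, 1)."""
    hidden = []
    for weights, bias in zip(w_hidden, b_hidden):
        z = sum(w * x for w, x in zip(weights, features)) + bias
        hidden.append(max(0.0, z))            # ReLU activation
    z_out = sum(w * h for w, h in zip(w_out, hidden)) + b_out
    return 1.0 / (1.0 + math.exp(-z_out))     # sigmoid -> probability

# Illustrative, untrained weights: 3 input features, 2 hidden units
w_hidden = [[0.4, -0.2, 0.1], [-0.3, 0.5, 0.2]]
b_hidden = [0.1, -0.1]
w_out = [0.6, 0.8]
b_out = -0.5

# Hypothetical standardized inputs (e.g., heart rate, lactate, age)
risk = mlp_risk([1.2, 0.7, -0.3], w_hidden, b_hidden, w_out, b_out)
print(round(risk, 3))
```

A real model would learn the weights from labeled ICU data and would need the regularization noted in the table to avoid overfitting.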
Table 2 List of currently available neural network models
| Model | Variations | Use cases | Strengths | Weaknesses |
|---|---|---|---|---|
| Multilayer perceptron | N/A | Classification and regression tasks | Simple architecture; good for baseline models | Not ideal for spatial or sequential data; can overfit on high-dimensional data |
| Convolutional neural networks | AlexNet | Image recognition | Captures spatial hierarchies | Computationally intensive |
| | VGGNet | Object detection | Effective for image processing | Requires large datasets |
| | ResNet | Complex computer vision tasks | Residual learning avoids vanishing gradients | Requires higher computation |
| | Inception | Image recognition at lower computational cost | Efficient use of resources | Architectural complexity |
| | MobileNet | Mobile and embedded vision applications | Lightweight and efficient | Trades accuracy for efficiency |
| Recurrent neural networks (RNN) | Long short-term memory (LSTM) | Language modeling | Handles sequential data | Vanishing gradient problem |
| | Gated recurrent unit | Time-series forecasting | Simplified version of LSTM | Less powerful for complex tasks |
| | Bidirectional RNN | Speech recognition | Considers past and future context | Computationally expensive |
| Generative adversarial networks (GAN) | DCGAN | Image generation | Generates high-quality data | Training instability |
| | CycleGAN | Unsupervised image-to-image translation | Advances data augmentation | Mode collapse issues |
| | StyleGAN | Synthetic image creation for design tasks | Generates photorealistic images | Computationally expensive |
| Autoencoders | Variational autoencoders | Dimensionality reduction, generative tasks | Effective for feature extraction | Blurry reconstructions |
| | Denoising autoencoders | Anomaly detection and noise reduction | Robust against noisy inputs | Limited generative capability |
| Transformers | Bidirectional Encoder Representations from Transformers (BERT) | Contextual embeddings for natural language processing tasks | Captures long-range dependencies | High computational requirements |
| | Generative Pre-trained Transformer (GPT) series | Generative tasks (e.g., text generation) | Powerful generative abilities | Requires vast amounts of training data |
| | T5 | Text summarization, translation | Task-agnostic and flexible | Computationally intensive |
| Graph neural networks | Graph convolutional networks | Social network analysis, biological modeling | Handles graph-structured data | Scalability issues |
| | Graph attention networks | Recommendation systems | Captures relational information | Complex architecture |
| | GraphSAGE | Molecular modeling, protein interactions | Effective for inductive learning | Requires large-scale graph sampling |
| Self-organizing maps | N/A | Data visualization | Intuitive mapping and visualization | Less effective for high-dimensional data |
| Boltzmann machines | Restricted Boltzmann machines | Collaborative filtering, dimensionality reduction | Probabilistic feature learning | Difficult to train |
| | Deep belief networks | Feature learning and pretraining | Effective for unsupervised learning | Computationally expensive |
| Deep reinforcement learning models | Deep Q-networks (DQN) | Game playing (e.g., Atari games) | Learns optimal policies | Sample inefficiency |
| | Proximal policy optimization | Robotics, autonomous navigation | Handles high-dimensional inputs | Requires hyperparameter tuning |
| | Actor-critic methods | Autonomous systems | Balances policy and value learning | May require extensive exploration |
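The convolutional variants in Table 2 (AlexNet through MobileNet) all build on the same core operation: sliding a small kernel over an image to extract spatial features. A minimal pure-Python sketch of one "valid" (no-padding) 2D convolution, with an illustrative vertical-edge kernel:

```python
def conv2d_valid(image, kernel):
    """2D 'valid' convolution: slide the kernel over the image and
    return the map of elementwise-product sums (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# Tiny grayscale image whose right half is bright
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1], [-1, 1]]  # responds where intensity rises left-to-right
feature_map = conv2d_valid(image, kernel)
print(feature_map)
```

The output map peaks exactly along the left-to-right brightness edge; deep CNNs stack many such learned kernels (plus nonlinearities and pooling) to build the spatial hierarchies noted in the table, which is also why they are computationally intensive.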
- Citation: Sridhar GR, Yarabati V, Gumpeny L. Predicting outcomes using neural networks in the intensive care unit. World J Clin Cases 2025; 13(11): 100966
- URL: https://www.wjgnet.com/2307-8960/full/v13/i11/100966.htm
- DOI: https://dx.doi.org/10.12998/wjcc.v13.i11.100966
