Copyright
©The Author(s) 2025.
World J Psychiatry. Nov 19, 2025; 15(11): 108199
Published online Nov 19, 2025. doi: 10.5498/wjp.v15.i11.108199
Table 1 Technical features of large language models
| Technologies | Characteristics |
|---|---|
| PT | Large-scale medical knowledge learning: Through training on massive medical literature and case data, the model captures medical language patterns and basic pathological features |
| | Reduced annotation dependency: The model can use unannotated medical texts (such as electronic health records and papers) for initial training |
| | Versatility foundation: It provides general medical semantic understanding for subsequent tasks (such as diagnosis and report generation) |
| SFT | Precise task adaptation: Optimizes model performance for specific medical tasks, such as disease classification and image recognition |
| | High accuracy: Improves the reliability of the model in specialized areas through professionally annotated data, such as cases labeled by doctors |
| | Enhanced compliance: Adjusts model outputs to meet privacy or ethical requirements, such as anonymization |
| Agent | Automated processes: Performs repetitive tasks such as medical record organization and appointment reminders to improve healthcare efficiency |
| | Multimodal interaction: Enables patient-doctor communication and report interpretation through a combination of voice, text, and images |
| | Real-time decision support: Dynamically provides diagnostic and treatment suggestions, such as drug titration, in conjunction with a rule engine |
| RAG | Real-time knowledge integration: Incorporates the latest medical databases, such as PubMed and clinical guidelines, to prevent outdated knowledge within the model |
| | Evidence traceability: Generates results accompanied by references so that medical professionals can verify their reliability |
| | Mitigation of hallucination risk: Grounds generated content in authoritative knowledge bases to minimize the likelihood of the model fabricating medical information |
| PE | Output controllability: Structured instructions guide the model to generate standardized results |
| | Flexible domain adaptation: Adjusting prompts can quickly switch application scenarios |
| | Reduced training costs: Optimizes performance on specific tasks (such as improving the accuracy of rare disease descriptions) without retraining the model |

PT: Pre-training; SFT: Supervised fine-tuning; RAG: Retrieval-augmented generation; PE: Prompt engineering.
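
As a concrete illustration of the RAG entry in Table 1 (real-time knowledge integration, evidence traceability, and grounding in authoritative sources), the sketch below retrieves matching snippets from a toy in-memory knowledge base and assembles a structured prompt that requires cited answers. It is a minimal sketch only: the snippet texts, source names, and keyword-overlap scoring are hypothetical placeholders, and the final call to a language model is deliberately omitted.

```python
# Minimal RAG-style sketch: retrieve supporting snippets, then build a prompt
# that instructs the model to answer only from cited evidence.
# All snippet texts and source names below are hypothetical, not real guidelines.

from dataclasses import dataclass


@dataclass
class Snippet:
    source: str   # identifier of a guideline or article (hypothetical here)
    text: str


KNOWLEDGE_BASE = [
    Snippet("Guideline-A (hypothetical)",
            "First-line treatment for moderate depression includes SSRIs and psychotherapy."),
    Snippet("Guideline-B (hypothetical)",
            "Lithium levels should be monitored regularly during maintenance treatment."),
    Snippet("Review-C (hypothetical)",
            "Sleep disturbance is a common early symptom of a manic episode."),
]


def retrieve(question: str, k: int = 2) -> list[Snippet]:
    """Rank snippets by naive keyword overlap; a real system would use embeddings."""
    q_terms = set(question.lower().split())
    scored = [(len(q_terms & set(s.text.lower().split())), s) for s in KNOWLEDGE_BASE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:k] if score > 0]


def build_prompt(question: str) -> str:
    """Assemble a structured prompt: cited evidence first, then the clinical question."""
    evidence = retrieve(question)
    cited = "\n".join(f"[{i + 1}] {s.source}: {s.text}" for i, s in enumerate(evidence))
    return (
        "Answer using only the evidence below and cite it as [n].\n\n"
        f"Evidence:\n{cited}\n\n"
        f"Question: {question}\n"
    )


if __name__ == "__main__":
    # The assembled prompt would be sent to a language model; that call is omitted here.
    print(build_prompt("What is first-line treatment for moderate depression?"))
```

Keeping the evidence block and the citation instruction inside the prompt is what enables the traceability and hallucination-mitigation properties listed for RAG in Table 1: the downstream model is constrained to material that a clinician can verify against the cited sources.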
- Citation: Wang YF, Li MD, Wang SH, Fang Y, Sun J, Lu L, Yan W. Large language models in clinical psychiatry: Applications and optimization strategies. World J Psychiatry 2025; 15(11): 108199
- URL: https://www.wjgnet.com/2220-3206/full/v15/i11/108199.htm
- DOI: https://dx.doi.org/10.5498/wjp.v15.i11.108199
