Editorial Open Access
Copyright ©The Author(s) 2024. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Psychiatry. Oct 19, 2024; 14(10): 1415-1421
Published online Oct 19, 2024. doi: 10.5498/wjp.v14.i10.1415
Large multimodal models assist in the prevention and diagnosis of psychiatric disorders in students
Xin-Qiao Liu, Xin Wang, School of Education, Tianjin University, Tianjin 300350, China
Hui-Rui Zhang, Faculty of Education, The Open University of China, Beijing 100039, China
ORCID number: Xin-Qiao Liu (0000-0001-6620-4119); Xin Wang (0009-0001-1098-0589).
Author contributions: Liu XQ and Zhang HR designed the study; Liu XQ, Wang X, and Zhang HR wrote the manuscript; all authors have read and approved the final manuscript.
Conflict-of-interest statement: All the authors report no relevant conflicts of interest for this article.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Xin-Qiao Liu, PhD, Associate Professor, School of Education, Tianjin University, No. 135 Yaguan Road, Jinnan District, Tianjin 300350, China. xinqiaoliu@pku.edu.cn
Received: April 10, 2024
Revised: September 3, 2024
Accepted: September 25, 2024
Published online: October 19, 2024
Processing time: 190 Days and 1.2 Hours

Abstract

Students are considered one of the groups most affected by psychological problems. Given the serious harm caused by mental illnesses and the increasingly grave state of global mental health, it is imperative to explore new methods and approaches for the prevention and treatment of mental illnesses. Large multimodal models (LMMs), the most advanced artificial intelligence models (e.g., ChatGPT-4), have brought new hope for the accurate prevention, diagnosis, and treatment of psychiatric disorders. These models can assist in promoting mental health in several critical ways: they offer a strong foundation of medical knowledge and professional skills, provide emotional support, mitigate stigma, encourage more honest patient self-disclosure, reduce health care costs, improve medical efficiency, and expand mental health service coverage. However, these models must simultaneously address challenges related to health, safety, hallucinations, and ethics. In the future, these challenges should be addressed by developing relevant usage manuals, accountability rules, and legal regulations; implementing a human-centered approach; and intelligently upgrading LMMs through the deep optimization of the models, their algorithms, and other means. This effort will substantially contribute not only to the maintenance of students’ health but also to the achievement of global sustainable development goals.

Key Words: Large multimodal models; ChatGPT; Psychiatric disorders; Mental health; Student

Core Tip: Large multimodal models represented by ChatGPT have become a new approach for diagnosing, treating, and addressing students’ mental health issues. However, their use in diagnosing and preventing students’ mental disorders presents both opportunities and challenges. To unleash the full potential of large multimodal models and truly empower psychological well-being through technology, it is necessary to understand their strengths and weaknesses correctly and to continuously explore the organic integration of artificial intelligence and students’ mental health.



INTRODUCTION

Various sectors of society have increasingly recognized the important role of mental health in maintaining students’ well-being and achieving global development goals. Mental disorders, clinically significant impairments in cognition, emotion regulation, or behavior[1], are considered important risk factors that affect students’ survival and development[2,3]. A study based on two national surveys revealed a general deterioration in the mental health of United States college students: the rates of depression, anxiety, nonsuicidal self-injury, suicidal ideation, and suicide attempts increased markedly between 2007 and 2018, in many cases doubling over the period[4]. In addition, the global coronavirus disease 2019 pandemic further intensified mental health risks for students and posed significant threats and challenges to global health. Among 874 Bangladeshi university students, 40% experienced moderate to severe anxiety during the pandemic, 72% had depressive symptoms, and 53% had moderate to poor mental health conditions[5]. Students’ mental health has therefore become a universally recognized and challenging problem that urgently requires attention and solutions on a global scale.

With the continuous advancement of the information revolution and the rapid development of machine learning (ML) and deep learning technologies such as natural language processing (NLP) and computer vision, artificial intelligence (AI)-based large-scale screening and intervention methods have gradually entered public view and been applied in psychiatric research and practice. Representative examples include chatbots such as ChatGPT, which have tremendous potential for the prevention and diagnosis of mental disorders because of their convenience, efficiency, anonymity, and cost-effectiveness[6]. By leveraging the internet, big data, and large multimodal models (LMMs), digital intervention methods have become new approaches for the diagnosis, treatment, and resolution of student mental health issues in the modern era[7,8]. However, alongside these benefits, the deep application of AI-based methods also introduces risks and challenges concerning hallucinations, knowledge modeling, ethics, and morality[9].

In short, LMMs akin to ChatGPT are emerging technologies that present both opportunities and challenges in the diagnosis and prevention of students’ mental disorders. This article summarizes the advantages and limitations of LMMs represented by ChatGPT and offers prospects for their future development. In doing so, it aims to contribute to safeguarding student health and to promoting the integration of technology into student mental health services.

LMMS AND STUDENT MENTAL HEALTH

LMMs are upgraded large language models (LLMs): powerful AI models based on ML and deep learning techniques. LMMs overcome the limitation of LLMs, which can handle only text data, in that they can comprehensively learn from, train on, analyze, and process various types of data inputs, including text, images, videos, and audio, to provide answers to questions. The workflow of LMMs primarily includes: (1) Encoding input data from multiple modalities, such as text, speech, images, and videos, to extract features; (2) Fusing the extracted features via modules such as decoders; and (3) Outputting the corresponding results on the basis of the specific question and context, e.g., identifying important information in an image on the basis of given textual prompts or generating descriptions of the image’s content. Specifically, textual data are mostly processed and encoded via transformers such as BERT, whereas features from image and video data mostly rely on convolutional neural networks (CNNs) and vision transformers for analysis and recognition; CNNs also play important roles in extracting speech information. ChatGPT-4, developed by OpenAI, is a typical representative of LMMs. OpenAI describes ChatGPT as an interactive conversational model[10]. As a chatbot, ChatGPT has undergone continuous iteration and upgrading and possesses powerful language comprehension and text generation capabilities. It understands and responds to natural language inputs through the use of generative pretrained transformers, providing prompt, human-like written responses to user requests[11].
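The encode-fuse-output workflow described above can be made concrete with a minimal sketch. The PyTorch code below is purely illustrative: the tiny stand-in encoders, dimensions, and class names are our own assumptions, not the architecture of ChatGPT-4 or any production LMM; real systems use large pretrained encoders (e.g., BERT for text, a CNN or vision transformer for images).

```python
# Minimal sketch of the encode -> fuse -> output LMM workflow described above.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Tiny stand-in for a transformer text encoder such as BERT."""
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):                    # (batch, seq_len)
        return self.encoder(self.embed(token_ids)).mean(dim=1)  # (batch, dim)

class ImageEncoder(nn.Module):
    """Tiny stand-in for a CNN / vision-transformer image encoder."""
    def __init__(self, dim=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, images):                       # (batch, 3, H, W)
        return self.cnn(images)                      # (batch, dim)

class FusionModel(nn.Module):
    """Fuses per-modality features and maps them to an output, e.g., a label."""
    def __init__(self, dim=256, num_classes=2):
        super().__init__()
        self.text_enc = TextEncoder(dim=dim)
        self.image_enc = ImageEncoder(dim=dim)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, num_classes))

    def forward(self, token_ids, images):
        feats = torch.cat([self.text_enc(token_ids),
                           self.image_enc(images)], dim=-1)
        return self.fuse(feats)                      # (batch, num_classes) logits

model = FusionModel()
logits = model(torch.randint(0, 30522, (1, 16)), torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```

A full LMM replaces these stubs with pretrained components and a generative decoder, but the data flow, per-modality encoding followed by fusion and a task-specific output, is the same.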

Internet-based digital interventions have proven both effective and feasible in the prevention and diagnosis of students’ mental illnesses; Woebot, Tess, MePlusMe, and DeStressify are important representatives in this field. Woebot is a relational agent embedded in the Woebot LIFE smartphone application that provides guided self-help based on cognitive-behavioral therapy through text-based conversation[12]. A two-week randomized controlled trial involving college students revealed that Woebot can alleviate psychological issues and significantly reduce depressive symptoms compared with a control group that used National Institute of Mental Health e-books; the Woebot group also exhibited greater overall satisfaction and emotional awareness (EA)[13]. The psychological AI chatbot Tess, designed by X2AI Inc., offers a more convenient and efficient means of addressing mental health issues, significantly reducing depression and anxiety symptoms among college students[14] and affirming the potential of AI in the mental health field. MePlusMe is an online support system specifically developed to improve the mental health, well-being, and study skills of higher education students; its use significantly reduces users’ anxiety and depression levels while increasing their sense of well-being[15]. Additionally, DeStressify, a mindfulness-based application, provides mindfulness exercises ranging from 3 to 23 minutes in duration in audio, video, or text formats[16], offering accessible and effective support for improving the mental health of college students.

Hence, LMMs represented by ChatGPT-4 are currently the most advanced language models, bringing new hope and opportunities for achieving specific prevention, diagnosis, and treatment goals for students’ mental well-being. First, concerning the prevention of students’ psychiatric disorders, LMMs, with their powerful algorithms and computational capabilities, can not only identify psychological counseling information within programs or software but also conduct comprehensive analyses of various types of information. This enables the identification of larger groups suffering from mental health issues or potential “victims” of psychological risk, providing students with more accurate, extensive, and timely early screening and intervention for mental health problems and making preventive measures truly possible. Second, with respect to the diagnosis of students’ psychiatric disorders, compared with traditional chatbots, which process only text, LMMs are not confined to “paper-based” interactions: they can analyze in detail the images, audio, and videos provided by users. This means that LMMs can interact with students much as real, professional mental health providers do, comprehensively assessing many kinds of information and thereby providing more comprehensive and accurate psychological diagnoses to students suffering from psychological problems. Third, in terms of the treatment of students’ psychiatric disorders, LMMs consider various types of personal information about patients and can thereby formulate more personalized and appropriate treatment plans for students. Additionally, LMMs can provide round-the-clock follow-up services, promptly track changes in students’ psychological status during medication use, and revise and adjust treatment plans accordingly. This creates a virtuous cycle of “comprehensive diagnosis-personalized treatment-continuous monitoring-timely adjustment,” substantially contributing to the relief and cure of students’ psychological problems.
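To make this cycle concrete, the sketch below traces one student record through it. All names (care_cycle, lmm_assess, clinician_approves) are hypothetical placeholders rather than any existing API, and the clinician-approval step reflects the human-centric safeguard discussed in the conclusion.

```python
# Illustrative sketch of the "comprehensive diagnosis - personalized treatment -
# continuous monitoring - timely adjustment" cycle. Everything here is a
# hypothetical placeholder: `lmm_assess` stands in for a multimodal model call,
# and `clinician_approves` enforces human review of every plan.
from typing import Callable

def monitor(student_data: dict, plan: dict) -> dict:
    """Placeholder follow-up: merge the latest plan into the student record."""
    updated = dict(student_data)
    updated["last_plan"] = plan
    updated["stable"] = plan.get("severity") == "low"  # toy stability criterion
    return updated

def care_cycle(student_data: dict,
               lmm_assess: Callable[[dict], dict],
               clinician_approves: Callable[[dict], bool],
               max_rounds: int = 4) -> dict:
    """Run diagnose -> treat -> monitor -> adjust until stable or out of rounds."""
    plan: dict = {}
    for _ in range(max_rounds):
        assessment = lmm_assess(student_data)         # comprehensive diagnosis
        plan = assessment["suggested_plan"]           # personalized treatment
        if not clinician_approves(plan):              # human-centric safeguard
            return {"action": "refer_to_clinician"}
        student_data = monitor(student_data, plan)    # continuous monitoring
        if student_data["stable"]:                    # timely adjustment ends
            break
    return plan

# Toy run with stubbed-in model and clinician callbacks.
final_plan = care_cycle(
    {"text": "self-report", "mood_scores": [3, 2, 2]},
    lmm_assess=lambda d: {"suggested_plan": {"action": "guided CBT",
                                             "severity": "low"}},
    clinician_approves=lambda p: True,
)
print(final_plan)  # {'action': 'guided CBT', 'severity': 'low'}
```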

THE POTENTIAL OF LMMS REPRESENTED BY CHATGPT FOR PROMOTING STUDENTS’ MENTAL HEALTH
Knowledge and professional skills in mental health service provision

Sufficient medical knowledge is crucial for meeting patients’ health care needs and providing optimal care[17]. Without proper medical qualifications, even advanced model designs and technological sophistication cannot meet the threshold for implementing clinical treatment, let alone provide the best medical care to patients. The United States Medical Licensing Examination (USMLE), a three-step examination for medical licensure in the United States, assesses the ability of health care professionals to apply knowledge, concepts, principles, and patient-centered basic skills[18]. It is also an important foundation for providing safe and efficient mental health services to students. ChatGPT has demonstrated good performance on the USMLE, scoring at or near the passing threshold on all three exams without any specialized training or reinforcement while displaying a high level of concordance and insight in its explanations[19]. Additionally, ChatGPT can provide friendly and detailed responses to users’ queries, thereby meeting their need for life-support knowledge[20]. These findings indicate that LMMs represented by ChatGPT possess a strong foundation of medical knowledge and extensive professional skills and thus have the potential to provide mental health services to students.

Emotional support

Emotional support has a significant effect on shaping students’ personalities, alleviating physical and mental stress, and rectifying psychological problems. Advances in science have made it possible for AI to recognize human cognition and emotions[21]. AI with human psychological cognition can not only simulate the rational thinking of the “brain” and replicate the emotional thinking of the “heart” but also enable emotional interaction between humans and machines[22]. A study has shown that ChatGPT can outperform humans in EA assessment; its EA performance has also significantly improved over time, almost reaching the upper limit score of the Levels of EA Scale (98 out of 100)[23]. This finding indicates that LMMs represented by ChatGPT exhibit great potential for human emotion recognition and simulation. They can analyze and identify emotion-related text, images, videos, or audio input by students and provide timely and targeted emotional support on the basis of their current emotional states, thereby promptly identifying and alleviating users’ physical and mental stress to provide new insights into and avenues for resolving psychological issues.

Mitigating stigma risks and encouraging honest patient self-disclosure

Stigma is a social process that excludes individuals who are perceived to be potential sources of illness and may pose a threat to effective social living[24]. Although mental disorders can be treated or prevented[25], public stigma toward mental illness remains widespread[26] and constitutes a major barrier that deters students struggling with mental health problems from seeking professional help and services. As an AI chatbot, ChatGPT possesses characteristics such as personification, virtualization, and anonymity, which help minimize stigma risks and encourage students to engage in more honest self-disclosure. On the one hand, its anthropomorphic features facilitate a closer bond between ChatGPT and users, leading to increased emotional attachment and social feedback[27] and thus reducing users’ resistance to mental health services within a harmonious environment. On the other hand, the virtualization and anonymity that ChatGPT exhibits during interactions can effectively alleviate patients’ fears of self-disclosure, thereby guiding them to disclose sensitive or even stigmatized information about their own conditions more truthfully[28] and significantly enhancing the specificity and effectiveness of students’ mental health services.

Reduction in health care costs, improvement in health care efficiency, and increase in mental health service coverage

ChatGPT, as an advanced NLP tool, has the potential to transform the health care industry[29]. As an AI model, ChatGPT does not require food and does not become tired or fatigued[6], nor does it experience physical or mental health issues. It can therefore provide students with uninterrupted, high-quality psychological counseling or treatment services via the internet, significantly reducing the facility, personnel, and operating costs associated with mental health care. LMMs represented by ChatGPT also have great potential for drug development, disease diagnosis, workflow simplification, and medical assistance, which can improve overall health care efficiency. For example, they can provide doctors with references for professional decision-making on the basis of students’ medical histories, medical information, and various test results, maximizing the efficiency of mental health screening and diagnosis and thereby providing students with more targeted, efficient, and comprehensive mental health care services.

Moreover, LMMs can provide mental health services anytime and anywhere help is needed, whether a student wants to express psychological troubles, seek advice about mental issues, or receive psychological counseling. Regardless of wind, rain, or snow, and without leaving home or spending enormous sums of money, a student needs only an electronic device connected to the internet. The omnipresence of LMMs thus transcends the limitations of the external natural environment, frees mental health work from the constraints of household economic conditions, and helps reduce disparities in health care development among countries, effectively improving the accessibility and coverage of mental health care services. In short, LMMs can make the affordable, ubiquitous mental health care services that students need a reality.

CHALLENGES AND ETHICAL CONSIDERATIONS
Health concerns

Although LMMs such as ChatGPT show tremendous potential for mental health services and support, they also carry certain safety risks that may threaten the well-being of students. One study[30] noted that ChatGPT did not obtain a passing score on the Chinese National Medical Licensing Examination (NMLE) and revealed a clear gap between ChatGPT’s understanding and interpretation of NMLE knowledge and that of medical students. Moreover, its ability to practice medicine in key areas, such as earning user trust, determining medication needs, prescribing psychotropic medication, and cross-checking, is not yet clear and remains to be examined. Hence, there are certain risks related to its diagnoses and to students’ mental health safety. For the foreseeable future, it cannot and should not replace the leading role and professional status of health care professionals in promoting students’ mental health.

Security concerns

“Malicious actors” may use ChatGPT for cyberattacks or exploit vulnerabilities within it to bypass ethical constraints and leak private information, turning ChatGPT into the highly harmful “ThreatGPT”[31]. ChatGPT, driven by its core technology of big data, requires the extensive collection of students’ personal and usage information, inevitably raising issues of data ownership, data usage consent, data representativeness, and bias[32], while being unable to guarantee the confidentiality of the information entrusted to it[33]. From an individual perspective, the personal information of students with mental illnesses is highly sensitive and confidential; if maliciously stolen or unintentionally disclosed without authorization, it can greatly infringe upon students’ personal rights, disrupt their normal lives, and worsen their mental condition. From an institutional perspective, health care institutions that rely excessively on AI tools without strong security measures are vulnerable to malicious attacks, which can easily lead to information leaks, financial losses, or even paralysis of the entire health care system.

Hallucination concerns

“Hallucination” refers to instances in which a model generates text that is factually incorrect or contextually inappropriate. A lack of real-world knowledge, biased or inaccurate training data, and other factors contribute to hallucination problems[34]. Hallucination also arises in the mental health field: one study showed that ChatGPT exhibits spontaneous citation fabrication in psychiatric literature retrieval, with a matching rate of only 6% between a provided citation list and actual manuscripts[35]. The powerful NLP capabilities of LMMs allow them to generate fluent replies through training, and the “authoritativeness” and “professionalism” displayed in their responses can easily convince users of the accuracy of the results, leading users to disregard potential hallucination problems and accuracy risks. Hallucination has therefore become an important obstacle that must be overcome in promoting students’ mental health via LMMs.

Ethical concerns

The widespread promotion and application of ChatGPT in the health care field has inevitably sparked discussion of ethical issues. First, the legal risks associated with the use of ChatGPT have garnered public attention, and the question “who takes responsibility?”[36] has become an important issue that LMMs represented by ChatGPT must address. In contrast to human entities (i.e., professional health care providers), who are bound by health care laws and professional ethical standards, virtual entities akin to ChatGPT have unclear attributions of responsibility in medical practice. Research has explicitly argued that “ChatGPT is fun, but not an author”[37]. In this context, if a student experiences mental health harm as a result of incorrect information provided by ChatGPT, does ChatGPT bear legal responsibility? Second, ChatGPT poses challenges through inconsistent moral guidance. A study on ethical dilemmas revealed that ChatGPT lacks a firm moral stance yet readily provides moral advice, which can influence users’ moral judgments[38]. Students struggling with mental health issues typically already experience significant psychological stress and are in sensitive psychological states; if ChatGPT exhibits an inconsistent moral stance or provides inappropriate advice, it can cause psychological harm or even induce high-risk behaviors such as self-harm or suicide. Finally, intellectual property rights related to the use of ChatGPT must be considered: if ChatGPT is trained on large datasets that contain copyrighted or proprietary information, there is a risk of intellectual property infringement[39].

CONCLUSION

Mental health is crucial for students’ well-being and for the achievement of global development goals. Through the integration of advanced technologies, LMMs such as ChatGPT have demonstrated great potential for the prevention and diagnosis of students’ mental illnesses. However, LMMs also entail various challenges, including health, security, hallucination, and ethical concerns. In light of these challenges, this paper offers three perspectives on the future application of LMMs represented by ChatGPT in the field of student mental health. First, it is necessary to develop relevant usage manuals, accountability rules[40], and legal regulations. Given the ethical and safety concerns that arise during the use of LMMs, formulating usage manuals, accountability mechanisms, and legal regulations is a priority for their further application in the field of students’ mental health. Second, it is essential to adhere to the principle of human centricity[41]. In providing students’ mental health services and treatments, it is crucial to recognize the dominant role of health care professionals in mental health practices, with AI serving as an auxiliary tool; important decisions related to students and their illnesses must be made or approved by professional psychiatrists before taking effect. Finally, it is crucial to optimize and improve models and algorithms to enhance LMM performance. This requires staying up-to-date with the latest algorithmic advances, avoiding complacency, and ensuring that all application algorithms remain state-of-the-art; updating and supplementing models with mental health-related data from different countries and organizations while adhering to the principles of intellectual property protection; and introducing a user feedback mechanism. In the practical application of LMMs in the mental health field, user feedback on factual errors, doubtful responses, and the appropriateness of the response language should be collected and used to adjust model outputs, improving the accuracy and fluency of responses through continuous correction, promoting the optimization and improvement of the models, and building a closer bond between AI tools and patients.
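As a concrete illustration of the proposed user feedback mechanism, the sketch below shows one way such feedback could be collected and routed for correction. The FeedbackRecord and FeedbackQueue types and the flagging rule are hypothetical, shown only to indicate the kind of loop intended, not an existing system.

```python
# Illustrative sketch of a user feedback loop for an LMM-based service.
# All names and thresholds are hypothetical assumptions.
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    response_id: str
    rating: int            # e.g., 1 (poor) to 5 (excellent)
    flagged_error: bool    # user marked the answer as factually wrong
    comment: str = ""

@dataclass
class FeedbackQueue:
    records: list = field(default_factory=list)

    def submit(self, record: FeedbackRecord) -> None:
        self.records.append(record)

    def needs_review(self, min_rating: int = 2) -> list:
        """Responses to route to human reviewers and model-correction runs."""
        return [r for r in self.records
                if r.flagged_error or r.rating <= min_rating]

queue = FeedbackQueue()
queue.submit(FeedbackRecord("resp-001", rating=1, flagged_error=True,
                            comment="Cited a paper that does not exist."))
for r in queue.needs_review():
    print(r.response_id, r.comment)  # candidates for correction/retraining
```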

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Psychiatry

Country of origin: China

Peer-review report’s classification

Scientific Quality: Grade B

Novelty: Grade A

Creativity or Innovation: Grade A

Scientific Significance: Grade B

P-Reviewer: Ghimire R S-Editor: Li L L-Editor: A P-Editor: Yuan YY

References
1. World Health Organization. Mental disorders. Jun 8, 2022. [cited 20 March 2024]. Available from: https://www.who.int/news-room/fact-sheets/detail/mental-disorders
2. Liu XQ, Wang X. Adolescent suicide risk factors and the integration of social-emotional skills in school-based prevention programs. World J Psychiatry. 2024;14:494-506.
3. Liu X, Zhang Y, Gao W, Cao X. Developmental trajectories of depression, anxiety, and stress among college students: a piecewise growth mixture model analysis. Humanit Soc Sci Commun. 2023;10:736.
4. Duffy ME, Twenge JM, Joiner TE. Trends in Mood and Anxiety Symptoms and Suicide-Related Outcomes Among U.S. Undergraduates, 2007-2018: Evidence From Two National Surveys. J Adolesc Health. 2019;65:590-598.
5. Faisal RA, Jobe MC, Ahmed O, Sharker T. Mental Health Status, Anxiety, and Depression Levels of Bangladeshi University Students During the COVID-19 Pandemic. Int J Ment Health Addict. 2022;20:1500-1515.
6. Palanica A, Flaschner P, Thommandram A, Li M, Fossat Y. Physicians' Perceptions of Chatbots in Health Care: Cross-Sectional Web-Based Survey. J Med Internet Res. 2019;21:e12887.
7. Liu XQ, Guo YX, Zhang XR, Zhang LX, Zhang YF. Digital interventions empowering mental health reconstruction among students after the COVID-19 pandemic. World J Psychiatry. 2023;13:397-401.
8. Cao XJ, Liu XQ. Artificial intelligence-assisted psychosis risk screening in adolescents: Practices and challenges. World J Psychiatry. 2022;12:1287-1297.
9. Wu T, He S, Liu J, Sun S, Liu K, Han Q, Tang Y. A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development. IEEE/CAA J Autom Sinica. 2023;10:1122-1136.
10. OpenAI. ChatGPT: Optimizing Language Models for Dialogue. Nov 30, 2022. [cited 20 March 2024]. Available from: https://openai.com/blog/chatgpt
11. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023;27:75.
12. Durden E, Pirner MC, Rapoport SJ, Williams A, Robinson A, Forman-Hoffman VL. Changes in stress, burnout, and resilience associated with an 8-week intervention with relational agent "Woebot". Internet Interv. 2023;33:100637.
13. Fitzpatrick KK, Darcy A, Vierhile M. Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment Health. 2017;4:e19.
14. Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using Psychological Artificial Intelligence (Tess) to Relieve Symptoms of Depression and Anxiety: Randomized Controlled Trial. JMIR Ment Health. 2018;5:e64.
15. Goozee R, Barrable A, Lubenko J, Papadatou-Pastou M, Haddad M, McKeown E, Hirani SP, Martin M, Tzotzoli P. Investigating the feasibility of MePlusMe, an online intervention to support mental health, well-being, and study skills in higher education students. J Ment Health. 2022;1-11.
16. Lee RA, Jung ME. Evaluation of an mHealth App (DeStressify) on University Students' Mental Health: Pilot Trial. JMIR Ment Health. 2018;5:e2.
17. Zerbo O, Massolo ML, Qian Y, Croen LA. A Study of Physician Knowledge and Experience with Autism in Adults in a Large Integrated Healthcare System. J Autism Dev Disord. 2015;45:4002-4014.
18. United States Medical Licensing Examination. [cited 20 March 2024]. Available from: https://www.usmle.org/
19. Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, Madriaga M, Aggabao R, Diaz-Candido G, Maningo J, Tseng V. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2:e0000198.
20. Zhu L, Mou W, Yang T, Chen R. ChatGPT can pass the AHA exams: Open-ended questions outperform multiple-choice format. Resuscitation. 2023;188:109783.
21. Zhou Y, Zhang T, Zhang L, Xue Z, Bao M, Liu L. A Study on the Cognition and Emotion Identification of Participative Budgeting Based on Artificial Intelligence. Front Psychol. 2022;13:830342.
22. Zhao J, Wu M, Zhou L, Wang X, Jia J. Cognitive psychology-based artificial intelligence review. Front Neurosci. 2022;16:1024316.
23. Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol. 2023;14:1199058.
24. Bhanot D, Singh T, Verma SK, Sharad S. Stigma and Discrimination During COVID-19 Pandemic. Front Public Health. 2020;8:577018.
25. Schnyder N, Panczak R, Groth N, Schultze-Lutter F. Association between mental health-related stigma and active help-seeking: systematic review and meta-analysis. Br J Psychiatry. 2017;210:261-268.
26. Parcesepe AM, Cabassa LJ. Public stigma of mental illness in the United States: a systematic literature review. Adm Policy Ment Health. 2013;40:384-399.
27. Zhang A, Patrick Rau P. Tools or peers? Impacts of anthropomorphism level and social role on emotional attachment and disclosure tendency towards intelligent agents. Comput Hum Behav. 2023;138:107415.
28. Lucas GM, Gratch J, King A, Morency L. It's only a computer: Virtual humans increase willingness to disclose. Comput Hum Behav. 2014;37:94-100.
29. Zheng Y, Wang L, Feng B, Zhao A, Wu Y. Innovating Healthcare: The Role of ChatGPT in Streamlining Hospital Workflow in the Future. Ann Biomed Eng. 2024;52:750-753.
30. Wang X, Gong Z, Wang G, Jia J, Xu Y, Zhao J, Fan Q, Wu S, Hu W, Li X. ChatGPT Performs on the Chinese National Medical Licensing Examination. J Med Syst. 2023;47:86.
31. Gupta M, Akiri C, Aryal K, Parker E, Praharaj L. From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. IEEE Access. 2023;11:80218-80245.
32. Cohen IG. What Should ChatGPT Mean for Bioethics? Am J Bioeth. 2023;23:8-16.
33. Abdulai AF, Hung L. Will ChatGPT undermine ethical values in nursing education, research, and practice? Nurs Inq. 2023;30:e12556.
34. Jha S, Jha SK, Lincoln P, Bastian ND, Velasquez A, Neema S. Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting. IEEE International Conference on Assured Autonomy (ICAA); 2023; Laurel, MD, United States: 149-152.
35. McGowan A, Gui Y, Dobbs M, Shuster S, Cotter M, Selloni A, Goodman M, Srivastava A, Cecchi GA, Corcoran CM. ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search. Psychiatry Res. 2023;326:115334.
36. Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, Aggarwal K, Ibrahim S, Patil V, Smriti K, Shetty S, Rai BP, Chlosta P, Somani BK. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front Surg. 2022;9:862322.
37. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379:313.
38. Krügel S, Ostermaier A, Uhl M. ChatGPT's inconsistent moral advice influences users' judgment. Sci Rep. 2023;13:4569.
39. Zhong Y, Chen YJ, Zhou Y, Lyu YA, Yin JJ, Gao YJ. The Artificial intelligence large language models and neuropsychiatry practice and research ethic. Asian J Psychiatr. 2023;84:103577.
40. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature. 2023;614:224-226.
41. Ozmen Garibay O, Winslow B, Andolina S, Antona M, Bodenschatz A, Coursaris C, Falco G, Fiore SM, Garibay I, Grieman K, Havens JC, Jirotka M, Kacorri H, Karwowski W, Kider J, Konstan J, Koon S, Lopez-Gonzalez M, Maifeld-Carucci I, McGregor S, Salvendy G, Shneiderman B, Stephanidis C, Strobel C, Ten Holter C, Xu W. Six Human-Centered Artificial Intelligence Grand Challenges. Int J Hum-Comput Int. 2023;39:391-437.