Editorial Open Access
Copyright ©The Author(s) 2024. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Psychiatry. Mar 19, 2024; 14(3): 334-341
Published online Mar 19, 2024. doi: 10.5498/wjp.v14.i3.334
Potential use of large language models for mitigating students’ problematic social media use: ChatGPT as an example
Xin-Qiao Liu, Zi-Ru Zhang, School of Education, Tianjin University, Tianjin 300350, China
ORCID number: Xin-Qiao Liu (0000-0001-6620-4119).
Author contributions: Liu XQ designed the study; Liu XQ and Zhang ZR wrote the manuscript; all authors have approved the final manuscript.
Conflict-of-interest statement: The authors declare no conflict of interest.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Xin-Qiao Liu, PhD, Associate Professor, School of Education, Tianjin University, No. 135 Yaguan Road, Jinnan District, Tianjin 300350, China. xinqiaoliu@pku.edu.cn
Received: December 5, 2023
Peer-review started: December 5, 2023
First decision: January 6, 2024
Revised: January 15, 2024
Accepted: February 5, 2024
Article in press: February 5, 2024
Published online: March 19, 2024
Processing time: 104 Days and 22.9 Hours

Abstract

The problematic use of social media has numerous negative impacts on individuals' daily lives, interpersonal relationships, and physical and mental health. At present, few methods and tools are available to alleviate problematic social media use, and their potential has yet to be fully realized. Emerging large language models (LLMs) are becoming increasingly popular for providing information and assistance and are being applied in many aspects of life. In mitigating problematic social media use, LLMs such as ChatGPT can play a positive role by serving as conversational partners and outlets for users, providing personalized information and resources, and monitoring and intervening in problematic social media use. In this process, we should recognize the enormous potential of LLMs such as ChatGPT and leverage their advantages to better address problematic social media use, while also acknowledging the limitations and potential pitfalls of the technology, such as errors, limited problem-solving capacity, privacy and security concerns, and the risk of overreliance. When leveraging the advantages of LLMs to address problematic social media use, we must adopt a cautious and ethical approach and remain vigilant regarding potential adverse effects, so that the technology can better serve individuals and society.

Key Words: Problematic use of social media; Social media; Large language models; ChatGPT; Chatbots

Core Tip: Large language models (LLMs) such as ChatGPT have opened a new chapter in intelligent human-machine dialog. LLMs can provide better solutions for problematic social media use, thus mitigating its associated harms. In addition to advancing the technology and improving the objectivity and rationality of LLM-generated results, it is imperative for society, individuals, and other stakeholders to collectively establish a favorable environment for the application of artificial intelligence.



INTRODUCTION

Social media is ubiquitous in people's daily lives and plays an increasingly significant role. According to statistical data, as of October 2023, the number of global social media users had reached 4.95 billion, accounting for 61.7% of the global population, and the number of social media users continues to grow at an accelerating pace[1]. The latest data indicate that 90% of internet users use social media every month; a typical social media user is active on or visits an average of six to seven social platforms monthly, spending an average of 2 h and 24 min on social media daily. People spend approximately 15% of their waking hours using social media[1]. Social media has facilitated communication and contact between people, helping them maintain social relationships and obtain social support. Research has shown that using social media contributes to an increase in positive self-views, such as narcissism and self-esteem[2]. However, as the time and intensity of mobile social media use have increased, certain disadvantages have become more prominent[3]. Patterns of social media use that harm the individual are often referred to as "problematic social media use". Currently, there are two main perspectives on the nature of this concept. One view holds that it is a nonpathological form of problematic use[4] associated with mild to moderate psychological and physiological symptoms (e.g., anxiety, depression). The other view holds that it should be considered a pathological addiction[5], which explains users' difficulty in controlling their behavior[6] and implies that social media overuse can have negative psychological and physiological consequences similar to those of other addictive behaviors[7]. To enhance the comprehensiveness of the argument, this article incorporates both views.

Problematic social media use can have adverse effects on individuals' work and study, family relationships and social interactions, as well as health and well-being[8]. Previous studies have shown that users with greater social media use volume and frequency are more likely to experience sleep disturbances[9], elevated anxiety levels, and depressive tendencies[10], all of which harm users' physical and psychological health[11,12]. Research has also shown that internet usage time has a bidirectional relationship with symptoms of depression and attention deficit hyperactivity disorder, with the risk being particularly severe among adolescents with poor prior mental health[13]. During the coronavirus disease 2019 (COVID-19) pandemic, home isolation and social distancing requirements increased individuals' anxiety levels and society-wide experiences of negative emotions[14]. In this situation, the use of social media increased exponentially[15], particularly on platforms such as TikTok, Pinterest, Reddit, Facebook, Snapchat, Instagram, LinkedIn, and Twitter[16]. Active social media users grew by between 8% and 38% during the pandemic[17]. This increased use of social media led to lifestyle changes, such as reduced physical activity, more frequent sleep problems, and higher levels of substance use. To some extent, social media served as a tool for coping with anxiety and negative emotions[18]; at the same time, however, COVID-19-related information overload, with much of the information sensationalized or even incorrect, exacerbated people's anxiety and fear, thereby reducing their sense of well-being[19].

Based on various theories, several methods have been applied to alleviate problematic social media use. Cognitive-behavioral therapy, one of the most widely used psychotherapies[21], has been applied extensively in addiction research related to pathological internet overuse[20]. Its main focus is patients' irrational cognitions: by changing patients' views of and attitudes toward their psychological problems[22], techniques such as cognitive restructuring and skills support have been used to help students recognize the negative consequences of their addiction to social media and the potential benefits of reducing social media usage[23]. For children and adolescents, family-based interventions help address internet addiction[24]. On the one hand, it is necessary to improve communication, enhance the quality of parent-child relationships, and help adolescents perceive social support and regulate their emotions. On the other hand, it is important to teach family members to monitor internet use to prevent and address teenage internet addiction[25]. There is also preliminary evidence that group-based face-to-face interaction, multimodal counseling, and motivational interviewing are effective at alleviating internet addiction[26]. By contrast, research on alleviating problematic social media use specifically remains limited, as do the corresponding intervention measures. Therefore, there is still a need to further explore the potential of technologies and tools in this regard.

Large language models (LLMs) are deep learning models trained on vast amounts of text data, with parameters numbering in the billions. They can generate natural language text or understand the meaning of text and can thus perform natural language processing tasks such as text classification, question answering, and dialog[27]. Computational linguistic research indicates that LLMs can significantly outperform other natural language processing algorithms[28]. To enhance natural language understanding, researchers introduced the Transformer architecture[29], which represents semantic information at a deeper level. The Transformer has become the foundation of LLMs, and a variety of architectures and approaches have been built on it[30]. LLMs are expected to serve as foundational models for solving a wide range of tasks and are considered an important path toward artificial general intelligence. ChatGPT, a typical LLM application, is an AI chatbot based on OpenAI's GPT (Generative Pre-trained Transformer) models, which have been trained on large amounts of text data, including books, news articles, websites, and Wikipedia, to generate human-like text[31]. ChatGPT exhibits flexible performance in natural language processing, outperforming other models[32]. In recent years, ChatGPT has received much attention in a variety of areas, including mental health services[33]. ChatGPT has great potential for addressing problematic social media use, for example, by providing information and resources through integration with search engines and by offering real-time monitoring of and intervention in problematic social media use through integration with social media platforms.
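
To make this prompt-driven versatility concrete, the minimal sketch below steers a single model through classification, question answering, and dialog using nothing but natural-language instructions. It assumes the OpenAI Python client (openai >= 1.0) with an API key in the environment; the model name and prompt wording are illustrative assumptions, not part of the cited studies.

```python
# Minimal sketch: one LLM, three NLP tasks, steered only by prompts.
# Assumes openai>=1.0 and OPENAI_API_KEY set; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

post = "I can't stop scrolling at 3 a.m. and I feel awful the next day."

# Text classification
print(ask("Classify the sentiment of this post as positive, neutral, or "
          f"negative. Reply with one word.\n\nPost: {post}"))

# Question answering
print(ask("In one sentence, what is problematic social media use?"))

# Dialog
print(ask("Respond supportively, in two sentences, to a student who "
          f"wrote: {post}"))
```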

LLMS CAN ALLEVIATE PROBLEMATIC SOCIAL MEDIA USE: THE CASE OF CHATGPT
Serve as an anonymous channel for communication and venting

Users can share their confusion, challenges, and anxieties about social media use with ChatGPT, which can provide emotional support and advice to help users cope with problematic social media use.

Individuals may be constrained by real-life factors, such as time and space, leaving them with insufficient opportunities to communicate and vent in daily life. ChatGPT can compensate for this lack of communication by providing 24-hour online support and companionship[34]. Moreover, because a chatbot is not perceived as thinking or forming judgments of its own, people may be more willing to disclose information to it than to real human communication partners, changing the nature and outcomes of disclosure[35]. Thus, ChatGPT can help people with problematic social media use by compensating for the limitations of real-world conditions while also acting as an anonymous conversational partner, enhancing the objectivity and efficiency of these exchanges.

Research indicates that when adolescents lack emotional responsiveness, sufficient care, and attention at home, do not receive appropriate supervision and monitoring, or are unable to engage in open communication, they may use social media more frequently[25]. In such scenarios, ChatGPT can act as a channel for communication and venting by engaging users in friendly conversation, thereby mitigating the problematic social media use that stems from inadequate communication. Chatbots and conversational agents have existed for more than half a century, and research indicates their potential for addressing mental health concerns[36], with well-known examples including ELIZA, ALICE, and SmarterChild[37]. Compared with these earlier chatbots, ChatGPT has evolved from a static, database-driven design to a blend of real-time learning and evolutionary algorithms, learning new responses and contexts from real-time interactions with humans[38]. ChatGPT understands and learns from users' language and underlying thinking, ultimately generating focused, logical, and well-organized responses.
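
As a concrete illustration of such a round-the-clock conversational outlet, the sketch below keeps the full dialog history so that replies stay in context. It again assumes the OpenAI Python client; the system prompt is our illustrative assumption, not a clinically validated configuration.

```python
# Sketch of a 24-hour supportive chat loop (assumes openai>=1.0;
# the system prompt and model name are illustrative, not clinical advice).
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "system",
    "content": ("You are a supportive, non-judgmental listener. Help the "
                "user reflect on their social media habits, validate their "
                "feelings, and suggest small, concrete coping steps. "
                "Encourage professional help for serious distress."),
}]

while True:
    user_text = input("You: ")
    if user_text.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```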

Personalized information and resources can be provided to help resolve problems

The advancement of technology and the widespread use of the internet have made it easier for all demographic groups to access health information[39]. An increasing number of online users now turn to chatbots and other artificial intelligence systems for information and assistance[40]. When individuals encounter problematic social media usage, they can seek relevant information and resources by querying the internet or consulting chatbots to help them understand and resolve the issue. On the one hand, ChatGPT can respond to various queries and generate answers drawing on internet resources, providing users with the information and resources they need. Search providers such as Google (with Bard) and Microsoft (with Bing) have already integrated conversational artificial intelligence chatbots akin to ChatGPT to enhance search efficiency by summarizing relevant content for users[41].

On the other hand, because users are highly heterogeneous, ChatGPT can capture keywords during interactions to provide personalized information and resources catering to individual needs. This personalized support can take into account the user's specific circumstances, such as demographic characteristics (gender, age, race, etc.), personal experiences, environmental conditions, and the potential causes of problematic social media usage, and tailor information and resources accordingly. These include: (1) Curricular information and resources focused on cognitive-behavioral skill enhancement[42], such as the harms of problematic social media use and the benefits that may result from improvement[23]; (2) problematic social media use screening and evaluation tools[43], which can be used for self-monitoring and assessment; and (3) the design of internet-based intervention programs[42] to help users regulate their own state and solve problems. Such information and resources can help users identify and improve problematic social media use.
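
One way to operationalize this tailoring is to fold the captured attributes into the prompt itself, so that the returned material tracks points (1)-(3) above. The sketch below does exactly that; the profile schema, function name, and model name are illustrative assumptions.

```python
# Sketch: building a personalized request from captured user attributes
# (assumes openai>=1.0; profile schema and model name are illustrative).
from openai import OpenAI

client = OpenAI()

def recommend_resources(profile: dict) -> str:
    """Ask the model for resources tailored to one user's situation."""
    prompt = (
        f"A {profile['age']}-year-old {profile['occupation']} reports "
        f"{profile['daily_hours']} hours of daily social media use, mainly "
        f"driven by {profile['cause']}. Briefly suggest: "
        "(1) cognitive-behavioral skill-building material on the harms of "
        "problematic use and the benefits of cutting back; "
        "(2) a validated self-screening questionnaire they could take; and "
        "(3) a simple internet-based self-regulation plan."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(recommend_resources({
    "age": 20,
    "occupation": "undergraduate student",
    "daily_hours": 6,
    "cause": "exam stress and fear of missing out",
}))
```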

Real-time monitoring and intervention of problematic social media usage behavior

ChatGPT can analyze users' activity characteristics while they use social media. It can monitor the content and quality of what users browse, as well as the duration, time periods, and frequency of their usage, to assess whether that usage is reasonable and to promptly identify problematic patterns. By analyzing the posts users make on social media, ChatGPT can also detect potential problematic usage from the language of those posts.
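
The duration-, timing-, and frequency-based side of this monitoring can be expressed as simple rules over session logs, as in the sketch below; the thresholds are illustrative assumptions, not validated clinical cut-offs.

```python
# Sketch: flagging potentially problematic usage from session logs.
# All thresholds are illustrative assumptions, not clinical cut-offs.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Session:
    start: datetime
    end: datetime

    @property
    def minutes(self) -> float:
        return (self.end - self.start).total_seconds() / 60

def flag_day(sessions: list[Session]) -> list[str]:
    """Return human-readable warnings for one day of usage."""
    warnings = []
    total = sum(s.minutes for s in sessions)
    if total > 180:                 # more than 3 h in one day
        warnings.append(f"High daily use: {total:.0f} min")
    if len(sessions) > 20:          # very frequent checking
        warnings.append(f"Frequent sessions: {len(sessions)}")
    if any(s.start.hour >= 23 or s.start.hour < 5 for s in sessions):
        warnings.append("Late-night use detected")
    return warnings

day = [
    Session(datetime(2024, 3, 1, 12, 0), datetime(2024, 3, 1, 14, 30)),
    Session(datetime(2024, 3, 1, 23, 40), datetime(2024, 3, 2, 1, 10)),
]
print(flag_day(day))  # flags high daily use and late-night use
```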

Social media posts consist primarily of text and, to a certain extent, reflect individuals' mental health status, serving as a potential source of information about their thoughts and feelings regarding their own condition[44]. Through natural language processing, ChatGPT can identify users' emotions and feelings during social media usage and monitor the negative effects of that usage. Additionally, in conjunction with websites or mobile applications, when problematic usage is detected, ChatGPT can provide real-time feedback and intervene with content (such as relevant mental health educational texts, videos, and interactive tools) and actions. For example, with appropriate programming, when ChatGPT detects that a user has been watching videos for too long, it can send specific messages, such as inserting a public service video featuring a celebrity the user is familiar with, which may improve compliance[45]. It can also encourage users to make positive changes in their daily lives and address possible barriers, for example, by encouraging them to rest their eyes, exercise, and spend more time with loved ones and friends[45]. It can likewise suggest stress-coping strategies, dietary recommendations, physical activity, and routines based on users' current conditions and preferences[45]. Such content and actions can intervene in problematic social media usage to some extent.
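
Joining the two steps, the sketch below screens a post for negative affect and, when the screen triggers, pushes a supportive nudge of the kind just described. The prompt wording, trigger rule, and nudge messages are illustrative assumptions.

```python
# Sketch: screen a post for negative affect, then push a gentle nudge.
# Assumes openai>=1.0; prompt, trigger rule, and nudges are illustrative.
from openai import OpenAI

client = OpenAI()

NUDGES = [
    "You've been on your feed for a while. How about a short walk?",
    "A friend might love to hear from you offline right now.",
    "Here is a three-minute breathing exercise you could try.",
]

def post_signals_distress(post: str) -> bool:
    """Return True if the model reads negative affect in the post."""
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": (
            "Does this social media post express negative emotions such as "
            f"anxiety, loneliness, or distress? Answer yes or no.\n\n{post}"
        )}],
    ).choices[0].message.content
    return verdict.strip().lower().startswith("yes")

post = "Four hours on my feed again and I just feel emptier than before."
if post_signals_distress(post):
    print(NUDGES[0])  # a real app would pick a nudge suited to the user
```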

POTENTIAL RISKS
There are limitations to mitigating problematic social media use

Because ChatGPT can operate erroneously, it cannot be guaranteed that all generated results are reasonable when ChatGPT is applied to mitigate users' problematic social media usage.

First, ChatGPT may operate on erroneous data. The data from which ChatGPT learns are sourced from the public internet, including but not limited to webpages, books, social media, and conversational data. Given the vast volume of data and the limitations of current filtering technologies, biased and erroneous content inevitably enters the training dataset, and ChatGPT often reproduces such text without reliably citing original sources or authors.

Second, ChatGPT produces "hallucinations", which are considered a significant issue in LLMs[46]. Many researchers have noted that ChatGPT sometimes presents fluent and convincing sentences that contain factual inaccuracies, false statements, and erroneous data[47]. Users with problematic social media usage tendencies are more likely to belong to groups with limited access to information sources; lacking the ability to discern errors, they may be misled by false information and inappropriate recommendations when using ChatGPT, which is detrimental to improving problematic social media usage. Finally, a "digital divide" and a "knowledge divide" persist between urban and rural areas, and access to large language model technologies and services is unevenly distributed. As a result, the benefits of ChatGPT do not reach all individuals in a balanced way[38], introducing bias into efforts to address problematic social media use.

There is bias in alleviating problematic social media usage

ChatGPT cannot grasp the nuances of a user's life history and current situation, which may lie at the root of their mental health issues[48].

First, ChatGPT is unable to comprehend many complex factors that impact users, such as socioeconomic status, education, cultural influences, and family dynamics, all of which can have profound effects on a person's mental state. Additionally, the benefits of ChatGPT cannot equally reach all individuals. Research has shown that the richness of ChatGPT language responses and the comprehensibility of writing in some languages are significantly inferior to those in English[49]. This suggests that languages that have not been fully researched may be left out of the ChatGPT revolution. Therefore, people in different language environments may not achieve the same effectiveness in using ChatGPT to address problematic social media usage.

ChatGPT may cause privacy and security issues for users

Using ChatGPT requires providing a large amount of data, such as users' account information, user content, communication records, and social media information[50]. Although the initial purpose of collecting these data is to serve users, the process may lead to privacy violations, the illegal use of personal information, and the leakage of state secrets. This not only causes trouble for the users themselves but can also affect broader social groups and domains, sometimes leading to immeasurable losses. OpenAI's handling of personal information depends largely on the privacy laws of different countries[50], and although OpenAI claims compliance with the GDPR and other relevant laws, these measures may not fully address individuals' privacy concerns regarding ChatGPT[51].

Users' overreliance on ChatGPT may lead to another extreme

The interactions generated by chatbots such as ChatGPT are increasingly similar to real human interactions, which may lead people to rely on them excessively[52], resulting in unsafe and irrational usage. Users may become overly reliant on chatbots, which are accessible 24/7 with a single click, potentially exacerbating addictive behavior[48]. Research has shown that autonomy is directly correlated with positive treatment outcomes and is a common element of effective treatment interventions[53,54], and that enhancing autonomy is important for reducing problematic social media use. Excessive communication with ChatGPT may lead individuals to reduce their communication and interaction with peers. If users come to perceive ChatGPT as a more important communication partner than real people and prioritize interactions with machines over human interactions, they may become increasingly detached from real society, with negative consequences[55]. With ChatGPT, people can directly access needed knowledge without exercising autonomy, thereby inhibiting their critical thinking and their ability to evaluate and analyze information comprehensively[56], which is detrimental to their psychological well-being.

Moreover, the rights and responsibilities involved in using ChatGPT remain vaguely defined. These include users' awareness of ChatGPT's limitations and biases, as well as OpenAI's responsibility, as the developer, to ensure that ChatGPT's algorithms are autonomous and beneficial to users[57]. The impact on people is intangible and difficult to measure; the use of ChatGPT is at one's own risk[58], and once harm occurs, the consequences fall on the users themselves and cannot easily be remedied.

DISCUSSION

The era of deep integration between artificial intelligence and human life has arrived. While there is great hope for artificial intelligence to address problematic social media usage, developing accurate algorithms alone cannot solve these issues. The future development of large language models in mitigating problematic social media use is worthy of discussion.

First, addressing the ethical issues and societal impacts of LLMs such as ChatGPT is vital. This requires society as a whole to establish a common understanding and strive to create a positive environment for the use of ChatGPT. On the one hand, guidelines for the use and application of content-generating AI tools such as ChatGPT need to be formulated to establish clear legal boundaries. On the other hand, education and outreach should ensure that administrators and users understand these guidelines, enhance users' digital literacy, strengthen their capacity for rational judgment, and teach them to use the technology responsibly and in an informed manner, thereby reducing potential harm.

Second, in the technical realm, it is essential to continue strengthening research and development. Developers need to improve the training of ChatGPT to enhance the objectivity and rationality of the results it generates. For example, OpenAI applies reinforcement learning from human feedback to pretrained and fine-tuned GPT models, using human judgments of output quality to further optimize the models' parameter weights and thereby generate more rational results. There is also a need to strengthen the professional skills and ethical training of AI trainers and to improve the overall professional environment, so that human factors do not exacerbate biases or ethical problems and so that the accuracy of these LLMs improves while their biases diminish. Adherence to relevant data protection laws and regulations is necessary to ensure that user privacy is protected at every stage of data collection, storage, analysis, use, and sharing. The results of LLM operations need to be assessed, and errors must be corrected promptly. Benchmarks for evaluating the performance of ChatGPT with various users and groups should be continuously established and refined to promptly detect and eliminate any adverse effects of its operation.

Finally, additional professional and targeted development is needed in the area of problematic social media usage. Current applications of artificial intelligence to problematic social media usage largely focus on basic conversation, counseling, and monitoring functions with limited targeting. Because research on problematic social media use is itself limited, knowledge gaps remain in defining, identifying, and understanding the psychobiological mechanisms behind problematic social media use[59], and future research should attempt to fill these gaps using standardized methods. Further specialized development is therefore necessary: it should focus on the characteristics of and factors influencing problematic social media usage, build an understanding of user characteristics, and develop a standardized methodology for detecting problematic use of social media and classifying it into intensity levels or stages[59] so that these issues can be addressed more effectively. When introducing ChatGPT, the uniqueness of problematic social media usage and of its users should be considered to avoid exacerbating problems while resolving them. The role of professional medical intervention should not be overlooked or avoided simply because of ChatGPT's capabilities. Given ChatGPT's shortcomings in complex psychological, emotional, and sociocultural matters, human therapists and other professionals should work in conjunction with ChatGPT to leverage their combined strengths in alleviating problematic social media usage.

CONCLUSION

Large language models can serve as anonymous channels for communication and venting, provide personalized information and resources to help resolve problems, and provide real-time monitoring of and intervention in problematic social media usage. Limitations, bias, privacy issues, and overreliance represent the potential risks of large language models in mitigating problematic social media use. When LLMs such as ChatGPT are used to address problematic social media usage, it is crucial to fully understand the technology's strengths and weaknesses and to strive to minimize its negative impact.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Psychiatry

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): 1

Grade C (Good): 0

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Hashmi UM, Malaysia; S-Editor: Lin C; L-Editor: A; P-Editor: Chen YX

References
1. Datareportal. Global Social Media Statistics. Oct 2023 [cited 21 November 2023]. Available from: https://datareportal.com/social-media-users
2. Gentile B, Twenge JM, Freeman EC, Campbell WK. The effect of social networking websites on positive self-views: An experimental investigation. Comput Hum Behav. 2012;28:1929-1933.
3. Drahosová M, Balco P. The analysis of advantages and disadvantages of use of social media in European Union. Procedia Comput Sci. 2017;109:1005-1009.
4. Kittinger R, Correia CJ, Irons JG. Relationship between Facebook use and problematic Internet use among college students. Cyberpsychol Behav Soc Netw. 2012;15:324-327.
5. Kuss DJ, Griffiths MD. Online social networking and addiction--a review of the psychological literature. Int J Environ Res Public Health. 2011;8:3528-3552.
6. Echeburúa E, de Corral P. [Addiction to new technologies and to online social networking in young people: A new challenge]. Adicciones. 2010;22:91-95.
7. Gosling SD, Augustine AA, Vazire S, Holtzman N, Gaddis S. Manifestations of personality in Online Social Networks: self-reported Facebook-related behaviors and observable profile information. Cyberpsychol Behav Soc Netw. 2011;14:483-488.
8. Andreassen CS, Pallesen S. Social network site addiction - an overview. Curr Pharm Des. 2014;20:4053-4061.
9. Levenson JC, Shensa A, Sidani JE, Colditz JB, Primack BA. The association between social media use and sleep disturbance among young adults. Prev Med. 2016;85:36-41.
10. Gao WJ, Hu Y, Ji JL, Liu XQ. Relationship between depression, smartphone addiction, and sleep among Chinese engineering students during the COVID-19 pandemic. World J Psychiatry. 2023;13:361-375.
11. Perez E, Donovan EK, Soto P, Sabet SM, Ravyts SG, Dzierzewski JM. Trading likes for sleepless nights: A lifespan investigation of social media and sleep. Sleep Health. 2021;7:474-477.
12. Alonzo R, Hussain J, Stranges S, Anderson KK. Interplay between social media use, sleep quality, and mental health in youth: A systematic review. Sleep Med Rev. 2021;56:101414.
13. George MJ, Russell MA, Piontak JR, Odgers CL. Concurrent and Subsequent Associations Between Daily Digital Technology Use and High-Risk Adolescents' Mental Health Symptoms. Child Dev. 2018;89:78-88.
14. Gao J, Zheng P, Jia Y, Chen H, Mao Y, Chen S, Wang Y, Fu H, Dai J. Mental health problems and social media exposure during COVID-19 outbreak. PLoS One. 2020;15:e0231924.
15. Islam MS, Sujan MSH, Tasnim R, Mohona RA, Ferdous MZ, Kamruzzaman S, Toma TY, Sakib MN, Pinky KN, Islam MR, Siddique MAB, Anter FS, Hossain A, Hossen I, Sikder MT, Pontes HM. Problematic Smartphone and Social Media Use Among Bangladeshi College and University Students Amid COVID-19: The Role of Psychological Well-Being and Pandemic Related Factors. Front Psychiatry. 2021;12:647386.
16. MediaBriefAdmin. KalaGato Report: COVID 19 Digital Impact: A Boon for Social Media? Apr 6, 2020 [cited 21 November 2023]. Available from: https://mediabrief.com/kalagato-vocid-19-digital-impact-report-part-1/
17. Statista. Growth of monthly active users of selected social media platforms worldwide from 2019 to 2021 [cited 21 November 2023]. Available from: https://www.statista.com/statistics/1219318/social-media-platforms-growth-of-mau-worldwide/
18. Kardefelt-Winther D. A conceptual and methodological critique of internet addiction research: Towards a model of compensatory internet use. Comput Hum Behav. 2014;31:351-354.
19. Liu H, Liu W, Yoganathan V, Osburg VS. COVID-19 information overload and generation Z's social media discontinuance intention during the pandemic lockdown. Technol Forecast Soc Change. 2021;166:120600.
20. Liu M, Peng W. Cognitive and psychological predictors of the negative outcomes associated with playing MMOGs (massively multiplayer online games). Comput Hum Behav. 2009;25:1306-1311.
21. Liu XQ, Guo YX, Xu Y. Risk factors and digital interventions for anxiety disorders in college students: Stakeholder perspectives. World J Clin Cases. 2023;11:1442-1457.
22. Benjamin CL, Puleo CM, Settipani CA, Brodman DM, Edmunds JM, Cummings CM, Kendall PC. History of cognitive-behavioral therapy in youth. Child Adolesc Psychiatr Clin N Am. 2011;20:179-189.
23. Hou YB, Xiong D, Jiang TL, Song LL, Wang Q. Social media addiction: Its impact, mediation, and intervention. Cyberpsychol. 2019;13.
24. Yen JY, Yen CF, Chen CC, Chen SH, Ko CH. Family factors of internet addiction and substance use experience in Taiwanese adolescents. Cyberpsychol Behav. 2007;10:323-329.
25. Karaer Y, Akdemir D. Parenting styles, perceived social support and emotion regulation in adolescents with internet addiction. Compr Psychiatry. 2019;92:22-27.
26. Orzack MH, Voluse AC, Wolf D, Hennen J. An ongoing study of group treatment for men involved in problematic Internet-enabled sexual behavior. Cyberpsychol Behav. 2006;9:348-360.
27. Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, Compas C, Martin C, Costa AB, Flores MG, Zhang Y, Magoc T, Harle CA, Lipori G, Mitchell DA, Hogan WR, Shenkman EA, Bian J, Wu Y. A large language model for electronic health records. NPJ Digit Med. 2022;5:194.
28. Huang AH, Wang H, Yang Y. FinBERT: A Large Language Model for Extracting Information from Financial Text. Contemp Account Res. 2023;40:806-841.
29. Praveen SV, Vajrobol V. Understanding the Perceptions of Healthcare Researchers Regarding ChatGPT: A Study Based on Bidirectional Encoder Representation from Transformers (BERT) Sentiment Analysis and Topic Modeling. Ann Biomed Eng. 2023;51:1654-1656.
30. Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine. 2023;90:104512.
31. Cooper G. Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence. J Sci Educ Technol. 2023;444-452.
32. Cheng SW, Chang CW, Chang WJ, Wang HW, Liang CS, Kishimoto T, Chang JP, Kuo JS, Su KP. The now and future of ChatGPT and GPT in psychiatry. Psychiatry Clin Neurosci. 2023;77:592-596.
33. Aminah S, Hidayah N, Ramli M. Considering ChatGPT to be the first aid for young adults on mental health issues. J Public Health (Oxf). 2023;45:e615-e616.
34. Eshghie M, Eshghie M. ChatGPT as a Therapist Assistant: A Suitability Study. SSRN Electronic J. 2023.
35. Lucas GM, Gratch J, King A, Morency LP. It's only a computer: Virtual humans increase willingness to disclose. Comput Hum Behav. 2014;37:94-100.
36. Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape. Can J Psychiatry. 2019;64:456-464.
37. Kuhail MA, Alturki N, Alramlawi S, Alhejori K. Interacting with educational chatbots: A systematic review. Educ Inf Technol. 2023;28:973-1018.
38. Cao XJ, Liu XQ. Artificial intelligence-assisted psychosis risk screening in adolescents: Practices and challenges. World J Psychiatry. 2022;12:1287-1297.
39. Bernhardt JM, Chaney JD, Chaney BH, Hall AK. New media for health education: a revolution in progress. Health Educ Behav. 2013;40:129-132.
40. Miner AS, Laranjo L, Kocaballi AB. Chatbots in the fight against the COVID-19 pandemic. NPJ Digit Med. 2020;3:65.
41. Van Noorden R. ChatGPT-like AIs are coming to major science search engines. Nature. 2023;620:258.
42. Liu XQ, Guo YX, Wang X. Delivering substance use prevention interventions for adolescents in educational settings: A scoping review. World J Psychiatry. 2023;13:409-422.
43. Austermann MI, Thomasius R, Paschke K. Assessing Problematic Social Media Use in Adolescents by Parental Ratings: Development and Validation of the Social Media Disorder Scale for Parents (SMDS-P). J Clin Med. 2021;10.
44. Mohr DC, Zhang M, Schueller SM. Personal Sensing: Understanding Mental Health Using Ubiquitous Sensors and Machine Learning. Annu Rev Clin Psychol. 2017;13:23-47.
45. He Y, Liang K, Han B, Chi X. A digital ally: The potential roles of ChatGPT in mental health services. Asian J Psychiatr. 2023;88:103726.
46. Eysenbach G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med Educ. 2023;9:e46885.
47. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature. 2023;614:224-226.
48. Kretzschmar K, Tyroll H, Pavarini G, Manzini A, Singh I; NeurOx Young People's Advisory Group. Can Your Phone Be Your Therapist? Young People's Ethical Perspectives on the Use of Fully Automated Conversational Agents (Chatbots) in Mental Health Support. Biomed Inform Insights. 2019;11:1178222619829083.
49. Seghier ML. ChatGPT: not all languages are equal. Nature. 2023;615:216.
50. OpenAI. Privacy policy. Nov 14, 2023 [cited 11 January 2024]. Available from: https://openai.com/policies/privacy-policy
51. Wu XD, Duan R, Ni JB. Unveiling security, privacy, and ethical concerns of ChatGPT. J Inf and Intelligence. 2023.
52. Khawaja Z, Bélisle-Pipon JC. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Front Digit Health. 2023;5:1278186.
53. Grodniewicz JP, Hohol M. Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence. Front Psychiatry. 2023;14:1190084.
54. Beatty C, Malik T, Meheli S, Sinha C. Evaluating the Therapeutic Alliance With a Free-Text CBT Conversational Agent (Wysa): A Mixed-Methods Study. Front Digit Health. 2022;4:847991.
55. Hamdoun S, Monteleone R, Bookman T, Michael K. AI-based and digital mental health apps: balancing need and risk. IEEE Technol Soc Mag. 2023;42:25-36.
56. Choi EPH, Lee JJ, Ho MH, Kwok JYY, Lok KYW. Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Educ Today. 2023;125:105796.
57. Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical Considerations of Using ChatGPT in Health Care. J Med Internet Res. 2023;25:e48009.
58. Emsley R. ChatGPT: these are not hallucinations - they're fabrications and falsifications. Schizophrenia (Heidelb). 2023;9:52.
59. Morris R, Moretta T, Potenza MN. The Psychobiology of Problematic Use of Social Media. Curr Behav Neurosci Rep. 2023;10:65-74.