Minireviews Open Access
Copyright ©The Author(s) 2026. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Psychiatry. Jan 19, 2026; 16(1): 110249
Published online Jan 19, 2026. doi: 10.5498/wjp.v16.i1.110249
Exploring artificial intelligence literacy’s role in healthy behaviors and mental health
Jaewon Lee, Department of Social Welfare, Inha University, Incheon 22212, South Korea
Jennifer Allen, School of Social Work, Michigan State University, East Lansing, MI 48824, United States
Gyuhyun Choi, Integrative Arts Therapy, Dongduk Women’s University, Seoul 02748, South Korea
ORCID number: Jaewon Lee (0000-0002-8479-4586).
Author contributions: Lee J, Allen J, and Choi G contributed to editorial changes in the manuscript and approved the final manuscript.
Conflict-of-interest statement: All the authors report no relevant conflicts of interest for this article.
Open Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Jaewon Lee, PhD, Assistant Professor, Department of Social Welfare, Inha University, Inha-ro 100, Incheon 22212, South Korea. j343@inha.ac.kr
Received: June 5, 2025
Revised: June 12, 2025
Accepted: October 11, 2025
Published online: January 19, 2026
Processing time: 209 Days and 21.1 Hours

Abstract

Healthy behavior has long been linked to mental health outcomes. However, the role of artificial intelligence (AI) literacy in shaping healthy behaviors and its potential impact on mental health remain underexplored. This paper presents a scoping review offering a novel perspective on the intersection of healthy behaviors, mental health, and AI literacy. By examining how individuals’ understanding of AI influences their choices regarding nutrition and their susceptibility to mental health issues, the current study explores emerging trends in health behavior decision-making. The findings emphasize the need to integrate AI literacy into mental health and health behavior education, and to develop AI-driven tools that support healthier behavior choices. The review highlights that individuals with low AI literacy may misinterpret or overly depend on AI guidance, resulting in maladaptive health choices, while those with high AI literacy may be more likely to engage reflectively and sustain positive behaviors. The paper outlines the importance of inclusive education, user-centered design, and community-based support systems to enhance AI literacy for digitally marginalized groups. AI literacy may thus be positioned as a key determinant of health equity, enabling interdisciplinary strategies that empower individuals to make informed, autonomous decisions that promote both physical and mental health.

Key Words: Artificial intelligence literacy; Mental health; Healthy behavior; Digital health education; Technology acceptance

Core Tip: This study highlights how artificial intelligence literacy could play a pivotal role in shaping health behaviors and mitigating mental health problems. Bridging physical and psychological health with digital literacy, this study argues for integrating artificial intelligence education into health promotion strategies to foster more effective and equitable healthy behaviors and mental health support systems.



INTRODUCTION

Healthy behaviors - such as proper nutrition, regular exercise, and adequate sleep - have consistently been associated with reduced risks of depression and other mental health issues[1-6]. In this review, “healthy behavior” refers to lifestyle practices that align with widely accepted public health recommendations, including maintaining a balanced diet, engaging in regular physical activity, and getting sufficient restorative sleep. Despite this, adoption of these behaviors remains inconsistent. With the digital transformation of health services, artificial intelligence (AI) is increasingly present in consumer-facing health technologies[7,8]. From diet-tracking apps to mental health chatbots, AI interfaces are positioned to influence personal health decisions. However, the efficacy of these technologies depends not solely on access, but also on users’ ability to understand and navigate them. This review examines the conceptual and practical implications of AI literacy - defined as the capacity to understand, evaluate, and responsibly use AI tools - in influencing health-related behavior and psychological outcomes, especially depression. To achieve this, the review integrates empirical findings to analyze how AI literacy impacts behavioral health outcomes and to identify both enabling mechanisms and potential risks associated with AI-mediated health decisions.

REVIEW PROCESS

This study employed a systematic review methodology to examine the current state of research on the interconnections among AI literacy, healthy behaviors, and mental health outcomes. The review strategy was designed to capture a broad yet focused range of studies that align with the conceptual scope of the research. Multiple academic databases were systematically searched, including PubMed, Scopus, Web of Science, and Google Scholar, to identify relevant peer-reviewed publications from 2020 through early 2025. The search was guided by combinations of key terms such as “AI literacy”, “artificial intelligence and health”, “digital health skills”, “mental health”, “health behavior”, “technology use”, and “psychological well-being”.

To ensure relevance and quality, specific inclusion criteria were applied. First, studies had to address AI-related knowledge, skills, or attitudes as they relate to individual health behaviors or mental health status. Second, eligible research needed to examine populations across various age groups, with particular interest in adult and older adult samples, given their increased vulnerability to digital divides and health disparities. Third, studies were required to assess at least one mental health outcome - such as stress, anxiety, depression, or life satisfaction - or a behavioral health indicator such as physical activity, sleep, or digital engagement for well-being. Exclusion criteria filtered out studies lacking empirical data, theoretical grounding, or a clear connection between AI literacy and health-related outcomes; articles focused solely on technical aspects of AI without a human health dimension were also excluded.

The resulting body of literature enabled a critical synthesis of how AI literacy is positioned in health contexts and identified emerging themes around its potential to influence or mediate psychological and behavioral health outcomes. This review informs future directions for digital inclusion strategies and interventions targeting mental well-being through technological empowerment.
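To make the screening logic concrete, the sketch below illustrates how the stated search terms and inclusion criteria could be operationalized in code. It is an illustrative reconstruction rather than the authors’ actual protocol; the record fields and helper names are assumptions introduced for demonstration.

# Illustrative sketch (Python), not the authors' actual screening pipeline.
# Record fields (year, addresses_ai_literacy, etc.) are hypothetical.

AI_TERMS = ['"AI literacy"', '"artificial intelligence and health"', '"digital health skills"']
HEALTH_TERMS = ['"mental health"', '"health behavior"', '"technology use"', '"psychological well-being"']

def build_query() -> str:
    """Combine the key terms above into a Boolean search string."""
    return f"({' OR '.join(AI_TERMS)}) AND ({' OR '.join(HEALTH_TERMS)})"

def meets_inclusion(record: dict) -> bool:
    """Apply the inclusion/exclusion criteria described in the review process."""
    in_window = 2020 <= record["year"] <= 2025                      # publication window
    addresses_ai = record["addresses_ai_literacy"]                  # AI knowledge, skills, or attitudes
    has_outcome = record["mental_health_outcome"] or record["behavioral_indicator"]
    grounded = record["empirical_or_theoretical"]                   # excludes purely technical AI papers
    return in_window and addresses_ai and has_outcome and grounded

print(build_query())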

AI LITERACY

AI literacy encompasses both cognitive and behavioral competencies. Cognitively, it involves understanding what AI is, how it works, and the societal implications of its application[9,10]. Behaviorally, it includes the ability to interact meaningfully with AI systems, apply critical thinking in interpreting their outputs, and make autonomous decisions[9]. These abilities are increasingly critical as AI becomes embedded in mobile health tools, digital therapeutics, and wearable technologies. A lack of AI literacy may hinder individuals from fully leveraging health technologies or lead them to rely on AI guidance uncritically[9]. AI literacy thus lies at the intersection of health literacy, digital literacy, and ethical literacy.

HEALTHY BEHAVIORS AND MENTAL HEALTH

A large body of literature supports an inverse relationship between healthy behaviors and depressive symptoms. For example, a balanced diet has been linked to lower rates of depression[2-4]. Regular physical activity is also associated with improved mood regulation and lower levels of mental health problems[1,5]. Further, good sleep hygiene correlates with improved affective functioning, which positively influences mental health[6]. These behaviors, however, are often not intuitive, and their adoption can be shaped by external guidance - now increasingly mediated by AI systems. The extent to which individuals can discern beneficial guidance from misleading or generalized AI-generated advice is where AI literacy becomes essential.

THE ROLE OF AI LITERACY IN HEALTH DECISION-MAKING

Modern AI tools often present outputs with high confidence but minimal context, potentially misleading users. For example, a chatbot might recommend caloric restriction without accounting for user-specific medical conditions. Those with low AI literacy may fail to question these suggestions or understand their algorithmic basis, resulting in unintended health consequences. In contrast, AI-literate individuals are more likely to identify overly deterministic outputs, seek corroborative sources, or input more contextually relevant data[9]. Consequently, AI literacy can influence not just the uptake of digital health technologies, but the quality of decisions made through them, with downstream implications for psychological resilience and autonomy.
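A minimal sketch can make the chatbot example concrete. The functions, thresholds, and the 25 kcal/kg figure below are invented for illustration only; they contrast a context-blind recommendation delivered with full confidence against the same logic once user-specific conditions are checked.

# Hypothetical illustration of a context-blind vs. context-aware recommendation.
# All field names and numeric values are assumptions for demonstration.

def naive_calorie_target(weight_kg: float) -> str:
    # Deterministic output with no context: advice a low-AI-literacy user
    # might follow without question.
    return f"Consume {int(weight_kg * 25)} kcal/day."

def guarded_calorie_target(weight_kg: float, conditions: list[str]) -> str:
    # The same estimate, flagged as provisional when relevant medical
    # context is present - closer to how an AI-literate user would treat it.
    target = int(weight_kg * 25)
    if conditions:
        return (f"Estimated {target} kcal/day, but {', '.join(conditions)} "
                "warrants clinician input before restricting intake.")
    return f"Estimated {target} kcal/day (general guidance only)."

print(naive_calorie_target(70.0))
print(guarded_calorie_target(70.0, ["type 1 diabetes"]))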

AI LITERACY AND HEALTHY BEHAVIORAL PATTERNS

AI-driven technologies now assist in a wide spectrum of healthful behaviors, including physical activity, sleep tracking, stress management, and mindfulness, in addition to nutrition[11,12]. These tools typically leverage sensors, algorithms, and personalized data to nudge users toward positive behavioral choices[13,14]. For instance, smartwatches can remind users to stand or move, while AI-powered sleep apps offer customized routines to improve sleep hygiene. Yet, the utility of such features depends heavily on how users interpret and interact with these suggestions.
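The nudging logic such tools rely on is typically a simple threshold rule over sensor data. The sketch below shows one possible form of a stand reminder; the step threshold, active hours, and field names are assumptions, not drawn from any cited product.

# Minimal sketch of a sensor-driven nudge rule; values are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HourlyActivity:
    steps: int   # steps recorded in the past hour
    hour: int    # hour of day, 0-23

def stand_reminder(activity: HourlyActivity, step_threshold: int = 250) -> Optional[str]:
    """Return a nudge message when recent movement falls below the threshold."""
    if 8 <= activity.hour <= 22 and activity.steps < step_threshold:
        return "You have been inactive this hour - consider a short walk."
    return None

print(stand_reminder(HourlyActivity(steps=40, hour=14)))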

Individuals with low AI literacy may use these tools in a mechanistic or overly reliant way - following prompts without fully understanding their basis or adapting them to their personal contexts. This can lead to ineffective behavior change or even frustration when expected outcomes do not materialize. For example, a user might comply with recommended exercise prompts but feel disheartened when this does not immediately lead to weight loss or mood improvement, resulting in abandonment of healthy routines and lowered self-efficacy.

In contrast, AI-literate individuals are more likely to treat algorithmic outputs as flexible guidance rather than strict rules. They are better positioned to evaluate the credibility of AI recommendations, personalize behavioral goals, and integrate technology use into a broader understanding of health[9]. This reflective engagement fosters more consistent and sustainable behavior changes, including regular physical activity, improved sleep routines, and balanced stress responses - all of which are empirically linked to improved mood and reduced depression risk[1,5,15,16]. Thus, AI literacy not only supports the technical use of tools but also facilitates the development of internal motivation and adaptive health behaviors. In short, individuals with low AI literacy may drift into unhealthy behaviors by interpreting AI recommendations idiosyncratically, whereas those with high AI literacy are more likely to adopt healthy behaviors by appropriately evaluating and integrating AI-generated guidance.

MENTAL HEALTH AND DIGITAL HEALTH NAVIGATION

Mental health is shaped not only by individual vulnerabilities and environmental stressors, but also by how people interpret the information they receive and by their ability to navigate complex informational environments. AI-powered mental health tools such as mood-tracking apps and cognitive behavioral chatbots are increasingly accessible, yet their effectiveness hinges on the user’s interpretive skills[7,9,17,18]. Individuals with low AI literacy may misinterpret chatbot feedback as definitive or diagnostic, amplifying feelings of inadequacy or helplessness when results do not match expectations or when feedback appears overly scripted. Moreover, low AI literacy can lead to passive or disengaged interactions with these tools - users may apply AI inappropriately, fail to personalize settings, abandon use after unsatisfactory outcomes, or overlook privacy-related risks[9]. This disengagement can prevent the consistent behavioral patterns (e.g., journaling, reflection, goal tracking) that support therapeutic progress.

However, even individuals with high AI literacy are not immune to risk. Sophisticated users may experience increased anxiety due to constant self-monitoring, or develop unhealthy perfectionistic standards informed by AI-generated feedback. Moreover, algorithmic bias can result in inaccurate or harmful recommendations despite critical engagement. Nor does high literacy fully mitigate concerns around data privacy, as even informed users may be vulnerable to opaque data practices embedded in AI systems. To mitigate these risks, high-literacy users can adopt strategies such as setting boundaries on technology use through “digital detox” periods, seeking periodic clinician oversight to contextualize AI-generated insights, and balancing data-driven feedback with subjective well-being measures. Importantly, the use of AI in mental health should not be equated with clinical diagnosis or treatment. Ethical limitations, including the risk of misdiagnosis, lack of empathy, and limited accountability, highlight the need to clearly delineate the role of AI tools as supportive of - rather than substitutive for - professional mental health care.

In contrast to disengaged users, AI-literate individuals can navigate these systems with a clearer understanding of their limitations, using them as supplemental tools while actively monitoring their own emotional responses. By combining technical understanding with protective habits, high-literacy users can maintain psychological balance while still benefiting from AI-assisted mental health tools. Higher AI literacy can support better adherence to mental health interventions, enhanced help-seeking behavior, and greater continuity in self-care routines. These behavioral changes are protective against the progression of depressive symptoms, particularly in populations with limited access to traditional mental health services. Therefore, improving AI literacy is not only a matter of technology use - it may influence psychological resilience and long-term behavioral health.

IMPLICATIONS FOR EDUCATION AND POLICY

Public health systems and educational institutions must recognize AI literacy as a determinant of digital health equity. Curricula that include AI concepts - such as algorithmic bias, data privacy, and human-AI interaction - should be integrated into school programs, adult education, and community health workshops. Policies promoting transparency in AI design (e.g., explainable AI) and inclusive user testing must also be prioritized. Moreover, funding for community-based digital literacy programs should explicitly address the health applications of AI, targeting populations at risk of digital exclusion, including older adults, lower-income groups, and individuals with limited formal education.

FUTURE DIRECTIONS AND RECOMMENDATIONS

Research should explore how AI literacy interacts with psychological factors such as perceived self-efficacy, locus of control, and health motivation. Large-scale surveys and mixed-methods studies could illuminate demographic disparities in AI literacy and its behavioral consequences. Intervention studies could evaluate the effectiveness of AI literacy training modules in improving health decision quality and mental health metrics. Collaborative efforts among technologists, clinicians, educators, and policymakers are critical to foster a supportive ecosystem in which AI enhances both physical and psychological health.

CONCLUSION

AI literacy is emerging as a key enabler of proactive and sustainable health behavior. As individuals increasingly interact with AI-enabled tools that guide dietary choices, physical activity, sleep routines, and stress management, their capacity to comprehend and contextualize algorithmic outputs can determine whether these interactions foster positive or detrimental habits. Rather than reiterating the presence of AI in health contexts, this review emphasizes how AI literacy functions as a bridge between intention and action. Those with sufficient AI literacy are not only more likely to critically evaluate recommendations, but also to integrate them meaningfully into daily routines, enhancing self-regulation and reducing vulnerability to mental health challenges such as depression. AI literacy enhances digital agency, equipping users to balance guidance with personal values and contextual needs, which is especially important in managing mental health over time.

To further support the development of AI literacy and reinforce user agency, human facilitators - such as health coaches, digital navigators, or community educators - play an essential role. These individuals can provide personalized assistance, helping users not only interpret AI outputs more effectively but also build the confidence to engage with technology critically and autonomously. For users who may be less familiar or comfortable with AI technologies, lowering entry barriers is crucial. Developers should prioritize intuitive, user-friendly interfaces with clear explanations of how AI-generated recommendations are formed; tooltips, tutorials, and layered levels of guidance can make these tools more accessible. Public health campaigns and educational programs should also offer practical workshops and real-world demonstrations to demystify AI use in everyday health decisions. By fostering safe, supportive learning environments, individuals who might otherwise be excluded can begin to engage with digital tools at their own pace.

In sum, AI literacy is not an auxiliary digital skill, but a foundational competency that supports informed engagement with modern health technologies. Cultivating this literacy through education, inclusive design, and policy attention will be essential to empower individuals to navigate AI-rich environments while sustaining both physical and mental well-being. Table 1 provides an overview of this paper in relation to AI literacy’s role.

Table 1 Framework of artificial intelligence literacy’s role in healthy behaviors and mental health.
Key concept | Academic summary
AI literacy | Refers to a set of cognitive and behavioral competencies enabling individuals to understand, critically evaluate, and effectively engage with AI technologies
Influence on health behaviors | Individuals with high AI literacy are more likely to make autonomous, context-sensitive decisions regarding nutrition, physical activity, and sleep hygiene
Implications for mental health | Higher AI literacy facilitates sustained engagement with digital mental health tools, promotes psychological resilience, and reduces vulnerability to depression
Interpretation of AI outputs | AI-literate users are better equipped to question overly deterministic or context-deficient algorithmic recommendations, thereby avoiding maladaptive outcomes
Digital health equity | AI literacy functions as a determinant of equitable access to and benefit from health technologies, necessitating support for digitally marginalized populations
Educational imperatives | Curricular integration of AI-related content, including algorithmic bias and ethical implications, is critical for fostering informed digital health engagement
Policy considerations | Emphasizes the importance of explainable AI, inclusive user-centered design, and community-based initiatives to build digital competence at scale
Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Psychiatry

Country of origin: South Korea

Peer-review report’s classification

Scientific Quality: Grade B, Grade B

Novelty: Grade A, Grade A

Creativity or Innovation: Grade A, Grade A

Scientific Significance: Grade B, Grade B

P-Reviewer: Yan J, Chief Physician, Full Professor, China; S-Editor: Bai Y; L-Editor: A; P-Editor: Zhang L

References
1. Chekroud SR, Gueorguieva R, Zheutlin AB, Paulus M, Krumholz HM, Krystal JH, Chekroud AM. Association between physical exercise and mental health in 1·2 million individuals in the USA between 2011 and 2015: a cross-sectional study. Lancet Psychiatry. 2018;5:739-746.
2. Ekinci GN, Sanlier N. The relationship between nutrition and depression in the life process: A mini-review. Exp Gerontol. 2023;172:112072.
3. Grajek M, Krupa-Kotara K, Białek-Dratwa A, Sobczyk K, Grot M, Kowalski O, Staśkiewicz W. Nutrition and mental health: A review of current knowledge about the impact of diet on mental health. Front Nutr. 2022;9:943998.
4. Lang UE, Beglinger C, Schweinfurth N, Walter M, Borgwardt S. Nutritional aspects of depression. Cell Physiol Biochem. 2015;37:1029-1043.
5. Mikkelsen K, Stojanovska L, Polenakovic M, Bosevski M, Apostolopoulos V. Exercise and mental health. Maturitas. 2017;106:48-56.
6. Yasugaki S, Okamura H, Kaneko A, Hayashi Y. Bidirectional relationship between sleep and depression. Neurosci Res. 2025;211:57-64.
7. Greenhill AT. AI-Enabled Consumer-Facing Health Technology. In: Byrne MF, Parsa N, Greenhill AT, Chahal D, Ahmad O, Bagci U, editors. AI in Clinical Medicine. Hoboken: John Wiley & Sons, Inc, 2023: 407-425.
8. Mullankandy DS, Kazmi I, Islam T, Phia WJ. Emerging Trends in AI-Driven Health Tech: A Comprehensive Review and Future Prospects. Eur J Technol. 2024;8:25-40.
9. Long D, Magerko B. What is AI Literacy? Competencies and Design Considerations. In: Bernhaupt R, Mueller F, editors. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; Apr 25-30; Honolulu, HI, United States. New York: ACM Digital Library, 2020: 1-16.
10. Ng DTK, Leung JKL, Chu SKW, Qiao MS. Conceptualizing AI literacy: An exploratory review. Comput Educ Artif Intell. 2021;2:100041.
11. Olawade DB, Wada OZ, Odetayo A, David-olawade AC, Asaolu F, Eberhardt J. Enhancing mental health with Artificial Intelligence: Current trends and future prospects. J Med Surg Public Health. 2024;3:100099.
12. Su Z, Figueiredo MC, Jo J, Zheng K, Chen Y. Analyzing Description, User Understanding and Expectations of AI in Mobile Health Applications. AMIA Annu Symp Proc. 2020;2020:1170-1179.
13. Li YH, Li YL, Wei MY, Li GY. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci Rep. 2024;14:18994.
14. Shajari S, Kuruvinashetti K, Komeili A, Sundararaj U. The Emergence of AI-Based Wearable Sensors for Digital Health Technology: A Review. Sensors (Basel). 2023;23:9498.
15. Orzechowska A, Bliźniewska-Kowalska K, Gałecki P, Szulc A, Płaza O, Su KP, Georgescu D, Gałecka M. Ways of Coping with Stress among Patients with Depressive Disorders. J Clin Med. 2022;11:6500.
16. Pandi-Perumal SR, Monti JM, Burman D, Karthikeyan R, BaHammam AS, Spence DW, Brown GM, Narashimhan M. Clarifying the role of sleep in depression: A narrative review. Psychiatry Res. 2020;291:113239.
17. Anisha SA, Sen A, Bain C. Evaluating the Potential and Pitfalls of AI-Powered Conversational Agents as Humanlike Virtual Health Carers in the Remote Management of Noncommunicable Diseases: Scoping Review. J Med Internet Res. 2024;26:e56114.
18. Tully SM, Longoni C, Appel G. Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity. J Mark. 2025;89:1-20.