Published online Jan 19, 2026. doi: 10.5498/wjp.v16.i1.110249
Revised: June 12, 2025
Accepted: October 11, 2025
Processing time: 209 Days and 21.1 Hours
Healthy behavior has long been linked to mental health outcomes. However, the role of artificial intelligence (AI) literacy in shaping healthy behaviors and its potential impact on mental health remains underexplored. This paper presents a scoping review offering a novel perspective on the intersection of healthy behaviors, AI literacy, and mental health.
Core Tip: This study highlights how artificial intelligence literacy could play a pivotal role in shaping health behaviors and mitigating mental health problems. Bridging physical health, psychological health, and digital literacy, this study argues for integrating artificial intelligence education into health promotion strategies to foster more effective and equitable healthy behaviors and mental health support systems.
- Citation: Lee J, Allen J, Choi G. Exploring artificial intelligence literacy’s role in healthy behaviors and mental health. World J Psychiatry 2026; 16(1): 110249
- URL: https://www.wjgnet.com/2220-3206/full/v16/i1/110249.htm
- DOI: https://dx.doi.org/10.5498/wjp.v16.i1.110249
Healthy behaviors - such as proper nutrition, regular exercise, and adequate sleep - have consistently been associated with reduced risks of depression and other mental health issues[1-6]. In this review, “healthy behavior” refers to lifestyle practices that align with widely accepted public health recommendations, including maintaining a balanced diet, engaging in regular physical activity, and obtaining adequate sleep.
This study employed a scoping review methodology to examine the current state of research on the interconnections among AI literacy, healthy behaviors, and mental health outcomes. The search strategy was designed to capture a broad yet focused range of studies aligned with the conceptual scope of the review. Multiple academic databases were systematically searched, including PubMed, Scopus, Web of Science, and Google Scholar, to identify relevant peer-reviewed publications from 2020 through early 2025. The search was guided by combinations of key terms such as “AI literacy”, “artificial intelligence and health”, “digital health skills”, “mental health”, “health behavior”, “technology use”, and “psychological well-being”. To ensure relevance and quality, specific inclusion criteria were applied. First, studies had to address AI-related knowledge, skills, or attitudes as they relate to individual health behaviors or mental health status. Second, eligible research needed to examine populations across various age groups, with particular interest in adult and older adult samples, given their heightened vulnerability to digital divides and health disparities. Third, studies were required to assess at least one mental health outcome - such as stress, anxiety, depression, or life satisfaction - or a behavioral health indicator such as physical activity, sleep, or digital engagement for well-being. Exclusion criteria filtered out studies lacking empirical data, theoretical grounding, or a clear connection between AI literacy and health-related outcomes; articles focused solely on technical aspects of AI without a human health dimension were also excluded. The resulting body of literature enabled a critical synthesis of how AI literacy is positioned in health contexts and identified emerging themes around its potential to influence or mediate psychological and behavioral health outcomes.
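The screening logic described above can be sketched as a simple eligibility filter. This is an illustrative sketch only: the record fields, keyword sets, and example studies below are hypothetical stand-ins and are not part of the actual review protocol.

```python
# Illustrative sketch of the inclusion/exclusion screening described above.
# All field names and example records are hypothetical, not the real protocol data.

# Outcomes accepted under the third inclusion criterion
ELIGIBLE_OUTCOMES = {
    "stress", "anxiety", "depression", "life_satisfaction",
    "physical_activity", "sleep", "digital_engagement",
}

def is_eligible(record: dict) -> bool:
    """Apply the review's stated inclusion/exclusion criteria to one study record."""
    # Peer-reviewed publications from 2020 through early 2025
    if not (2020 <= record.get("year", 0) <= 2025):
        return False
    # Inclusion 1: addresses AI-related knowledge, skills, or attitudes
    if not record.get("addresses_ai_literacy", False):
        return False
    # Inclusion 3: assesses at least one mental health or behavioral health outcome
    if not ELIGIBLE_OUTCOMES & set(record.get("outcomes", [])):
        return False
    # Exclusions: no empirical data, or purely technical AI work
    if record.get("purely_technical", False) or not record.get("empirical", False):
        return False
    return True

# Hypothetical candidate records for demonstration
studies = [
    {"year": 2022, "addresses_ai_literacy": True, "empirical": True,
     "outcomes": ["depression", "sleep"], "purely_technical": False},
    {"year": 2019, "addresses_ai_literacy": True, "empirical": True,
     "outcomes": ["anxiety"], "purely_technical": False},       # fails year range
    {"year": 2023, "addresses_ai_literacy": False, "empirical": True,
     "outcomes": ["stress"], "purely_technical": True},         # fails inclusion 1
]

included = [s for s in studies if is_eligible(s)]
print(len(included))  # only the first record passes all criteria
```

In practice such screening is done by human reviewers against titles, abstracts, and full texts; the sketch merely makes the stated criteria explicit and checkable.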
This review informs future directions for digital inclusion strategies and interventions targeting mental well-being through technological empowerment.
AI literacy encompasses both cognitive and behavioral competencies. Cognitively, it involves understanding what AI is, how it works, and the societal implications of its application[9,10]. Behaviorally, it includes the ability to interact meaningfully with AI systems - evaluating their outputs and applying them appropriately in everyday contexts.
A large body of literature supports the inverse relationship between certain behaviors and depressive symptoms. For example, a balanced diet has been linked to lower rates of depression[2-4]. Regular physical activity is also associated with improved mood regulation and lower levels of mental health problems[1,5]. Further, sleep hygiene correlates with improved affective functioning, which positively influences mental health[6]. These behaviors, however, are often not intuitive, and their adoption can be shaped by external guidance - now increasingly mediated by AI systems. The extent to which individuals can discern beneficial guidance from misleading or generalized AI-generated advice is where AI literacy becomes essential.
Modern AI tools often present outputs with high confidence but minimal context, potentially misleading users. For example, a chatbot might recommend caloric restrictions without accounting for user-specific medical conditions. Those with low AI literacy may fail to question these suggestions or understand their algorithmic basis, resulting in unintended health consequences. In contrast, AI-literate individuals are more likely to identify overly deterministic outputs, seek corroborative sources, or input more contextually relevant data[9]. Consequently, AI literacy can influence not just the uptake of digital health technologies, but the quality of decisions made through them, with downstream implications for psychological resilience and autonomy.
AI-driven technologies now assist in a wide spectrum of healthful behaviors, including physical activity, sleep tracking, stress management, and mindfulness, in addition to nutrition[11,12]. These tools typically leverage sensors, algorithms, and personalized data to nudge users toward positive behavioral choices[13,14]. For instance, smartwatches can remind users to stand or move, while AI-powered sleep apps offer customized routines to improve sleep hygiene. Yet, the utility of such features depends heavily on how users interpret and interact with these suggestions.
Individuals with low AI literacy may use these tools in a mechanistic or overly reliant way - following prompts without fully understanding their basis or adapting them to their personal contexts. This can lead to ineffective behavior change or even frustration when expected outcomes do not materialize. For example, a user might comply with recommended activity or sleep targets without considering whether those targets suit their own circumstances, and then disengage when results fall short.
In contrast, AI-literate individuals are more likely to treat algorithmic outputs as flexible guidance rather than strict rules. They are better positioned to evaluate the credibility of AI recommendations, personalize behavioral goals, and integrate technology use into a broader understanding of health[9]. This reflective engagement fosters more consistent and sustainable behavior changes, including regular physical activity, improved sleep routines, and balanced stress responses - all of which are empirically linked to improved mood and reduced depression risk[1,5,15,16]. Thus, AI literacy not only supports the technical use of tools but facilitates the development of internal motivation and adaptive health behaviors. For example, individuals with low AI literacy may engage in unhealthy behaviors by subjectively interpreting recommendations from AI tools, whereas those with high AI literacy are more likely to adopt healthy behaviors by appropriately evaluating and integrating AI-generated guidance.
Mental health is shaped not only by individual vulnerabilities and environmental stressors but also by how people interact with and make sense of the information they receive - that is, by their ability to navigate complex informational environments. AI-powered mental health tools such as mood-tracking apps and cognitive behavioral chatbots are increasingly accessible, yet their effectiveness hinges on the user’s interpretive skills[7,9,17,18]. Individuals with low AI literacy may misinterpret chatbot feedback as definitive or diagnostic, amplifying feelings of inadequacy or helplessness when results do not match expectations or when feedback appears overly scripted. Moreover, low AI literacy can lead to passive or disengaged interactions with these tools - users may apply AI in inappropriate ways, fail to personalize settings, abandon use after unsatisfactory outcomes, or ignore privacy-related risks[9]. However, even individuals with high AI literacy are not immune to risk. Sophisticated users may experience increased anxiety due to constant self-monitoring, or develop unhealthy perfectionistic standards informed by AI-generated feedback. Moreover, algorithmic bias can result in inaccurate or harmful recommendations despite critical engagement. High literacy does not fully mitigate concerns around data privacy either, as even informed users may be vulnerable to opaque data practices embedded in these systems.
Public health systems and educational institutions must recognize AI literacy as a determinant of digital health equity. Curricula that include AI concepts - such as algorithmic bias, data privacy, and human-AI interaction - should be integrated into education and health promotion programs to foster informed digital health engagement.
Research should explore how AI literacy interacts with psychological factors such as perceived self-efficacy, locus of control, and health motivation. Large-scale surveys and mixed-methods studies could illuminate demographic disparities in AI literacy and its behavioral consequences. Intervention studies could evaluate the effectiveness of AI literacy training modules in improving health decision quality and mental health metrics. Collaborative efforts among technologists, clinicians, educators, and policymakers are critical to foster a supportive ecosystem in which AI enhances human physical and psychological health.
AI literacy is emerging as a key enabler of proactive and sustainable health behavior. As individuals increasingly interact with AI-enabled tools that guide dietary choices, physical activity, sleep routines, and stress management, their capacity to comprehend and contextualize algorithmic outputs can determine whether these interactions foster positive or detrimental habits. Rather than reiterating the presence of AI in health contexts, this review emphasizes how AI literacy functions as a bridge between intention and action. Those with sufficient AI literacy are not only more likely to critically evaluate recommendations, but also to integrate them meaningfully into daily routines, enhancing self-regulation and reducing vulnerability to mental health challenges such as depression. AI literacy enhances digital agency, equipping users to balance guidance with personal values and contextual needs, which is especially important in managing mental health over time. To further support the development of AI literacy and reinforce user agency, the role of human factors in the design and deployment of AI-enabled health tools warrants continued attention.
| Key concept | Academic summary |
| --- | --- |
| AI literacy | Refers to a set of cognitive and behavioral competencies enabling individuals to understand, critically evaluate, and effectively engage with AI technologies |
| Influence on health behaviors | Individuals with high AI literacy are more likely to make autonomous, context-sensitive decisions regarding nutrition, physical activity, and sleep hygiene |
| Implications for mental health | Higher AI literacy facilitates sustained engagement with digital mental health tools, promotes psychological resilience, and reduces vulnerability to depression |
| Interpretation of AI outputs | AI-literate users are better equipped to question overly deterministic or context-deficient algorithmic recommendations, thereby avoiding maladaptive outcomes |
| Digital health equity | AI literacy functions as a determinant of equitable access to and benefit from health technologies, necessitating support for digitally marginalized populations |
| Educational imperatives | Curricular integration of AI-related content, including algorithmic bias and ethical implications, is critical for fostering informed digital health engagement |
| Policy considerations | Emphasizes the importance of explainable AI, inclusive user-centered design, and community-based initiatives to build digital competence at scale |
1. Chekroud SR, Gueorguieva R, Zheutlin AB, Paulus M, Krumholz HM, Krystal JH, Chekroud AM. Association between physical exercise and mental health in 1·2 million individuals in the USA between 2011 and 2015: a cross-sectional study. Lancet Psychiatry. 2018;5:739-746.
2. Ekinci GN, Sanlier N. The relationship between nutrition and depression in the life process: A mini-review. Exp Gerontol. 2023;172:112072.
3. Grajek M, Krupa-Kotara K, Białek-Dratwa A, Sobczyk K, Grot M, Kowalski O, Staśkiewicz W. Nutrition and mental health: A review of current knowledge about the impact of diet on mental health. Front Nutr. 2022;9:943998.
4. Lang UE, Beglinger C, Schweinfurth N, Walter M, Borgwardt S. Nutritional aspects of depression. Cell Physiol Biochem. 2015;37:1029-1043.
5. Mikkelsen K, Stojanovska L, Polenakovic M, Bosevski M, Apostolopoulos V. Exercise and mental health. Maturitas. 2017;106:48-56.
6. Yasugaki S, Okamura H, Kaneko A, Hayashi Y. Bidirectional relationship between sleep and depression. Neurosci Res. 2025;211:57-64.
7. Greenhill AT. AI-Enabled Consumer-Facing Health Technology. In: Byrne MF, Parsa N, Greenhill AT, Chahal D, Ahmad O, Bagci U, editors. AI in Clinical Medicine. Hoboken: John Wiley & Sons, Inc, 2023: 407-425.
8. Mullankandy DS, Kazmi I, Islam T, Phia WJ. Emerging Trends in AI-Driven Health Tech: A Comprehensive Review and Future Prospects. Eur J Technol. 2024;8:25-40.
9. Long D, Magerko B. What is AI Literacy? Competencies and Design Considerations. In: Bernhaupt R, Mueller F, editors. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; Apr 25-30; Honolulu, HI, United States. New York: ACM, 2020: 1-16.
10. Ng DTK, Leung JKL, Chu SKW, Qiao MS. Conceptualizing AI literacy: An exploratory review. Comput Educ Artif Intell. 2021;2:100041.
11. Olawade DB, Wada OZ, Odetayo A, David-Olawade AC, Asaolu F, Eberhardt J. Enhancing mental health with Artificial Intelligence: Current trends and future prospects. J Med Surg Public Health. 2024;3:100099.
12. Su Z, Figueiredo MC, Jo J, Zheng K, Chen Y. Analyzing Description, User Understanding and Expectations of AI in Mobile Health Applications. AMIA Annu Symp Proc. 2020;2020:1170-1179.
13. Li YH, Li YL, Wei MY, Li GY. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci Rep. 2024;14:18994.
14. Shajari S, Kuruvinashetti K, Komeili A, Sundararaj U. The Emergence of AI-Based Wearable Sensors for Digital Health Technology: A Review. Sensors (Basel). 2023;23:9498.
15. Orzechowska A, Bliźniewska-Kowalska K, Gałecki P, Szulc A, Płaza O, Su KP, Georgescu D, Gałecka M. Ways of Coping with Stress among Patients with Depressive Disorders. J Clin Med. 2022;11:6500.
16. Pandi-Perumal SR, Monti JM, Burman D, Karthikeyan R, BaHammam AS, Spence DW, Brown GM, Narashimhan M. Clarifying the role of sleep in depression: A narrative review. Psychiatry Res. 2020;291:113239.
17. Anisha SA, Sen A, Bain C. Evaluating the Potential and Pitfalls of AI-Powered Conversational Agents as Humanlike Virtual Health Carers in the Remote Management of Noncommunicable Diseases: Scoping Review. J Med Internet Res. 2024;26:e56114.
