# Enhancing Mental Healthcare with AI: Ethical Challenges and Policy Recommendations

[Rachna Saralkar, MD, MS](https://www.linkedin.com/in/rachna-saralkar-md/); [Dominic Sisti, PhD](https://medicalethicshealthpolicy.med.upenn.edu/faculty-all/dominic-sisti)

*We gratefully acknowledge [Nigam Shah, MBBS, PhD](https://profiles.stanford.edu/nigam-shah) and [Clinton Castro, PhD](https://sites.google.com/site/clintongmcastro/home?authuser=0) for their valuable insights and guidance that contributed to the development of this article.*

## Abstract

[[Generative vs. Traditional AI|Artificial intelligence (AI)]] can significantly improve mental healthcare; however, its integration into mental healthcare also brings ethical challenges that demand careful examination of their consequences. To support broad and effective implementation of these novel technologies, this article offers users, developers, and policymakers resources to deepen their understanding of the ethical frameworks, applications, and accountability considerations surrounding AI in mental healthcare. Policy recommendations, presented in [[Enhancing Mental Health Care with AI#Table 1 Final Recommendations for AI-tool Implementation unique to Mental Healthcare|Table 1]], balance algorithmic fairness and non-maleficence to [[Deontology#Negative and Positive Duties|avoid harm]] with patient rights, beneficence, and respect for autonomy in order to maximize benefits.

## Introduction

[[Generative vs. Traditional AI|Artificial intelligence (AI)]] offers significant potential to enhance the accessibility and personalization of mental healthcare. AI could help address major deficits in the current system by improving access, increasing operational efficiency, reducing clinician burnout, improving diagnostic accuracy, optimizing treatment selection, and enabling real-world monitoring of symptoms and behaviors[^thakkar]. Advances in AI for measuring and analyzing biometric, behavioral, emotional, cognitive, and psychological aspects of daily life are rapidly becoming more diverse, accurate, and accessible. AI is poised to revolutionize our approach to health, wellness, and chronic disease management. However, the integration of AI into mental healthcare also brings ethical challenges that demand careful examination of their consequences.

There are already robust discussions of AI in healthcare broadly, including [the regulatory landscape of AI](https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2023/11/BPC-Health_AI-Public-Health_R02.pdf) and [[Ethical considerations for Gen AI|generative AI]], prompted by the recent meteoric rise of large language models (LLMs). Rather than reiterating this well-documented landscape, this article examines the applications and challenges unique to AI in ***mental healthcare***[^genhealth]. It surveys AI applications in mental healthcare through the lens of foundational moral theories, including Kantian [[Deontology|deontology]] and classical [[Utilitarianism for Inseparable|utilitarianism]], while also incorporating contemporary approaches such as [[Principle-based bioethical theories|principle-based]] and [[Casuistry in Mental Health and AI|casuistic (case-based)]] theories.

**Five fundamental challenges are addressed:**

1. Establishing ground truth and correcting bias
2. Obtaining informed consent
3. Upholding privacy, preventing misuse, and mitigating unintended consequences
4. Balancing access to and quality of autonomous agents
5. Creating a value-based reimbursement system
## The Five Fundamental Challenges of AI in Mental Healthcare

### Challenge 1: Establishing ground truth and correcting bias

Mental health diagnosis and treatment face challenges due to the subjective aspects of the field, i.e., the lack of [[Ground truth MH|ground truth]]. Currently, diagnoses primarily rely on self-reported symptoms due to the absence of widely accepted biological markers for psychiatric conditions. Furthermore, the epidemiological and diagnostic datasets that are available are likely not fully representative of all demographic groups: machine learning (ML) models are built on labeled datasets, which have been repeatedly shown to inadequately represent diverse demographic groups[^research]. This is because clinical psychiatric research has historically excluded marginalized populations and individuals with severe mental illness or suicidal risk, resulting in [[Data Bias MH AI|biased outcomes]][^sociodemo][^race]. In mental health, this can manifest as diagnostic or treatment recommendations that are inaccurate or harmful for certain populations, reinforcing existing health disparities.

##### Ethical Issues: Data Bias & Algorithmic Transparency

Patient-reported outcome measures offer valuable insights, but our reliance on subjective data, coupled with difficulties in standardizing objective biomarkers for mental health, introduces variability into the field. This core issue with the data creates a significant challenge for AI applications in mental health: the variability and inconsistency in these "ground truths" cannot be resolved through model adjustments or training techniques. Instead, the effectiveness of AI tools in this area is largely determined by the quality and standardization of the input data.

Historical AI medical datasets and trials have shown a [long-standing issue with bias](https://minorityhealth.hhs.gov/news/shedding-light-healthcare-algorithmic-and-artificial-intelligence-bias). A classic example is an algorithm used by Optum to assign patient risk levels. The algorithm systematically under-risked Black patients compared to White patients with similar health conditions. Importantly, the researchers found that this bias stemmed not from the algorithm's development, but from "label bias" in the training data[^racialbias]. This exemplifies the need for developers to show that their algorithms are fair. Problems of [[Algorithmic Fairness|fairness]] in mental health datasets for AI/ML stem from potential biases in data collection and representation, which can lead to algorithmic discrimination against underrepresented or marginalized groups in mental health diagnosis, treatment recommendations, or resource allocation.

##### Policy Recommendations: Incentivize Quality MH Datasets and Algorithmic Governance and Fairness

Many guidelines already exist and are generally consistent in calling for AI that is fair, appropriate, valid, effective, and safe (FAVES)[^faves], as well as transparent, explainable, and inspectable. When institutions confront [[Algorithmic Fairness|fairness]] issues in deployed algorithms, they typically have three options: (1) implement an algorithmic fix, (2) maintain [[Human In the Loop|human oversight]] to ensure fair treatment across subgroups, or (3) discontinue use of the algorithm[^fairness]. Decision-makers should consult experts and carefully weigh these options, considering past cases and context-specific factors. This approach aligns with [[Casuistry in Mental Health and AI|casuistic reasoning]], allowing for nuanced, case-by-case decisions that balance technological benefits with ethical concerns.
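Whichever option an institution leans toward, it first needs a concrete way to see whether a deployed model performs differently across subgroups. The sketch below is a minimal, illustrative audit of a risk model's miss rate by demographic group on a labeled validation set; the record fields (`group`, `outcome`, `model_flag`) and the 0.05 gap threshold are assumptions for illustration, and, as the Optum example shows, such a check cannot detect bias that is baked into the outcome labels themselves.

```python
"""Illustrative subgroup audit of a deployed risk model (hypothetical data and thresholds)."""
from collections import defaultdict

def false_negative_rate_by_group(records):
    """Return {group: FNR}, where FNR = missed high-risk cases / all labeled high-risk cases."""
    missed, positives = defaultdict(int), defaultdict(int)
    for r in records:
        if r["outcome"] == 1:                # labeled high-risk in the validation set
            positives[r["group"]] += 1
            if r["model_flag"] == 0:         # model failed to flag this person
                missed[r["group"]] += 1
    return {g: missed[g] / positives[g] for g in positives}

def flag_fairness_gaps(records, max_gap=0.05):
    """List group pairs whose miss rates differ by more than max_gap (assumed tolerance)."""
    fnr = false_negative_rate_by_group(records)
    groups = sorted(fnr)
    return [(a, b, abs(fnr[a] - fnr[b]))
            for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(fnr[a] - fnr[b]) > max_gap]

# Hypothetical validation records exported from the EHR.
sample = [
    {"group": "A", "outcome": 1, "model_flag": 1},
    {"group": "A", "outcome": 1, "model_flag": 0},
    {"group": "B", "outcome": 1, "model_flag": 0},
    {"group": "B", "outcome": 1, "model_flag": 0},
]
print(false_negative_rate_by_group(sample))  # {'A': 0.5, 'B': 1.0}
print(flag_fairness_gaps(sample))            # [('A', 'B', 0.5)]
```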
In mental health, the first step toward simplifying the evaluation of fairness is to ensure that the initial dataset is sufficiently diverse to represent the intended population. There is a critical [[Need for quality data sets|need for high-quality datasets]], as existing electronic health record (EHR) data is insufficient for properly training AI. This insufficiency stems from inconsistent reporting, limited structured data, and a lack of standardization in documenting mental health-specific data. To address these issues, we need to improve EHR data collection through standardized patient-reported outcomes, structured fields for key metrics, and designs that accommodate future data types. Regulatory incentives for creating and using high-quality datasets are crucial.

In summary, AI tools used in mental healthcare should prioritize [[Algorithmic Transparency|transparency]] about data labels (and, when needed, other aspects of model development), clarify the intended use case, and demonstrate fairness. A combination of regulatory hurdles and slow clinician adoption may initially push the field toward more explainable models to establish trust with users. As the safety and efficacy of more complex (i.e., "black box") models are proven and trust strengthens, less transparent solutions can be gradually tested in real-world settings.

### Challenge 2: Obtaining informed consent

[[Informed Consent MH AI|Informed consent]] involves a shared decision-making process in which patients are provided with necessary information regarding the risks and benefits of healthcare options. Ensuring appropriate informed consent is particularly important as AI is increasingly incorporated into mental health treatment and diagnosis through therapy chatbots, digital phenotyping for clinical decision support (CDS), and direct-to-consumer (DTC) apps.

##### Ethical Issues: Ensuring informed consent

Given the nascent nature of this field and the large number of AI tools being used without adequate transparency about their biases and risks, the use of these models in healthcare must prioritize patient safety and [[AI and Autonomy|autonomy]]. Patients should be actively informed about AI applications specific to their treatment, including the potential benefits, risks, and alternative options that do not involve AI technologies.

##### Policy Recommendations: Standardized consent

To safeguard patient rights, policies should mandate standardized informed consent processes and educational materials tailored to specific AI-based applications in mental health ([[D2C Apps, Informed consent#Direct to Consumer Applications Informed Consent|consumer apps]], [[D2C Apps, Informed consent#Digital Phenotyping & Precision Psychiatry Informed Consent|digital phenotyping and precision psychiatry]], [[AI Scribes|AI-Scribes]], and [[D2C Apps, Informed consent#Ambient Monitoring Informed Consent|ambient monitoring]]). Minimum requirements for obtaining meaningful consent should be established, and guidelines for respecting patient preferences should be implemented, including opt-out mechanisms and protocols for addressing consent revocations or changes. An example of the questions that need to be asked is the [[AI Capacity Checklist|AI Capacity Checklist]].
Auditing and compliance measures should monitor adherence to consent standards across healthcare providers and institutions.

### Challenge 3: Upholding privacy, preventing misuse, and mitigating unintended consequences

The integration of [[Precision Psychiatry & AI|precision psychiatry]] and [[Ambient Intelligence for Insep article|ambient intelligence]] into healthcare settings, and potentially private spaces like homes, highlights the tension between the benefits of enhanced detection and monitoring of health outcomes and the need to [[Sensitive Info AI|safeguard personal privacy]]. This includes protecting [[Decisional Privacy|decisional privacy]], the right of individuals to make their own care decisions without external interference, and mitigating unintended consequences such as [[Unintended consequences of AI triage|AI-driven suicide screening]] increasing involuntary hospitalizations, which is a [nuanced decision](https://jamanetwork.com/journals/jamapsychiatry/article-abstract/2810865) even when done well. AI systems processing mental health data handle highly sensitive information, raising concerns about the ethical use of that information and the potential for data breaches, which could lead to stigmatization or discrimination. Finally, there is no consensus on how liability should be handled when medical errors are made while using novel AI.

##### Ethical Issues: Privacy, Safety, and Trust

The continuous monitoring enabled by [[Ambient Intelligence for Insep article|ambient intelligence]] raises obvious privacy concerns. Less obvious is how the mere presence of such ubiquitous monitoring technology could alter individuals' behavior and self-expression, interfering with their [[AI and Autonomy|autonomy]]. Safety issues also arise with ambient monitoring, particularly regarding the secure handling and storage of sensitive mental health data. Breaches or unauthorized access to this information could have [[Ramifications of breach of AV data|severe ramifications]], undermining patient trust and discouraging individuals from seeking necessary care. While ambient technology offers exciting possibilities for prevention and population-level modeling, these potential benefits must be carefully weighed against the ethical risks to individual privacy, safety, and the sanctity of the [[Human element in MH|patient-provider relationship]].

There are valid concerns about privacy violations, as digital phenotyping for [[Precision Psychiatry & AI#A Multimodal Approach to Precision Psychiatry|precision psychiatry using multiple modalities]] will involve collecting and processing highly sensitive personal data, including continuous physiological monitoring, imaging, audiovisual, and genetic data. Robustly securing this data and preventing unauthorized access or misuse by [[Commercial Entities Misuse AI|commercial entities]], or for discriminatory purposes such as denial of employment or insurance, will be critical for maintaining public trust. Additionally, over-trusting AI systems that are not fully validated could lead to incorrect diagnoses or treatment recommendations, potentially causing harm.

Many AI-powered tools are now available for therapy, [[AI Scribes|scribing]], and digital phenotyping for screening, monitoring, and even diagnostics (e.g., [Senseye](https://senseye.co/) for PTSD diagnosis with retinal imaging). The limited oversight of these tools calls for an urgent conversation among clinicians.
***It is critical to understand that being "HIPAA compliant" does not equate to FDA approval or even to being part of the standard of care.*** Therefore, if clinicians wish to use some of these novel tools outside a clinical trial, they must ensure that patients meet the criteria for [[Clinical Innovation|clinical innovation]]. Further, they should consider how a tool marketed as "HIPAA compliant" with "[[De-identified Data|de-identified data]]", but without any FDA approval, could actually enable a market to sell the patient's data[^sellingdata].

##### Policy Recommendations: Robust Data Governance and Ethical Guidelines

To address privacy, safety, and trust concerns, robust data governance frameworks and ethical guidelines are a start, but real oversight will require new laws. Current [[Federal Initiatives and Legislative Efforts|federal initiatives and legislative efforts]] are helping mental health providers "catch up" with EHR adoption and are promoting and enforcing transparency, safety, and privacy with AI. [[Data leak and privacy concerns|Robust privacy and data protection]] policies should mandate strict protocols for secure data handling, clear limitations on permissible data usage, and robust access controls and encryption mechanisms.

Institutions evaluating potential AI tools can use a novel framework for AI Tool Assessment in Mental Health, called [[FAITA]] - Mental Health[^faita]. Once a novel AI tool starts to be used in a healthcare setting, experts strongly recommend local validation and internal testing[^testAI].
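As an illustration of local validation, the minimal sketch below compares a vendor tool's risk scores against locally adjudicated outcomes before the tool is relied on clinically. The scores, outcomes, and the 0.5 cutoff are made-up assumptions; a real evaluation would follow institutional governance and the assurance-lab guidance cited in this article.

```python
"""Minimal local-validation sketch for a vendor risk tool (illustrative assumptions only)."""
import pandas as pd
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical local validation set: vendor model score vs. clinician-adjudicated outcome.
local = pd.DataFrame({
    "vendor_score":     [0.91, 0.15, 0.72, 0.40, 0.85, 0.30, 0.65, 0.10],
    "observed_outcome": [1,    0,    0,    0,    1,    0,    1,    0],
})

# Discrimination on local data (may differ from the vendor's published figures).
auroc = roc_auc_score(local["observed_outcome"], local["vendor_score"])

# Operating characteristics at the assumed deployment threshold.
pred = (local["vendor_score"] >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(local["observed_outcome"], pred).ravel()

print(f"Local AUROC: {auroc:.2f}")
print(f"Sensitivity: {tp / (tp + fn):.2f}, Specificity: {tn / (tn + fp):.2f}")
```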
As AI plays an increasing role in clinical decision-making, clear protocols must be established for allocating liability in cases of error or negligence. A framework that promotes shared [[Accountability and liability CDS AI|liability]] among the clinician, the technology company, and the healthcare institution should be seriously considered until AI tools are part of the standard of care. Auditing and compliance measures should monitor adherence to data protection and ethical standards across healthcare providers and institutions. Finally, clinicians need to be educated about (1) what HIPAA compliance really entails in terms of the privacy, safety, and efficacy of a tool, (2) the basics of ML and AI as they relate to the tools available to them today, and (3) how [[AI and Autonomy|AI and technology at large affect their (and their patients') autonomy and agency]].

### Challenge 4: Balancing access to and quality of autonomous agents

[[AI Agents|AI-Agents]] in mental health offer significant benefits, including new modes of treatment, opportunities to engage hard-to-reach populations, better patient response, streamlined operational tasks with [[Overview of AI in Mental Health Care#AI-Scribes|AI-based scribes]] and personal assistants, and improved [[Overview of AI in Mental Health Care#AI-Agents for Workforce Education|clinician education]]. Conversational AI agents can be rules-based or powered by generative AI (GenAI). [[Digital Twins|Digital twins]] hold tremendous promise both in clinical trials and in improving therapeutic outcomes. Experts warn that unchecked [[Ethical considerations for Gen AI|GenAI systems]] may develop goals misaligned with medical ethics, potentially prioritizing payer interests over patient care[^ethics-relationalai].

##### Ethical Issues: Balancing Access and Quality

Despite the profound impact these applications can have, there is a stark absence of effective regulation to guarantee beneficence, safety, and efficacy. The [[Utilitarianism for Inseparable|utilitarian]] argument for deploying AI agents emphasizes maximizing access and benefits for the greatest number of individuals. When employing [[Overview of AI in Mental Health Care#Human-in-the-Loop|human-out-of-the-loop]] (autonomous) AI, these agents can be highly scalable; however, that scalability carries significant risks, especially with generative AI-backed tools, which can lead to adverse events because of [[Confabulations vs Hallucinations|hallucinations/confabulations]] or inappropriate care management in novel situations. Even the best AI models, representing an "average of the best" therapists, may not necessarily lead to better societal outcomes and, if not well designed, may end up [promoting stereotypes](https://restofworld.org/2023/ai-image-stereotypes/) or eroding trust in psychotherapy more generally. Psychotherapists recognize that human vulnerability and growth through interpersonal connection are essential for therapeutic success. Developing effective and safe AI for mental health requires careful consideration of both [[Virtue Ethics and Pragmatism|virtue ethics]] and empirical evidence. Experts agree that AI agents can help address the [[Workforce Shortage|workforce shortage]], but they recommend that these agents be designed thoughtfully so that the next generation still builds the skills necessary to think critically and thrive in human interactions.

##### Policy Recommendations: Continuous Testing, Validation, and Oversight

Early use of AI agents should be restricted to a human-on-the-loop approach (a continuous human supervisor) for AI chatbots using generative AI; human-out-of-the-loop operation is not advisable until safety and quality standards are created and met. As the technology advances, regulators will need to carefully weigh utilitarian access (maximizing access for as many people as possible) against equitable access, ensuring everyone has the right to the highest quality care we would want for our own children or parents. Experts advocate for a national network of AI ethics centers to manage the evolving frameworks needed for safe implementation of this technology[^ethics-relationalai]. There is also a need for a public-private partnership to support a nationwide health AI assurance labs network, which has been [outlined by experts](https://jamanetwork.com/journals/jama/fullarticle/2813425) to help promote community best practices for testing health AI models and to produce reports on their performance[^shah].

### Challenge 5: Creating a value-based reimbursement system

The [[Reimbursement of AI|current reimbursement system]] supports creating new codes for AI-driven tools in remote patient monitoring, clinical decision support, and diagnostics. However, the industry is gradually transitioning to a value-based reimbursement system (VBS). As AI models eventually become FDA-approved for outcome measurement, there will be a transition period as payers and healthcare institutions begin to use these novel measures. AI also has the potential to significantly enhance value-based care by improving risk stratification, predicting patient outcomes, and personalizing treatment plans. These capabilities can lead to more accurate outcome measurements and, consequently, more appropriate reimbursement allocations.
##### Ethical Issues: Objectivity and Reliability Concerns

Using AI-based tools to measure outcomes for a VBS raises the following ethical issues:

1. **Regulation for non-fixed models:** FDA-approved outcome measures that are fixed can be treated like other new measures that gradually become standard of care and are used for value-based care. However, AI tools have the potential to continuously learn from incoming data or be fine-tuned. Regulation must prevent tools from "learning" how to improve reimbursement rather than how to improve patient outcomes.
2. **Limitations of AI Impartiality:** We must recognize that AI, created by humans, cannot achieve perfect objectivity. Rather than viewing objectivity as an absence of preference, we should define it as the fair and intentional handling of data[^mismeasureman].
3. **Interoperability Challenges:** The integration of AI tools across different healthcare systems poses significant interoperability challenges, particularly as multimodal approaches expand. To ensure consistent and reliable outcome measurements, there is a critical need for standardization of AI tools and their outputs. This standardization should enable seamless data exchange and interpretation across healthcare providers and payers, ensuring that the value-based reimbursement system operates on a level playing field.
4. **Continuous Validation Requirements:** As patient populations, treatment protocols, and medical knowledge evolve, the AI models used for outcome measurement must undergo continuous validation. This ongoing process is crucial to maintaining the accuracy and relevance of these models in real-world clinical settings. Regular audits and updates should be conducted to ensure that the AI tools continue to provide reliable data for reimbursement decisions and truly reflect improvements in patient care.

##### Policy Recommendations: Robust Auditing and Combined Measures

Reimbursement of AI tools in mental health within a VBS should prioritize a combined approach of subjective (patient-reported) and objective (novel biomarker), AI and non-AI outcome measures. Regulatory bodies need to continue working with healthcare institutions and payers to conduct pilots and provide the expertise needed to evaluate the safety and efficacy of novel AI-based outcome measures. To address the challenges of interoperability and continuous validation:

1. Establish industry-wide standards for AI tool integration and data sharing across healthcare systems to ensure consistent outcome measurements.
2. Implement a framework for regular validation and recalibration of AI models used in a VBS, including periodic assessments of model performance against real-world outcomes and updates to account for evolving medical practices and patient populations (see the sketch after this list).
3. Create a collaborative platform for healthcare providers, payers, and AI developers to share best practices and address emerging challenges in the use of AI for value-based reimbursement.
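As one illustration of how recommendation 2 might be operationalized, the sketch below runs a recurring drift check that compares a model's agreement with clinician-adjudicated outcomes in the latest review window against a locked baseline. The baseline value, tolerance, and sample data are assumptions for illustration, not a proposed standard.

```python
"""Sketch of a recurring drift check for an outcome-measurement model (assumed numbers)."""
from statistics import mean

BASELINE_AGREEMENT = 0.88  # agreement measured at initial local validation (assumed)
TOLERANCE = 0.05           # maximum acceptable drop before triggering human review (assumed)

def window_agreement(model_scores, adjudicated, threshold=0.5):
    """Fraction of cases where the model's flag matches the clinician-adjudicated outcome."""
    return mean(
        int((score >= threshold) == bool(outcome))
        for score, outcome in zip(model_scores, adjudicated)
    )

def needs_review(model_scores, adjudicated):
    """True if agreement in this review window has degraded beyond the tolerance."""
    return window_agreement(model_scores, adjudicated) < BASELINE_AGREEMENT - TOLERANCE

# Quarterly audit over the latest adjudicated cases (hypothetical data).
recent_scores = [0.9, 0.2, 0.7, 0.4, 0.8]
recent_truth = [1, 0, 0, 0, 1]
if needs_review(recent_scores, recent_truth):
    print("Agreement dropped below baseline tolerance - trigger recalibration review.")
else:
    print("Model agreement within tolerance for this window.")
```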
## Conclusion

AI implementation in mental healthcare presents unique challenges due to the lack of ground truth, the personal nature of mental health data, and the risks to user privacy, safety, and autonomy. Using a strong interpretive lens grounded in a [[Balanced Ethical Theories|balanced view of ethical theories]] and values, regulators must ensure that novel AI tools compel clinicians to remain critical thinkers and decision-makers when treating mental health conditions. They should appropriately employ principle-based versus rule-based regulation[^principlevsrulesbased] in specific contexts of use. We must continuously consider which skills we are comfortable delegating to technology and which we want to hone and improve with AI's help, ensuring our EHRs and AI tools reflect these goals.

Supporting our mental health extends beyond physical and emotional security, food, and shelter; it includes fostering a sense of community, human connection, and purpose. As we integrate AI tools to enhance some aspects of mental healthcare, we must ensure they do not undermine these other critical elements of well-being. Mental health is deeply personal and often shaped by unique, complex life experiences that are challenging to capture in discrete datasets. By actively involving individuals with lived experience in the development of AI tools, we can create technologies that are sensitive to the nuances of mental health, promoting equitable and accessible care. Embedding the voice of lived experience in every aspect of a tool's development ensures that AI serves as a complement to human connection, respecting the complexity of individual lives while enhancing mental health support.[^livedexperience]

Decision-makers and regulators bear a significant responsibility when introducing new tools to users. It is crucial to ensure these technologies do not lead to unfair limitations on people's potential. We must recognize that external factors, such as biases in AI algorithms or limitations in datasets, could be [[Biological Determinism|mistaken for internal characteristics of individuals]]. This can result in serious injustices, where people are unfairly categorized, limited, or denied opportunities based on flawed technological assessments. The goal must be to harness the power of technology to expand possibilities, not to inadvertently create new barriers or reinforce existing inequalities.

## Table 1: Final Recommendations for AI-tool Implementation unique to Mental Healthcare

| Area | Recommendation |
| --- | --- |
| 1. Professional Training | Provide mental health professionals with training in casuistic reasoning and AI/digital tool literacy to effectively oversee and interpret AI-generated recommendations. |
| 2. AI Development | Prioritize diverse and industry-accepted standard labels for model training. |
| 3. Evaluation Metrics | Shift focus from algorithm transparency to demonstrable improvements in patient mental health outcomes and rigorous external efficacy and safety validation. |
| 4. Crisis Ethics | Implement regulations to protect patients' rights during mental health crises. Informed consent is still required for non-essential tasks such as collecting, sharing, or selling patient data. |
| 5. Model Maintenance | Establish a framework for continuous monitoring and updating of AI models used in value-based systems (VBS) to maintain accuracy and relevance. |
| 6. Implementation and Liability | - Start with reconciliation (financial redress for patients and guideline development).<br>- Adopt a shared liability model between clinicians, software companies, and healthcare institutions until AI tools are optimized. |
| 7. Human Oversight | - Use a human-in-the-loop approach for AI-enabled clinical decision support, treatment, and diagnostics.<br>- Implement human-on-the-loop oversight (a continuous human supervisor) for AI chatbots with generative AI until clinical safety studies prove the guardrails work.<br>- Avoid autonomous AI (human-out-of-the-loop) in therapeutic mental healthcare settings at this time. |
| 8. Informed Consent | Regulation is needed to enforce obtaining true informed consent from users. |
[^racialbias]: Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. https://www.science.org/doi/10.1126/science.aax2342

[^obermeyer-2019]: [Obermeyer et al., 2019](https://www.frontiersin.org/articles/10.3389/fcomp.2020.00031/full#B41)

[^faves]: Department of Health and Human Services. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing; final rule. Fed Regist. 2024;89(6):1192–1438. https://www.govinfo.gov/content/pkg/FR-2024-01-09/pdf/2023-28857.pdf

[^Martinez]: Martinez-Martin N, Insel TR, Dagum P, et al. Data mining for health: staking out the ethical territory of digital phenotyping. npj Digital Med. 2018;1:68. https://doi.org/10.1038/s41746-018-0075-8

[^ethics-relationalai]: Sim I, Cassel C. The Ethics of Relational AI: Expanding and Implementing the Belmont Principles. N Engl J Med. 2024. doi:10.1056/NEJMp2314771

[^mismeasureman]: Gould SJ. The Mismeasure of Man. Revised ed. W. W. Norton; 1996.

[^principlevsrulesbased]: Schuett J, Anderljung M, Carlier A, Koessler L, Garfinkel B. From Principles to Rules: A Regulatory Approach for Frontier AI. Oxford University Press; 2024.

[^forensicbias]: Sreenivasan S, DiCiro M, Rokop J, Weinberger LE. Journal of the American Academy of Psychiatry and the Law Online. October 2022. doi:10.29158/JAAPL.220031-21

[^fairness]: https://hai.stanford.edu/news/when-algorithmic-fairness-fixes-fail-case-keeping-humans-loop

[^sellingdata]: https://hai.stanford.edu/news/de-identifying-medical-patient-data-doesnt-protect-our-privacy

[^shah]: Shah NH, Halamka JD, Saria S, et al. A Nationwide Network of Health AI Assurance Laboratories. JAMA. 2024;331(3):245–249. doi:10.1001/jama.2023.26930

[^thakkar]: Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health. 2024;6:1280235. doi:10.3389/fdgth.2024.1280235

[^genhealth]: Bouderhem R. Shaping the future of AI in healthcare through ethics and governance. Humanit Soc Sci Commun. 2024;11:416. https://doi.org/10.1057/s41599-024-02894-w

[^research]: Pedersen SL, Lindstrom R, Powe PM, Louie K, Escobar-Viera C. Lack of Representation in Psychiatric Research: A Data-Driven Example From Scientific Articles Published in 2019 and 2020 in the American Journal of Psychiatry. Am J Psychiatry. 2022;179(5):388–392. https://doi.org/10.1176/appi.ajp.21070758

[^sociodemo]: Wilson S. Sociodemographic reporting and sample composition over 3 decades of psychopathology research: A systematic review and quantitative synthesis. J Psychopathol Clin Sci. 2024;133(1):20–36. https://doi.org/10.1037/abn0000871

[^race]: Maslej M, Shen N, Kassam I. Race and Racialization in Mental Health Research and Implications for Developing and Evaluating Machine Learning Models: A Rapid Review. International Medical Informatics Association; 2022. doi:10.3233/SHTI220281
[^faita]: Golden A, Aboujaoude E. The Framework for AI Tool Assessment in Mental Health (FAITA-Mental Health): a scale for evaluating AI-powered mental health tools.

[^livedexperience]: Speechley J, McTernan M. How will AI make sense of our messy lives and improve our mental health? Front Psychiatry. 2024;15:1347358. doi:10.3389/fpsyt.2024.1347358

[^testAI]: Lenharo M. How do you test AI in medicine? Nature. 2024;632.