Since the first computer “therapist,” ELIZA, was created in the mid-1960s, more than 10,000 mental-health-related apps have been developed (1). Generative AI (Gen-AI) is the most recent technology shaping therapeutic practice, introducing new possibilities for how therapy can be accessed. These advances reflect the growing demand for accessible care, especially after the COVID-19 pandemic and its lingering repercussions.
Despite this demand, many individuals with mental health concerns never receive adequate help (2; 3), and in some countries mental health systems are under significant strain. Affordable care is in short supply, as reflected in long waitlists, widespread burnout among therapists, and entire geographic regions in the USA officially designated as mental health “Health Professional Shortage Areas” (HPSAs). The existing workforce is also nearing retirement age, and new clinicians are not replacing them fast enough.
These well-timed technological innovations can address the gap in several ways, notably by enabling therapists to expand their reach. However, these developments also raise important questions: Are there hidden downsides? And do these technology-based methods really improve mental health outcomes?
Ways technology is currently used in Mental Health Care:
- Teletherapy Platforms (Talkspace, BetterHelp etc.): Virtually connect clients with a therapist, either via video conferencing, online chatting (synchronous), email therapy (asynchronous) or phone counseling. Cognitive behavioral therapy (CBT), psychodynamic therapy, dialectical behavior therapy (DBT) and mindfulness-based therapy are all therapeutic modalities that can be offered online (4). (While telehealth platforms themselves are not inherently AI-driven, many now incorporate Gen-AI tools to assist therapists with session summarization and documentation, and clients must be made aware of this).
- Digital Mental Health Apps: These include mood trackers, journaling tools, guided meditations, sleeping aids and structured CBT-style exercises. These tools are generally best understood as adjuncts to therapy rather than replacements for it.
- AI Used By Therapists: Generative-AI models (e.g., GPT, Claude), accessed through platforms like Upheal, BastionGPT and Klarify, are used for tasks like drafting clinical notes (SOAP, DAP), creating treatment plans, scheduling, insurance verification and transcribing sessions via ambient scribes or dictation (5). These systems rely on natural language processing (NLP), machine learning and ambient clinical intelligence to automate documentation.
Many of these platforms claim to be HIPAA-compliant, which typically involves safeguards such as encryption and de-identification, for example tokenizing protected health information (PHI), a must-have given the sensitivity of mental health data. By saving time and reducing paperwork, these tools may indirectly improve care by reducing clinician burnout.
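To make the idea of PHI tokenization concrete, here is a deliberately simplified sketch of the pattern-replacement approach. The `tokenize_phi` function, the two regex patterns and the token format are illustrative assumptions, not how any named platform actually works; production systems use trained named-entity recognition models and far broader rule sets.

```python
import re

def tokenize_phi(note: str) -> tuple[str, dict]:
    """Replace simple PHI patterns (dates, phone numbers) with placeholder
    tokens, returning the redacted note plus a token -> original mapping
    that would be stored separately under access controls."""
    patterns = {
        "DATE": r"\b\d{1,2}/\d{1,2}/\d{4}\b",    # e.g. 03/14/2025
        "PHONE": r"\b\d{3}-\d{3}-\d{4}\b",       # e.g. 555-867-5309
    }
    mapping = {}
    counter = 0
    for label, pattern in patterns.items():
        for match in re.findall(pattern, note):
            token = f"[{label}_{counter}]"
            mapping[token] = match
            # Replace one occurrence at a time so repeated values
            # each get their own token.
            note = note.replace(match, token, 1)
            counter += 1
    return note, mapping

redacted, mapping = tokenize_phi("Seen on 03/14/2025, callback 555-867-5309.")
```

The design point is that the redacted note, not the raw transcript, is what an AI documentation tool would process, while the mapping stays with the covered entity.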
- Gen-AI “Therapists” & Chatbots: AI “therapists” are marketed directly to consumers. These tools are typically designed to deliver CBT-inspired exercises, mood check-ins, journaling prompts or conversational support.
Writing down one’s issues and attempting to put them into coherent sentences helps those who find it difficult to express themselves (this can serve as the first step). But AI “therapy” is not designed for crisis situations and is widely recommended only as an addition to human care, not a replacement. It is up to the user to seek professional help for emergencies.
- Virtual Reality (VR) Therapy: VR can create a safer, more controlled simulation of reality within which clients can face their fears. VR therapy can be a form of exposure therapy for people with phobias/social phobias or PTSD, for example.
But is Technology-Based Therapy Actually Effective?
What has worked so far:
- Teletherapy (with real therapists) can be as effective as in-person therapy for many conditions, such as anxiety and depression, with studies showing similar outcomes and, in some cases, higher retention than in-person care. Teletherapy means increased accessibility for people with disabilities, caregiving responsibilities or geographic barriers. It is also often faster to access, more customizable and more affordable than in-person care.
However, severe conditions or specialized therapies might still require in-person care, mainly because an online environment can limit the therapist’s ability to decipher non-verbal cues and body language, which are often critical for assessment and rapport (6; 7).
- When digital mental health apps are used thoughtfully, they can function like “homework,” reinforcing skills learned in therapy and helping clients notice patterns between sessions. However, many of these tools have limited evidence for long-term effectiveness and vary widely in quality, privacy protections and clinical oversight (8).
- AI tools grounded in CBT can be useful for basic emotional regulation, self-reflection and quick guidance. One of the most significant benefits of Gen-AI “therapists”/chatbots is accessibility: they are available 24/7, so they can provide immediate support when human therapists are unavailable, as they often are. AI chatbots have been described as “stopgaps to huge waiting lists”.
Some studies show that AI-based therapy tools can reduce symptoms of anxiety and depression in the short term, particularly for mild to moderate conditions (9). According to Biagianti et al. (2025), “CBT-informed digital interventions have accumulated enough scientific evidence to be positioned today as: i) a low-intensity tool for those with subclinical levels of symptoms; ii) a first step in a stepped-care approach to service delivery; iii) a low-cost, easily accessible option for targeted preventive programs”. A systematic review of 10 randomized controlled trials also indicated that AI augmentation in psychotherapy has potential to reduce clinical symptomatology (10). The research shows that AI-assisted tools and digital interventions may be effective in specific, limited contexts, but more research is needed as Gen-AI develops.
The challenges of Gen-AI Chatbots:
- Many studies emphasize the short-term effectiveness of such tools. The limitations of AI “therapists” become more apparent over time and with increased complexity. Long-term efficacy often fades and AI struggles to adapt to the evolving, nuanced nature of human mental health. Human therapists consistently outperform AI when it comes to severe trauma, complex diagnoses, crisis management and relational work.
- Some AI “therapists” give positive reinforcement or “rewards” immediately and more frequently than a human therapist would. One may also be more forthcoming and open with an AI “therapist” because there is no real judgment. But these features can be double-edged: in some models, the AI’s default mode is to be an agreeable “yes-man” that keeps you engaged, even when you express dangerous or unhealthy sentiments. With AI “therapy”, you express emotions and experiences with no real accountability, which is one reason it is unlikely to be as effective as talking to a professional.
- Gen-AI can only simulate empathy and intelligence, and it may reinforce stereotypes or provide harmful responses if poorly designed or insufficiently monitored. New research has warned about the risks of AI tools offering dangerous or inappropriate advice in sensitive situations. In fact, some AI models are inherently biased because they were trained on prejudiced data. Additionally, “psychologists don’t tend to keep transcripts from their patient interactions, so chatbots don’t have many ‘real-life’ sessions to train from. Therefore, they are not likely to have enough training data, and what they do access may have biases built into it which are highly situational” (as cited in Lawrie, 2025).
- Typical therapy happens once a week, which gives you several days to process the insights gained from each session. With AI therapy, by contrast, you might not give yourself enough time to process the feelings that surface.
Therefore, in practice, the strongest evidence supports hybrid models that combine AI’s speed with human empathy and intelligence.
Note: It’s important that therapists have a clear understanding of AI’s abilities, limitations and the data it collects in order to properly inform clients about AI’s potential risks.
Ethics, Safety and the Environment
Several concerns remain, mainly quality of care and privacy. Not all platforms can be trusted with sensitive mental health data. High-profile cases, such as BetterHelp’s FTC fine for sharing private user data, highlight the real consequences of weak data protections. Moreover, recent news reports have raised concerns about AI chatbots producing harmful or unsafe responses in mental health contexts, once again reinforcing the need for stronger safeguards and better regulatory oversight.
In online forums, there are also widespread complaints from therapists about alleged exploitative business models used by large teletherapy platforms. Low pay, lack of safeguards and pressure to manage high-need clients with minimal support are some of the grievances mentioned. Some admit to working for these platforms out of desperation, staying only long enough to gain hours and experience before leaving.
Beyond therapy-specific issues, Gen-AI raises broader ethical concerns. Gen-AI tools are often trained on scraped work. Their environmental impact is also substantial: AI data centers consume massive amounts of electricity and fresh water, disproportionately affecting low-income communities in America and driving up utility costs for residents living near these data centers.
Experts and analysts also predict that the AI bubble will soon burst. Many AI companies are not profitable and there is increasing pressure to prevent state-level regulation. Gen-AI is being rapidly integrated into work, art, entertainment and public infrastructure, often without meaningful consent. There are also concerns about mass surveillance, suppression of dissent and whistleblowers and the long-term cognitive effects of outsourcing thinking and emotional processing to machines.
AI and the Law:
Most laws allow AI for administrative and supplementary tasks, as long as a human professional reviews and approves the output. This reflects a growing consensus: AI may assist, but it should not replace human judgment in mental health care, and arguably more AI-specific regulation is needed.
Final Thoughts:
Teletherapy, digital tools and AI-assisted systems have enabled psychologists to reach more people with greater flexibility and convenience. For many, these tools provide meaningful short-term support and help fill the gap between demand and supply.
At the same time, effectiveness heavily depends on how this technology is used, who controls it and what problems it is meant to address. Most current tools work best as supplements, not substitutes, for human care.
Technology in the therapy world is indeed transformative for accessibility, but it primarily functions by filling existing gaps rather than addressing core, systemic issues like the need for human connection, rigorous regulation and true health equity.
References:
(1) https://pubmed.ncbi.nlm.nih.gov/28384700/
(2) https://books.google.com/books?id=lnkOEQAAQBAJ&lpg=PR10&ots=TTj2YNXu_n&lr&pg=PP1#v=onepage&q&f=false
(3) https://pmc.ncbi.nlm.nih.gov/articles/PMC12434360/
(4) https://simply.coach/blog/role-of-technology-in-counseling-and-therapy/
(5) https://www.heidihealth.com/blog/how-is-ai-used-in-healthcare
(6) https://www.psychology.org/resources/virtual-therapy-vs-in-person/
(7) https://manchestercounselling.uk/articles/is-online-therapy-effective-what-the-research-says-about-virtual-counselling
(8) https://pmc.ncbi.nlm.nih.gov/articles/PMC11815452/
(9) https://mental.jmir.org/2025/1/e78340/
(10) https://pubmed.ncbi.nlm.nih.gov/40861147/