Is AI-Driven Therapy Possible?

สิงหาคม 21, 2025

A worrisome contradiction sits at the boundary of artificial intelligence and mental health. On the one hand, AI is a deeply competitive field, so LLM creators are under strong pressure from both their investors and market forces to not only release their models quickly, but also to boast about how accurate, versatile, safe, and useful they are. Success is measured by the number of downloads and subscriptions that any given LLM achieves, so it pays to keep users happy. Very often, pleasing the user means agreeing enthusiastically with whatever opinion or observation they provide.

On the other hand, one of the more common uses of modern AI is for people to have someone to talk to, and to ask questions about things that are bothering them. People who are frustrated with some aspect of their life, or the world, or who are suffering from depression or some other mental health issue, might give voice to ugly or disturbed thoughts when confiding in an AI. Whereas a professional therapist might try to gently pull their patient back toward a kinder and healthier mental space, LLMs — naively trying to please the user — often express agreement with these angry or bitter statements.

After hearing their darkest thoughts and fears confirmed by a chatbot, users with anger management problems or self-esteem issues may find themselves emboldened to take action, sometimes with terrible real-world consequences.

These and related issues highlight the tension between LLMs that are optimized for quick software updates and appealing conversations, and therapeutic situations which may require delicate pushback and correction. Similar dangers may occur if users develop an emotional bond with an AI, as the algorithm may reinforce this supposed connection rather than disabuse the user of their unhealthy attachment.

But could AIs provide better guidance in the future, even if the incentives are misaligned? What about other forms of AI-powered emotional support? Does the bad really outweigh the good?

A friend for all seasons

Digital therapy sessions aren’t the only way of using AI chatbots for mental health support. Some users of psychedelic drugs keep a chat session open to let the LLM guide them through the experience safely. AI as a “trip sitter” probably wasn’t in the minds of the software developers, and mental health professionals warn that this trend combines powerful psychedelics with unregulated digital companionship.

But right or wrong, people will always use AIs in ways that weren’t intended, for the simple reason that they are cheap, convenient, and everywhere. Psychiatrists and psychotherapists charge high fees for their services, real interpersonal relationships carry risks of rejection and betrayal, and your human friends might have better things to do than babysit you while you get high. 

Moreover, even if your chats with AI may not be as private as you think, it’s still easier to confess your secrets to a piece of software than to a real person. There is no stigma attached to chatting with an AI in the comfort of your own home, as a chatbot is nonjudgmental and always ready to talk.

Yet true emotional health comes from connection with nature and relationships with real people. AI can’t help you reach these goals directly, but it can advise you on how to achieve them yourself. It all depends on how people use the technology.

Putting guardrails in place

Incentives produce outcomes. If LLM developers compete with each other on whose service can get the highest score on certain benchmarks, then emotional intelligence metrics should be added to those benchmarks. These benchmarks can be designed by independent mental health authorities, to ensure they are rigorous in their analysis, and applicable to a variety of volatile situations. AI algorithms that score highly on such benchmarks must prove they are competent at de-escalation and resistant to misuse.

At the same time, users should use AI with an awareness of its biases and limitations. LLMs may default to agreement and flattery, but a simple command such as ‘For the rest of this conversation, act like a responsible therapist’ can guide it toward more constructive output when seeking emotional support. No company can guard against all potential misuses of its products, and users should break the habit of outsourcing responsibility to everyone except themselves.

What’s safe, what’s not

AI chatbots may feel comforting, but they are not yet equipped to handle emotional crisis or psychedelic vulnerability. If you need mental health support, use LLMs only for journaling, mood tracking, or psychoeducation, while recognizing the limits of digital privacy.

AI tools certainly have promise. Researchers in Hong Kong found that after 10 days, participants using a mental health chatbot showed significant improvements in self-care efficacy, mental health literacy, self-care intention, self-care behaviors, and mental well-being. However, at a one-month follow-up assessment, these gains were no longer significant.

Until studies show genuine and consistent mental health improvements among people using AI for emotional support, the tried and true solutions remain your best bet: Reconnect with supportive people around you, spend more time in nature, focus on sleep, exercise, and a healthy diet, and see a human specialist for further guidance. AIs are magnificent logic machines, but people problems need people solutions, and there’s still no substitute for real human connection.

Share this article

กดติดตาม InnoHub

เพื่อรับข้อมูลข่าวสารและแรงบันดาลใจด้านนวัตกรรมใหม่ ๆ

เรานำข้อมูลมาใช้เพื่อการส่งมอบคอนเทนต์และบริการอย่างเหมาะสม เราจะปกป้องความเป็นส่วนตัวของคุณ คุณสามารถอ่านข้อมูลเพิ่มเติมได้ที่ Privacy Policy และคลิกสมัครเพื่อดำเนินการต่อ