
Artificial intelligence (AI) tools are everywhere now, including in mental healthcare. Patients are turning to them to ask questions about their diagnoses, symptoms, medications and treatment recommendations. In many cases, this looks no different from when patients used Google or WebMD. The notable evolution is the consolidation and seeming personalization of information generated through an AI-driven search.
For some patients, this wealth of available information can help them feel more involved in their own care and planning. However, for patients living with serious mental illness (SMI), such as schizophrenia, it creates new concerns about how treatment decisions are made outside a formal clinical setting. This is a population for whom the way information is interpreted or presented can be just as important as the information itself.
AI’s influence in healthcare is quickly expanding. OpenAI recently introduced ChatGPT Health, a dedicated space within its platform where users can upload medical data and get responses based on their own records. Even with added privacy features and physician involvement in designing and refining these AI tools, the information they generate is still general and not personalized clinical advice. It is not actively monitored by healthcare providers and risks being misunderstood when received without a clinician’s insights.
AI as a source of information
AI can work like a highly advanced search engine. It provides answers in a confident, conversational tone, pulling together facts and associations from its training. Many patients, especially those whose illness itself contributes to reduced contextual awareness, don’t realize that these outputs can be incomplete or even inaccurate. In clinical practice, this can create a false sense of certainty.
For people living with schizophrenia, symptoms such as impaired insight or increased anxiety can affect how outside information is interpreted and acted upon. This makes them even more vulnerable to misinformation and misinterpretation. The issue is not intelligence or effort, but how symptoms shape understanding in certain moments.
For example, there is the risk of confirmation bias. Individuals living with schizophrenia may take one piece of what they read and try to create meaning from it according to what they are thinking or feeling in the moment. If a patient already has a belief and goes looking for something to support it, AI can help them find it. The tool itself is not intentionally misleading the person, but the circumstances lend themselves to that outcome.
If someone experiencing paranoia reads something that seems to confirm a fear, they might accept that as truth because of how it was presented. Or if they see alarming side effects potentially associated with a medication, they may fixate on that without the broader context of benefit versus risk. What is missing is the clinical frame that helps sort what applies to that person’s medical needs and what does not.
AI feels like a medical authority
AI tools present information in a way that sounds neutral and complete, but by default lacks individual, longitudinal clinical context. Treatment decisions depend on many factors, including family history, past experiences, current symptoms and how the patient is presenting. AI cannot see the patient. It cannot observe hygiene, body language or subtle changes in behavior.
That gap between information and context is where problems may originate. When patients receive AI-derived information as fact, it changes the interaction in the clinic. Suddenly, clinicians are asked to validate what looks like evidence while also trying to explain its limitations. The goal is to support patients’ initiative in learning about their health but also help them see where that information falls short.
There have been media reports describing situations where AI responses appeared to challenge established psychiatric diagnoses or prescriptions and influenced people to stop medications that had previously been helping them. A reported example described a person living with schizophrenia who began to question their care after interacting with AI-generated information. Troublingly for me as a clinician, this contributed to their decision to stop their medications, which in turn led to a quick recurrence of their symptoms. These reports underscore why human judgment and ongoing clinical relationships remain essential even when AI becomes part of the paradigm.
Impulsivity and treatment changes
Even accurate information about SMIs like schizophrenia can be overwhelming and benefits from thoughtful, person-first conversation. AI delivers responses in the same confident way regardless of complexity or nuance. Someone might read one negative thing and become frightened.
When that fear potentially leads to abrupt decisions, such as stopping medications, there are significant clinical implications that guide the treatment recommendations I tend to make. With oral regimens, for example, because dosing is typically daily, there is no structural pause to weigh that decision to stop. One day the pill is taken; the next it’s not. Quickly, clinicians are reacting to symptom destabilization rather than preventing it.
This is when treatment planning matters and why it is important to think intentionally about how an approach fits with a person’s needs and long-term outcomes. As an illustration, it can be beneficial to consider medication types that are designed to contribute to consistency, minimizing the likelihood of a missed dose and building in natural touchpoints for clinical consultation.
Long-acting injectable (LAI) medications are one example of this approach. Administered by clinicians on a regular schedule, they remove many of the day-to-day decisions about whether to take a pill. This creates what could be thought of as a holding period. There are fewer opportunities for an impulsive change based on something a patient may have consumed through digital channels.
With oral medications, we rely on a person’s own report, refill records and observation of symptom control to infer that the medication is being received and is working. With LAIs, however, there is confirmation. When patients miss appointments for injectable treatments, we know immediately that something has changed. That visibility creates earlier opportunities to reach out, assess what is going on and provide support.
AI as a tool in the toolbox
AI is a useful starting point to help patients form questions and engage more actively in their own care. But AI is not the be-all, end-all and cannot replace a human discussion or clinical assessment.
At a broader scale, clinicians and ethicists have raised concerns about AI “companion” tools that are designed to mimic emotional support, warning that these systems can foster emotional dependency, reinforce distorted thinking and pose mental health risks when users turn to them in place of human interaction or professional care.
Despite these concerns, which are inevitable during any significant technological evolution, AI is not going away, and it does have a role to play. The best path forward is to cultivate trusting clinical relationships so patients feel empowered to bring in what they have learned and talk about it with their provider. AI should support care, not direct it.

