Why people are asking chatbots about their feelings
When it's 2 a.m. and your thoughts won't turn off, a general AI chatbot is always awake. People use it to name a feeling, make sense of a conflict, or ask what to do about worry, sleep, or sadness. The appeal is obvious: instant answers, no appointment, no judgment. But speed and confidence aren't the same as accuracy or safety.
What general AI is good at (and what to keep it to)
Organizing your thoughts.
If your mind is noisy, a chatbot is excellent at helping you structure what you're experiencing. Ask it to summarize your last few journal lines into three themes, or to turn a long ramble into bullet points you can bring to your next appointment. Clarity helps you speak up sooner and more clearly.
Preparing for appointments.
Use AI as a rehearsal partner, not a diagnostician. Have it draft a one-paragraph update for your clinician: top three concerns, when they started, what makes them better or worse, and what you hope to cover.
Generating healthy, low-risk ideas.
When you're looking for ways to decompress and calm your mind, ask for a "menu" of simple, non-medical options: 10-minute breaks, brief breathing instructions, a short neighborhood walk plan, or ways to ask a friend for support. Keep it practical and avoid anything that substitutes for medical advice. You're collecting ideas, not prescriptions.
Pointing to reputable information.
You can ask, "Show me beginner resources from major health agencies about anxiety and sleep." Then click the sources and read them yourself. Treat the chatbot like a helpful librarian who points, while you decide what's credible.
What you shouldn't ask a general chatbot to do
Diagnose you or others.
A model can sound certain even when it's wrong. It doesn't know your history, context, or risk, and it's not a clinician. If you need a diagnosis, talk to a licensed professional who can ask follow-ups and see the whole picture.
Tell you whether to start, stop, or change medication.
Medication decisions depend on your medical history, current regimen, labs, interactions, and risks. Only your prescriber should guide those choices. If a chatbot suggests otherwise, close the tab.
Handle a crisis.
If you're thinking about harming yourself or someone else, or you're worried about your safety, contact a human right now. In the U.S., you can call or text 988 for the Suicide & Crisis Lifeline; outside the U.S., use your local emergency number or national helplines. Chatbots are not crisis services.
Collect sensitive data beyond what you're comfortable sharing.
If you wouldn't write it on a postcard, think twice before typing it into a chatbot. Many systems retain inputs to improve their models. Share the minimum, use privacy settings, and don't upload documents you can't take back.
The hidden risks most people miss
Confident wrong answers.
General models can invent studies, misquote statistics, or oversimplify complex issues. They may sound more certain than your therapist precisely because they don't feel doubt. Ask for sources, click them, and be skeptical.
Bias and blind spots.
If training data underrepresents people like you (by culture, language, identity, or condition), the model's examples and advice may fit poorly. If it feels off, it probably is. Look for tools that disclose testing across groups and allow you to adjust tone and language.
Overreliance.
AI is convenient, and convenience can become avoidance. If you find yourself chatting instead of reaching out to real people or scheduling care you need, set limits. Technology should widen your support, not replace it.
A simple checklist for consumers
Purpose: Am I using this to clarify and prepare, or am I hoping it will diagnose or treat me?
Privacy: Does this tool let me limit data retention, and am I sharing the minimum necessary?
Proof: Did it give sources I can verify, and do those sources actually say what the chatbot claims?
Plan: If my symptoms worsen, what is my next human step: friend, clinician, or 988?
Prompts that keep you safe and productive
To help you get the most from your chatbot while staying within safe limits, the experts at AMFM Healthcare designed the following prompts:
Clarity prompt (use before an appointment):
"Summarize the text below into three concerns, three questions for my clinician, and a one-paragraph update. Do not give medical advice."
Journaling prompt (for a noisy mind):
"Give me five neutral journaling prompts to explore worry without judging myself. Keep each to one sentence and avoid medical terms."
Support prompt (to ask for help):
"Draft a short text to a friend asking for a 15-minute walk this week. Make it kind, specific, and easy to say yes to."
Resource prompt (reputable info only):
"List five beginner resources from major public-health agencies about improving sleep routines. Include links and a one-line note on what each page covers."
How to spot bad advice in two minutes
Absolute language.
If the chatbot says "always" or "never" about complex human problems, that's a tell. Real care has nuance. Look for words like "could," "may," and "depends," followed by explanations.
Source mismatch.
If you click a cited page and it doesn't match what the chatbot claimed, stop relying on that answer. Trust the source, not the summary.
Out-of-bounds advice.
If the bot starts offering clinical directives ("taper your dose," "stop therapy if you feel worse for a week"), it has crossed a line. Close it and contact a professional.
Shame-inducing tone.
Helpful guidance is specific and kind. If a response makes you feel small or scolded, it's not calibrated to support change. Ask for a different tone or seek human support instead.
Healthy ways to fold AI into real life
Set a timer.
Use a 10-minute timer for AI tasks so reflection doesn't become rabbit-holing. End with one small action you'll take offline.
Use it to bridge, not replace, connection.
Let the chatbot help you plan what to say, then actually text, call, or see the person. Social support is a protective factor no model can replicate.
Pair it with body cues.
If your heart is racing or your jaw is tight, put the keyboard down and try a short grounding exercise. After you settle, return to planning or journaling if needed.
Review weekly.
Once a week, check whether AI is helping you follow through on real-world steps like better sleep, a kept appointment, or a supportive conversation. If not, change how you use it or step away.
Where AMFM Healthcare fits
AMFM Healthcare and its subsidiaries (Mission Prep Healthcare and Mission Connection Healthcare) teach people to use technology as a bridge to real care, not a substitute for it. Our guidance emphasizes privacy, clear limits, and fast escalation to human help when needed. We believe good mental health support is still human work; AI just helps you organize your thoughts and get to the right door more quickly.
The bottom line
A general AI chatbot can help you clarify, prepare, and practice, but it shouldn't diagnose, treat, or handle emergencies. Keep your use case modest, protect your privacy, ask for sources, and have a human backup plan. If you feel unsafe or stuck, reach out to a person right away, including 988 in the U.S. With thoughtful boundaries, you can get value from AI without letting it run your mental health.



