The Rise of “Personal” AI
Conversational AI is no longer just a productivity tool. Platforms like ChatGPT, Gong, Lavender, and Humantic claim to understand us by analyzing tone, behaviors, and personality types to craft interactions that feel intuitive and emotionally aware.
Vendors make promises like “Skyrocket your cold emails with unique personalization” and “The most helpful email assistant on the planet,” but from inside the ecosystem, a different reality is emerging. What’s marketed as connection can often feel like profiling: intrusive, impersonal, even manipulative.
The tools are evolving fast, and so are our expectations. Users are no longer impressed by generic familiarity. They crave nuance, the kind that only real human interaction tends to deliver.
When Empathy Becomes Automation
We’ve all seen the now-familiar email:
“Hey [FirstName], I loved your recent post on [Topic]!”
What once seemed clever now reads like white noise. Personalization isn’t the problem; faking it at scale is. Some companies are already using AI detectors to weed out artificial personalization in cold outreach.
True empathy requires timing, intuition, and human context. AI doesn’t know whether your heartfelt post was scheduled by your assistant or whether your polished LinkedIn profile was written by a marketing company based on what sells. It can label you a “strategic thinker,” but it doesn’t understand that your tone changed after a personal loss or a professional shift.
When machines start mimicking empathy without earning the context, they cheapen the very thing they aim to replicate. Audiences are catching on and tuning out.
Profiling Without Permission
Here’s where trust begins to break: the data layer.
Many tools now scrape signals from public profiles, online behavior, even sentiment from past conversations. It’s not always illegal, but it is invasive. Users sense it, and they don’t like it.
“How did you know I just changed jobs?”
“Why are you referencing my tweet from two weeks ago?”
“How do you know I went there with my son?”
These moments feel unsettling. When personalization crosses into profiling, users pull back. Lawsuits such as Gong vs. Recall.ai hint at deeper concerns: reverse-engineered identities, unauthorized data ingestion, and unclear consent.
Even companies that claim they “do not sell your data” often include clauses allowing them to use behavioral data for product improvement and personalized feature training. For instance, Lavender.ai explicitly collects user email sentiment, tone, and behavioral data for benchmarking, product optimization, and even targeted advertising via platforms like Meta and Google. While they provide opt-outs for some uses, much of this profiling happens by default, often using third-party trackers and social pixels embedded throughout user sessions.
Even worse, breaches are inevitable. Imagine if call transcripts, voice analysis, or personal coaching footage were leaked. With little regulation, who takes responsibility?
The free-for-all of data scraping may not last much longer. In a recent move, Cloudflare announced it would begin blocking AI bot scrapers unless they pay a fee, effectively putting a price tag on data access. New customers are also being asked whether they want to block AI crawlers by default.
This marks a shift: from passive tolerance to active protection. As more companies start charging for or outright rejecting bot access, scraping large datasets for AI training will become costlier and legally riskier.
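To make the shift concrete, here is a minimal sketch of what “blocking AI crawlers by default” can look like at the application layer, assuming a Flask app and a handful of commonly published crawler user-agent strings; a real deployment would more likely rely on robots.txt directives or CDN-level rules like Cloudflare’s.

```python
# Minimal sketch: refusing requests from self-identified AI crawlers.
# The user-agent strings below are commonly published bot identifiers;
# any real site would need to maintain its own, up-to-date blocklist.

from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical blocklist of crawler user-agent substrings.
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "PerplexityBot")

@app.before_request
def block_ai_crawlers():
    ua = request.headers.get("User-Agent", "")
    if any(sig in ua for sig in AI_CRAWLER_SIGNATURES):
        abort(403)  # refuse the request rather than serve scrapable content
```

Like robots.txt, this only deters crawlers that identify themselves honestly, which is part of why CDN-level enforcement and paid access schemes are gaining traction.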
And if a breach does happen, it’s not just data that’s exposed — it’s reputations, relationships, sometimes even safety.
AI Misfires and Brand Damage
The risks aren’t just ethical; they’re operational.
ChatGPT occasionally veers off topic or offers incorrect advice. In B2B sales, that’s more than inconvenient. It can damage trust, confuse clients, or create legal liability.
When tools ingest and analyze calls, profiles, or video recordings, they create a goldmine of potential insight and potential risk. One wrong prediction, a misread tone, or a poorly summarized conversation can lead to reputational harm.
What’s worse, many users don’t realize these errors were AI-generated. They just know something felt “off.” The human on the receiving end may interpret it as carelessness, lack of expertise, or manipulation. And those impressions are hard to reverse.
Companies depending too heavily on automated personalization risk eroding the very trust they’re trying to build.
The AI Fatigue Curve Is Coming
AI is having its moment. Output is higher. Messages are faster. Meetings are summarized. Feeds are filled with synthetic thought leadership and well-formatted posts.
But the pendulum is swinging. People are starting to feel watched. Emails blur together. Messages sound polished but hollow. “Helpful” now reads as formulaic.
We’re reaching a tipping point where consumers and buyers are craving a return to something older: authenticity. They want slower, unscripted, imperfect human conversation — the kind that builds rapport and trust, not just conversion rates. This trend will grow.
In high-stakes B2B settings, trust closes deals, not just clever subject lines. The next wave of successful companies won’t rely on being the “most personalized.” They’ll stand out by being genuinely human.
It’s not about throwing out AI. It’s about knowing when to step in and be present.
What Comes Next?
Behind the scenes, AI vendors aren’t just innovating; they’re suing, acquiring, and racing to scale. Startups train models on scraped data or sources of unclear provenance. Users remain unaware of how they’re being profiled or monetized.
This isn’t theoretical. It’s happening now. It raises urgent questions:
- Who owns the insights AI gathers about us?
- What happens when profiling leads to bias and discrimination?
- What safeguards exist when personalization feels like surveillance?
- Will ads sneak into private chat tools?
- How do we audit systems trained on unverified psychological labels?
These aren’t academic debates. They’re the new ethical battleground. And the companies that lean into clarity, ethics, and respect for nuance won’t just survive the fatigue — they’ll rise above it.
Because while AI may scale communication, only humans can scale trust.