
Just another day at work. You prompt a GenAI tool to draft a board update: a quick recap of company performance, key metrics and upcoming priorities.
Most of it looks solid. But one chart pulls last year’s data instead of this year’s. A bullet point misstates your growth rate. And the closing paragraph promises a product launch that’s actually been delayed.
Is it good enough? Maybe as a first draft. But if that version makes its way to the board, it’s not just sloppy. It’s a credibility killer.
The Rise of ‘Almost Right’ AI
Generative and agentic AI tools are proving to be incredible time-savers and strategic assets, helping businesses streamline workflows, surface trends and insights faster, and scale content creation. The upsides are tangible and obvious, so now the content floodgates are wide open.
But not enough people are pausing to ask: “Does this need to exist?” Or: “Is it actually right?”
And too often, AI output gets accepted at face value.
So across business functions, there’s been a surge in content that looks good on the surface — slick, well-formatted and seemingly on-point. Look a little closer, though, and the veneer starts to crack.
When 10% Wrong = 100% Risk
Depending on the prompt and training data, AI-generated content is often about 90% right… but the other 10% isn’t just typos or formatting glitches. As in the board example above, it often includes material inaccuracies: a revenue figure off by a decimal point, a misread of industry nuance, or a hallucinated stat or statement that simply isn’t true.
These aren’t just internal speed bumps. They become external liabilities, especially when public-facing content contains gaffes. (“Does that company even know what they’re talking about?!”)
That’s why content that’s 90% right and 10% wrong is effectively 100% useless. That final 10% is where trust unravels, and the damage is done.
Many of us have seen the fallout: summer reading lists published by major news outlets recommending made-up books, hallucinated legal citations submitted in court filings, academic research riddled with fabricated references. The pattern is the same: 90% right (if that), 10% wrong, 100% damaging.
The stakes are especially high in my line of work, partnering with investor relations (IR) professionals, where every detail counts. A single misstatement in an earnings call script or shareholder letter can shake investor confidence, impact valuation or even move markets.
Historically, IR has been more cautious than other finance functions about adopting AI, but that’s changing fast. There’s enormous potential, in IR and beyond, if we use AI correctly.
AI Is the New GPS: Make Sure It’s Pointing the Right Way
We have to get this right because AI is no longer a novelty. It’s as ubiquitous in business as GPS is in everyday life. And like GPS, it’s a powerful tool, but only if it’s taking you in the right direction.
The goal isn’t to let AI shortcut strategy; it’s to help guide it. Here are a few ways to make that happen, so you can move beyond “good-enough” output toward content that’s 100% worth standing behind.
- Know what “good” looks like, and set the bar there. Don’t treat the first draft as the final one. AI output should be judged by the same standards you’d apply to a trusted colleague’s work. Make sure it’s clear, accurate, audience-appropriate and, above all, useful. If it wouldn’t fly in a meeting or on a client call, it’s not ready.
- Put humans (with context) in the loop. Quality review isn’t just about grammar or style adherence. Assign owners who understand the context — financials, industry tone, stakeholder priorities, etc. — and give them time to sanity-check the substance. This is where that risky 10% with subtle errors and hidden biases gets caught. Humans also bring expertise, lived experience and creativity in ways AI often can’t.
- Reward clarity, not verbosity. Don’t mistake more pages for more value. A sharp four-pager that’s useful and accurate wins every time over a 20-page deck that rambles, repeats or quietly misleads.
- Use AI models that know your business. Generic LLMs can’t always grasp the nuance of your industry or role. Instead of just feeding them context after the fact, use purpose-built models trained on your domain, whether that’s IR, finance or another high-stakes function. These systems should be secure by design and configured to pull from trusted sources, so sensitive data stays protected and outputs align with your standards.
- Train AI agents, not just prompts. Once they’re trained on your data and workflows, agentic systems can go beyond content generation to orchestrate key tasks: reconciling numbers, flagging discrepancies, and deferring to human reviewers when confidence is low, so small errors don’t become big problems. In IR, for example, domain-specific agentic AI is already being used to validate earnings data, spot inconsistencies in shareholder communications, and support personalized investor outreach with timely and data-backed talking points.
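To make the “defer when confidence is low” idea concrete, here is a minimal, hypothetical sketch of how an agentic workflow might route a drafted claim: reconcile it against a system of record, flag mismatches, and hand anything low-confidence to a human reviewer. The Claim structure, the route_claim helper and the 0.9 threshold are illustrative assumptions, not a description of any specific product or vendor API.

```python
# Minimal sketch (assumed names, not any vendor's actual API): confidence-gated review routing.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str          # e.g., "Q2 revenue grew 14% year over year"
    value: float       # the number the draft asserts
    confidence: float  # the agent's own confidence that the claim matches source data

def route_claim(claim: Claim, source_value: float, threshold: float = 0.9) -> str:
    """Reconcile a drafted figure against the system of record and decide who signs off."""
    if abs(claim.value - source_value) > 1e-9:
        return f"FLAG: '{claim.text}' disagrees with source value {source_value}"
    if claim.confidence < threshold:
        return f"DEFER: '{claim.text}' routed to a human reviewer (confidence {claim.confidence:.2f})"
    return f"PASS: '{claim.text}' verified against source data"

# Example: the drafted growth rate doesn't match the reconciled number, so it gets flagged.
print(route_claim(Claim("Q2 revenue grew 14% year over year", 0.14, 0.95), source_value=0.12))
```

The point isn’t the code itself; it’s that the escalation path to a human is designed in from the start, rather than bolted on after something goes wrong.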
AI Is Your Copilot (Not Autopilot)
Like any tool, AI is only as useful as the way we use it. If the inputs are off — or worse, no one checks the outputs — then we’re really just speeding off in the wrong direction.
The real opportunity comes when businesses combine AI’s potential with human judgment: training teams to use their tools well, ask smart questions, scrutinize the results and build in checkpoints that catch mistakes before they matter.
Get that right, and AI stops being a content factory that churns out work of dubious quality. Instead, it becomes a driver of trust, clarity and 100% valuable work.



