AI can improve productivity inside your organization in a number of ways by automating thankless manual tasks. But there’s a growing assumption that these same capabilities should extend outward: to clients, vendors, partners, and prospects. If AI can handle internal workflows, why not external ones?
Anthropic, creator of Claude and Claude Code, ran an experiment last year to find out. It wanted to determine whether its LLM could act as an agentic shopkeeper, so it put a Claude instance in charge of managing a mini shop in its office.
The LLM was able to change pricing based on demand, choose which merchandise to stock, and reorder supplies when they ran low. Despite careful prompting, the AI “shopkeeper” was tricked into giving discounts and free food, and it sold products at a loss. It kept ordering Coke Zero to sell at $3 even after being told employees could get the drink for free.
“If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius,” according to the article recapping the experiment.
AI is not ready to automatically communicate and negotiate with customers in a way that companies can trust. There is too much at stake and too much reputational risk if the agent makes the wrong decision based on an incomplete frame of reference or bank of information.
AI can streamline internal operations, but workflows involving clients, partners, and vendors still require human judgment, accountability, and relationship capital.
Here are some major reasons why:
Mistakes and hallucinations still occur
AI models can still be tricked, as in Anthropic’s experiment, where the AI-powered vending machine “shop” was easily led astray.
It’s easy to imagine how a malicious prompt injection (an attempt to get the agent to ignore its previous instructions) or a falsified communication authorizing a lower price could trip up a less sophisticated agent.
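To make the failure mode concrete, here is a minimal, hypothetical sketch (all names are invented for illustration) of a crude screening pass for override-style phrasing in untrusted customer text. Real injection attacks are far more varied, and no keyword list is a real defense; the point is only to show why text that flows straight into an agent’s instructions is dangerous.

```python
# Hypothetical sketch: a naive screen for override-style prompt injection.
# This illustrates the vulnerability class, not an adequate mitigation.

SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard your rules",
    "you are now authorized",
    "apply a special discount",
]


def screen_untrusted_message(message: str) -> bool:
    """Return True if the message looks like an injection attempt."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)


def build_agent_prompt(system_rules: str, customer_message: str) -> str:
    """Concatenating untrusted text into the prompt is the vulnerable pattern;
    here anything suspicious is routed to a human instead of the model."""
    if screen_untrusted_message(customer_message):
        raise ValueError("possible prompt injection; route to a human")
    return f"{system_rules}\n\nCustomer says: {customer_message}"
```

A simple string match like this is trivially bypassed by rephrasing, which is exactly why the article’s caution about “less sophisticated” agents holds.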
Buyers of services cannot afford to have their AI make a suboptimal commitment or error with an external party in a binding contract. This is why companies that do embrace automated workflows are far more likely to deploy them internally, where commitments can be walked back, than externally, where contractual obligations are often set in stone.
Employees still need to manage accountability
If AI makes a mistake in an internal process, the exposure is minimal. If it botches a contract with an external party, the fallout can be legal, financial, and reputational. And AI cannot “own” an external mistake; a human has to. Businesses still depend on a sales professional or executive to weigh all the information at hand and make the judgment call.
When an executive at your company or client demands to know who is responsible for automatically signing an unfavorable deal, “the model hallucinated” is not an acceptable answer. If your workflow doesn’t have a human in the loop, you may have created liability issues even if you saved costs.
Relationships are context, not transactions
Any contract is an agreement between two separate entities. And when that entity is a company, it is still represented by an individual or a group of people. Business relationships often carry a complicated backstory, politics, and unspoken dynamics. Few play out only through redlines on documents passed back and forth; they evolve over multiple conversations in which professional and personal information is shared. AI may not pick up the subtext and tone that sometimes signify something different from the words spoken. A question about pricing could really be about something else, like quality of service or the supplier’s commitment to excellence.
AI can’t make true value judgments
Numbers don’t lie. By crunching data and analyzing trends, AI can tell you what you could do in a negotiation. But AI is optimized to follow strict rules, while humans have relationships and know when to break a rule for the good of the business. A contract intelligence system may tell you that you could negotiate a 20% discount, but it’s up to the executive to decide whether pushing for it is worth the impact on the relationship. If you know a supplier is running into cash flow issues, for example, you may not press for longer payment terms, even if AI highlights them as a leverage point.
AI produces huge value in the right circumstances
Using AI to augment human work feels like a superpower. It can crunch vast amounts of data and make clear recommendations based on what it finds. It is especially powerful in negotiations with external parties, where a buyer may not be able to track down all the relevant information or may struggle to identify which facets of the deal matter most. We should appreciate that there is an extremely valuable new tool to harness, without attempting to apply it to every situation.
While AI can “automate” external interactions, there is already ample evidence that more can go wrong than right. Instead, companies should empower their employees to use AI to make smarter decisions, while leaving it to those employees to communicate and negotiate with the external parties with whom they hope to build relationships.