
The Goldfish Problem
You’ve had 17 meetings this week. You’ve Slacked, Zoomed, whiteboarded, and taken notes. Everyone is moving fast. But when it’s time to make a decision (or revisit one), it feels like no one remembers what actually happened.
AI was supposed to fix this, and in some ways, it has. We summarize faster, debug better, and even write performance reviews with slightly less dread. The pace of work has accelerated, but the real problem, the one that drags us into repeated meetings with vague action items, isn’t that we work too slowly. It’s that we forget too quickly.
Today’s GenAI tools are like goldfish that remember only what’s right in front of them. Some large language models can simulate memory with long context windows, retrieval methods, or plugins. But when the session ends, so does most of the meaning. No nuance accumulates. No real understanding forms.
Andrej Karpathy said it best: “LLMs are still autocomplete engines with perfect recall and no understanding.” Until we find that cognitive core (intelligence with true memory), they’ll remain brilliant mimics, not minds.
That mimicry isn’t even a competitive advantage anymore. When everyone has access to the same tools (ChatGPT, Claude, Gemini, and others), no one stands out. We’re accelerating the fragments of work, but the structure of work itself remains broken. Writing your email faster won’t save you.
Everyone Has AI. So, Why Does Work Still Feel Broken?
AI is now embedded in nearly every app, document, and coding tool. The productivity boost is real, but the collective impact is shallow. Everyone is summarizing faster, writing better, and debugging with ease.
Yet the playing field has only become more crowded, not more coordinated.
We’ve sped up the surface layers of work (emails, comments, drafts), but the real work happens in the messy middle. That’s where alignment, prioritization, emotional buy-in, and decision carryover live. And that’s where things often fall apart.
The biggest blocker isn’t task completion; it’s the lack of shared understanding. One person believes a decision is final, while someone else is still unconvinced. A Slack thread quietly unravels what a Zoom call seemed to conclude.
GenAI can’t help much here. It’s built to assist individuals, not teams. It handles tasks, not trust. The challenge isn’t “Can this AI summarize what we said?” It’s “Can this system help us carry that conversation forward next week, with clarity and context intact?” Most of the time, the answer is no.
Imagine your team debates Q4 priorities for 45 minutes. The AI summarizes it perfectly. Two weeks later, Engineering builds Feature X while Product roadmaps Feature Y. Both point to the same meeting notes. The summary was accurate but flattened the disagreement that mattered.
A Stats 101 Problem, Not a Model Problem
Today’s models are cognitively limited. They don’t reason. They don’t remember. They start from zero every session, with no process for folding insights back into their internal structure. What they hold is a blurred pattern map of the internet, not an actual model of the world.
They replicate one part of the brain by recognizing patterns, but miss the rest: memory, emotion, and instinct. They memorize perfectly but generalize poorly. Feed them random numbers and they’ll recite them flawlessly, but they can’t find meaning in the unfamiliar.
Humans forget just enough to be forced to reason, to synthesize, to seek patterns. LLMs, by contrast, average when they should analyze. When asked to summarize a discussion, they flatten all the inputs, emotions, and tensions into a single mean. But the mean often misses what matters.
The real shape of conversation isn’t a line graph. It’s a violin plot, bulging where people cluster, narrowing where things get sparse, stretching wide where disagreement is loud. It’s messy but real.
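To make the mismatch concrete, here’s a minimal sketch with made-up sentiment scores (the -2 to +2 scale and the two teams are hypothetical, purely for illustration) showing how an average can hide exactly the disagreement that matters:

```python
# Two sets of meeting sentiment scores on a hypothetical scale:
# -2 = strongly against, +2 = strongly for.
from statistics import mean, pstdev

aligned   = [0, 0, 0, 0, 0, 0]        # genuine indifference
polarized = [2, 2, 2, -2, -2, -2]     # loud, unresolved disagreement

for label, scores in [("aligned", aligned), ("polarized", polarized)]:
    print(f"{label:>9}: mean={mean(scores):+.1f}, spread={pstdev(scores):.1f}")

# Both means come out to 0.0. A summary that reports only the "average
# sentiment" flattens the polarized room into the indifferent one; the
# spread (the shape of the distribution) is where the signal lives.
```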
Most GenAI tools strip this shape away. They turn dynamic, emotional, high-variance conversation into a single, flattened paragraph. In doing so, they erase the signals we rely on to make smart decisions. The problem isn’t that LLMs are dumb; it’s that we’ve applied them to deeply human problems (teamwork, memory, context) without acknowledging the mismatch. We flattened the shape of thinking, and that shape is where the insight lives.
Beyond the Goldfish
We used to talk about “institutional memory” as something you earned. Long-tenured employees carried it in their heads. They remembered what happened five reorgs ago, why a product line got cut, and which relationships quietly kept the lights on.
But relying on people to be your memory has limits. People leave. They forget. Their perspective narrows. The most important context often vanishes when they walk out the door. Institutional memory should be a system, not a person.
If today’s AI feels like a goldfish, the answer isn’t to make the goldfish faster. It’s time to rethink how memory should work inside teams. Memory-native AI treats knowledge as a living system. It captures what was said, how it was said, who said it, and how that evolved over time. It asks not just “What did we decide?” but “How did we get there, and what might we have missed?”
Instead of focusing on generation, this new class of AI focuses on connection. It links a team’s thinking, emotions, and decisions into one evolving memory. It becomes the infrastructure that makes organizational intelligence compound instead of decay.
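As a thought experiment, a memory-native record would look less like a transcript and more like a linked data structure. The sketch below uses hypothetical field names and an invented example entry; it isn’t any particular product’s schema, just one way to picture knowledge that accumulates instead of evaporating:

```python
# Hypothetical sketch of a memory-native record. Field names and the
# example entry are illustrative only.
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    speaker: str                  # who said it
    statement: str                # what was said
    sentiment: str                # how it was said ("confident", "skeptical", ...)
    timestamp: str                # when it was said
    supersedes: list[str] = field(default_factory=list)         # earlier entries this revises
    related_decisions: list[str] = field(default_factory=list)  # decisions it touches

# Because entries link to the decisions and prior statements they touch,
# the system can answer not just "What did we decide?" but
# "How did we get there, and what has changed since?"
q4_debate = MemoryEntry(
    speaker="eng-lead",
    statement="Feature X should ship before Feature Y.",
    sentiment="skeptical",
    timestamp="2024-10-02T15:04",
    related_decisions=["q4-roadmap"],
)
```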
What’s Next
Companies spend thousands of dollars per employee every year simply reconstructing knowledge that should have been captured. When someone leaves, a quarter of institutional memory leaves with them.
Meanwhile, intelligence has become commoditized. Everyone has access to the same models. The real competitive advantage isn’t in having AI; it’s in what your AI remembers about your business, your team, and your customers.
Organizations that build systems capable of remembering are accumulating proprietary intelligence that competitors can’t replicate. While others continually reconstruct the same knowledge, they’re building on years of accumulated understanding.
We’ve spent years teaching AI to talk and to reason. Now we need to teach it to remember. The problem at work isn’t speed. It’s forgetting too quickly. It’s failing to carry forward the emotional and contextual weight of decisions.
The future of AI isn’t speed. It’s memory. Because memory is how we stop repeating ourselves and start building something that lasts.