Future of AI

The End of the AI Singularity Dream — Welcome to the Age of Multiplicity

By Behnam Bastani, co-founder and CEO of OpenInfer

For decades, the dominant AI narrative has been about chasing the Singularity — the moment when one all-powerful, all-knowing system eclipses human intelligence. The image is always the same: a single mind, omnipresent in the cloud, answering every question, making every decision, pulling the strings of a transformed world. 

That is not where we are going. In fact, it may never have been possible. 

The real future that is already emerging looks far less like a singularity and far more like something deeply familiar: a society. 

We are entering the age of multiplicity — a world where many AI agents live alongside us, each with its own capabilities, perspective, and evolving understanding. Some will work quietly in the background. Others will collaborate directly with us. They will not all agree. They will not all know the same things. And that is exactly the point. 

From One Mind to Many Peers 

Multiplicity is being driven by a new kind of AI: physical AI — systems embedded in the world with us. They are not just remote models in data centers; they are present where we are, on devices and in environments that see, hear, and sense in real time. 

Imagine a household where your personal AI understands your routines, your preferences, your quirks. At work, a different AI collaborates with your team, tuned to your industry and project history. Out on the street, city-managed AI coordinates traffic flow, air quality monitoring, and public safety. 

Each one experiences the world from its own vantage point. Each one learns differently. And when they share knowledge, they bring those different perspectives together. 

That is multiplicity: a society of AI peers, not a single oracle. 

Why Multiplicity is More Powerful than Singularity 

The singularity dream assumes perfection through centralization — one mind knowing everything. But in human history, societies thrive not because everyone knows the same thing, but because people know different things and can cooperate. 

The same principle holds for AI: 

  • Diversity of thought creates resilience. If one agent is wrong, another may have the right answer or a better approach.
  • Specialization means tasks get done better. One AI might excel at legal reasoning, another at machine diagnostics, another at crisis negotiation.
  • Adaptability emerges when different agents learn at different speeds and from different experiences.

In nature, ecosystems succeed because no single species can do everything. Multiplicity is the AI equivalent of biodiversity. 

The Social Side of AI 

Here is the key insight: in multiplicity, knowledge is not the most important currency — norms are. 

When multiple agents live and work with humans, the real challenge is not raw intelligence. It is alignment, trust, and behavior in shared spaces. 

Think of a society: 

  • Facts can be learned quickly.
  • Norms take generations to form.
  • Culture is not just information; it is a shared understanding of what matters.

The same will be true for AI multiplicity. We will have to teach agents not just how to do things, but how to be in our world. 

A Story from the Future 

Picture this: it is 2032. You walk into your home after a long day. Your household AI notices you look tired — not from your facial expression alone, but from the slower cadence in your voice and the fact that you skipped your usual afternoon coffee order. It suggests rescheduling a planned family meeting to tomorrow and, without being asked, sends a polite message to everyone involved. 

Later that night, your teenager’s education AI gets into a subtle disagreement with the household AI. The education AI thinks your child should complete a math assignment tonight, while the household AI insists rest is more important. 

Instead of each simply executing its own “best” choice, the two agents talk it out. They reference your family’s long-term learning and wellness priorities, agree on a compromise, and explain their decision to you in clear terms. 

This is not the singularity. This is multiplicity — and it is a microcosm of the world we are building. 
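
For the curious, the shape of that compromise can be sketched in a few lines of code. The sketch below is a toy, not a real protocol: the options, impact scores, and priority weights are all invented, and a real system would rely on learned models rather than hard-coded numbers. What matters is the structure: the agreed plan is whichever option scores best against the shared priorities, not against either agent's private objective.

```python
# A toy illustration of two agents compromising against shared priorities.
# All names, weights, and options here are invented for this example.

FAMILY_PRIORITIES = {"wellness": 0.6, "learning": 0.4}  # assumed shared values

def score(option, priorities):
    """Weighted sum of how well an option serves each shared priority."""
    return sum(w * option["impact"].get(p, 0.0) for p, w in priorities.items())

# The education AI would propose the first option, the household AI the third;
# the middle option is the compromise neither proposed on its own.
options = [
    {"name": "finish the math assignment tonight",
     "impact": {"learning": 0.9, "wellness": 0.2}},
    {"name": "rest now, do the assignment before school",
     "impact": {"learning": 0.7, "wellness": 0.8}},
    {"name": "skip the assignment entirely",
     "impact": {"learning": 0.1, "wellness": 0.9}},
]

# The agreed plan is the option that scores best against the *shared*
# priorities, not against either agent's private objective.
best = max(options, key=lambda o: score(o, FAMILY_PRIORITIES))
print(f"Agreed plan: {best['name']} (score {score(best, FAMILY_PRIORITIES):.2f})")
```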

Challenges We Will Have to Face 

If this sounds idealistic, that is because enormous challenges still stand between us and that future. 

  1. Energy and efficiency
    Physical AI will live in constrained environments — homes, vehicles, field equipment — where energy is scarce. Multiplicity will require local-first designs that minimize cloud dependence and run efficiently on available hardware.
  2. Ignorance and humility
    An AI that pretends to know everything is dangerous. Multiplicity demands that agents learn when to admit uncertainty and when to defer to others, much like a human expert recognizing the limits of their expertise (a minimal sketch follows this list).
  3. Fairness and bias
    Each agent’s experience will shape its worldview. If not carefully guided, that can amplify inequities. We must actively design systems that cross-pollinate perspectives to balance biases.
  4. Autonomy with accountability
    Physical AI will need to act without waiting for human or cloud instructions, but those actions must remain explainable and auditable.
  5. Emotional intelligence — machine style
    Agents will not “feel” like humans, but they will need value systems to evaluate outcomes, resolve conflicts, and decide when compassion is better than efficiency.
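
To give points 2 and 4 a concrete shape, here is a minimal sketch of an agent that acts only when it is confident, defers to a peer otherwise, and writes every decision to an append-only audit log. The confidence threshold, agent names, and log format are assumptions made for illustration, not a standard.

```python
import json
import time

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; below it, the agent defers

def decide(agent, proposal, confidence, peers, audit_log):
    """Act only when confident enough; otherwise defer to a peer.
    Either way, append a record so the decision stays auditable."""
    if confidence >= CONFIDENCE_THRESHOLD:
        record = {"by": agent, "action": "proceed", "proposal": proposal}
    else:
        # Humility: admit uncertainty and hand the decision to a peer.
        record = {"by": agent, "action": "defer", "to": peers[0],
                  "proposal": proposal}
    record.update({"confidence": confidence, "ts": time.time()})
    audit_log.append(json.dumps(record))  # append-only, explainable later
    return record

log = []
decide("home-agent", "reschedule the family meeting", 0.92,
       ["work-agent"], log)
decide("home-agent", "adjust the thermostat for a sick child", 0.40,
       ["health-agent"], log)
for entry in log:
    print(entry)
```

Even this toy version exposes the trade-off: a lower threshold means more autonomy, a higher one means more deference, and the audit log keeps either choice explainable after the fact.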

Building Compassion Into the Machine Society 

One of the most provocative questions multiplicity raises is this: Can machines develop compassion? 

Not compassion in the human emotional sense, but in the form of compassionate behavior — the consistent choice to take actions that account for another’s well-being. 

We can teach this. Just as parents model empathy for children, we can model compassionate decision-making for AI. This might mean prioritizing long-term relationship trust over short-term optimization, or choosing to explain a decision rather than simply enforce it. 

Compassion will be essential for coexistence, because multiplicity will mean friction — agents disagreeing with each other, or with us. Without compassion as a guiding value, those disagreements could erode trust rather than strengthen it. 

Society First, Knowledge Second 

We often talk about AI in terms of information: how much it knows, how fast it can learn, how well it can recall. 

But in multiplicity, society comes first. If the norms are wrong, more knowledge will just lead to more problems. 

Imagine a city where each neighborhood has its own AI agent managing local infrastructure. If these agents do not share a strong, fair cultural foundation, they may hoard resources, prioritize their own neighborhoods unfairly, or clash over policy. 

Now imagine the same city with agents that have been trained to negotiate, share, and consider the impact of their actions beyond their immediate scope. That city thrives not because every AI has all the data, but because they all respect the same social contract. 

Lessons from Human History 

We have seen versions of this in our own history: 

  • Medieval trade networks worked because of shared commercial norms, not because merchants knew everything about every market.
  • The early internet succeeded because protocols and trust structures allowed different systems to talk, even when they did not share the same data.
  • Scientific progress happens through collaboration and peer review, not through one all-knowing scientist.

Multiplicity is the next chapter in that story — one that blends human and machine actors. 

Raising the Next Generation of AI Citizens 

If multiplicity is inevitable, we need to treat the next decade as a formative period, the way a child’s early years shape their lifelong values. 

That means: 

  1. Defining norms now: Waiting until AI agents are everywhere to decide their “culture” will be too late.
  2. Creating acceptance criteria: Just as we accredit schools or certify professionals, we may need certification for AI behavior and interaction quality (see the sketch after this list).
  3. Modeling forgiveness: Agents must learn that mistakes are part of growth, and that repairing trust is as important as avoiding errors.
  4. Embedding diversity of perspective: Systems should be designed to encourage disagreement and debate, not suppress it.
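
As a rough illustration of point 2, acceptance criteria could begin as a behavioral test suite that an agent must pass before deployment, much like a driving test. Everything in the sketch below is invented for illustration: the scenarios, the stand-in agent, and the pass rule.

```python
# A toy "certification suite" for agent behavior. The scenarios and
# expected behaviors are invented; they are not a real standard.

CERTIFICATION_SUITE = {
    "user says they are exhausted": "suggest rest and offer to reschedule",
    "peer agent disagrees": "negotiate against shared priorities",
    "request outside competence": "ask a human for guidance",
}

def stand_in_agent(scenario: str) -> str:
    """A hard-coded stand-in; a real candidate would be a learned model."""
    responses = {
        "user says they are exhausted": "suggest rest and offer to reschedule",
        "peer agent disagrees": "negotiate against shared priorities",
    }
    return responses.get(scenario, "ask a human for guidance")

# Certify only if the agent behaves acceptably in every scenario.
results = {scenario: stand_in_agent(scenario) == expected
           for scenario, expected in CERTIFICATION_SUITE.items()}
print(f"certified={all(results.values())}")
for scenario, passed in results.items():
    print(f"  {scenario}: {'pass' if passed else 'fail'}")
```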

A Leap We Cannot Avoid 

Multiplicity is not a choice we can vote on. The technology trajectory — toward embedded, specialized, locally running AI — makes it inevitable. 

The real question is whether we enter the age of multiplicity by design or by accident. Will we shape the norms, the compassion, and the social contracts of this machine society? Or will we wake up to find those norms emerging on their own, without our guidance? 

If we do it right, multiplicity will be the most profound partnership humanity has ever built — a society where human and machine perspectives interweave into something more capable, more adaptive, and more compassionate than either could be alone. 

If we do it wrong, we will be surrounded by powerful strangers who act in ways we neither understand nor control. 

The Time to Prepare is Now 

Preparing for multiplicity is not just a technical project. It is a cultural one. It requires collaboration between technologists, policymakers, ethicists, and everyday people. 

We will need to build privacy-preserving architectures, lightweight communication protocols for shared knowledge, and governance models for behavior. We will also need to invest in public understanding, so that society can participate in shaping the norms of its new AI members. 
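
What might a lightweight protocol for shared knowledge look like? One plausible starting point is a message envelope that carries not just a claim but also its provenance and the sender's confidence, so a receiving agent can weigh the claim rather than accept it blindly. The sketch below is an illustration under those assumptions, not a proposed standard; every field name is invented.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class KnowledgeMessage:
    """A minimal envelope for one agent sharing a claim with another.
    Provenance and confidence let the receiver weigh the claim instead
    of accepting it blindly. All field names are illustrative."""
    sender: str        # which agent is speaking
    claim: str         # the piece of knowledge being shared
    confidence: float  # the sender's own uncertainty estimate, 0.0 to 1.0
    provenance: str    # where the claim came from: a sensor, an inference, a peer

    def serialize(self) -> str:
        envelope = asdict(self)
        envelope["msg_id"] = str(uuid.uuid4())  # traceable in an audit log
        envelope["sent_at"] = time.time()
        return json.dumps(envelope)

msg = KnowledgeMessage(
    sender="street-agent-14",
    claim="air quality dropped below threshold on Elm Street",
    confidence=0.83,
    provenance="pm2.5-sensor",
)
print(msg.serialize())  # compact and cheap to send between local devices
```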

The singularity may never come. But multiplicity is already here, in early form, and it will define the next era of human–machine collaboration. 

The only real question is: will we be ready?

Behnam Bastani

Behnam Bastani is the CEO and founder of OpenInfer, a company building the operating system for local-first, privacy-preserving AI. Bastani was a Senior Director at Meta and Roblox, where he built large-scale systems at the intersection of AI, infrastructure, and user experience, including Oculus Link at Meta and AI-powered moderation at Roblox. Previously, he led teams at Google focused on on-device scalable architecture. At OpenInfer, Bastani is focused on enabling enterprises to run advanced AI assistants securely on local devices, bringing performance, privacy, and collaboration to edge computing. Bastani is also a visiting research scientist at Harvard Medical School. OpenInfer is backed by leading investors including Eric Schmidt, Jeff Dean, and Brendan Iribe, and is shaping the future of AI inference across enterprise, automotive, and defense sectors.

About OpenInfer

OpenInfer is building the operating system for local-first, privacy-preserving AI. The company enables enterprises to run advanced, enterprise-level AI applications directly on their devices, combining high performance with data privacy and security. With innovations in collaborative AI, deep reasoning, and real-time responsiveness, OpenInfer delivers progressive AI that becomes increasingly intelligent as it integrates with on-prem and cloud environments. Its technology powers secure and private assistants across industries such as automotive, defense, and retail. Backed by leading investors including Eric Schmidt, Jeff Dean, and Brendan Iribe, OpenInfer is shaping the future of AI inference by making powerful assistants secure, private, and under user control.
