
A Different Kind of Technology Moment
We’ve been through major technology shifts before. The internet, mobile, cloud: all of them changed how systems work and how people interact with institutions.
AI feels different.
Not just because of speed or scale, but because it’s starting to sit inside decisions that matter: medical recommendations, financial assessments, hiring filters, even the information people use to understand the world around them.
That raises a harder question than “what can AI do?” It’s closer to: what can we trust it to do? And under what conditions?
That question sits at the core of America at 250: A Beacon for the AI Age, a book by Governor Michael Dukakis and Nguyen Anh Tuan, which looks at AI less as a tool and more as a force reshaping how trust itself works in society.
The book was introduced on May 1, 2026, at Harvard University’s Loeb House as part of a broader Boston Global Forum initiative focused on building trust architecture for the AI age (https://bostonglobalforum.org/?s=america+at+250).
We’ve Been Talking About Principles But Not Building Systems
For the past few years, most AI governance conversations have sounded familiar: fairness, accountability, transparency. Important ideas, but ones often discussed at a very high level.
The problem is that principles don’t scale on their own.
Different organizations interpret them differently. Some apply them seriously, others treat them as signaling. There’s very little consistency and even less measurement.
Meanwhile, AI systems are moving into areas where inconsistency isn’t a minor issue. It’s a structural risk.
What America at 250 does well is call this out directly:
If AI is becoming infrastructure, then trust must become infrastructure too.
As Nguyen Anh Tuan, CEO and co-chair of the Forum, has argued, “in the AI age, trust is no longer a principle, it’s infrastructure. And if we don’t build it deliberately, we will lose it by default.”
From Ideas to Architecture
One of the more useful contributions in the book is the idea of a “trust architecture.”
Instead of treating trust as a checklist, it’s broken into three connected layers: standards (what trust requires), infrastructure (how it’s implemented and maintained), and order (how it scales across systems and countries).
This isn’t just theory. It’s a way of designing systems so that trust isn’t dependent on intention alone but built into how those systems actually operate.
That shift, from intention to structure, is where many current AI governance efforts fall short.
Why Standards Alone Don’t Solve the Problem
There’s been no shortage of AI guidelines published in recent years. Governments, companies, and institutions have all introduced their own versions.
The issue isn’t lack of effort. It’s fragmentation.
Without shared, operational standards, it’s difficult to compare systems, enforce accountability, or even agree on what “trustworthy” means in practice.
The framework discussed in America at 250 leans toward something more practical: standards that are meant to be applied across sectors and, importantly, measured.
If trust can’t be evaluated, it becomes subjective. And subjectivity doesn’t scale well in systems that affect millions of people.
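To make that less abstract, here is a minimal sketch, in Python, of what an operational, measurable standard could look like, assuming a hiring-filter context. The demographic parity metric, the sample data, and the 0.2 threshold are illustrative assumptions, not proposals from the book.

```python
# Minimal sketch: one way a "measurable" trust standard might look in practice.
# The metric (demographic parity difference) and the threshold are illustrative
# assumptions, not standards proposed in America at 250.

def demographic_parity_difference(outcomes, groups):
    """Absolute gap in positive-outcome rates between groups.

    outcomes: list of 0/1 decisions (e.g., 1 = advanced past a hiring filter)
    groups:   list of group labels, one per outcome
    """
    rates = {}
    for label in set(groups):
        selected = [o for o, g in zip(outcomes, groups) if g == label]
        rates[label] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical audit: flag the system if the gap exceeds a published threshold.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(outcomes, groups)
THRESHOLD = 0.2  # illustrative; a real standard would set and justify this
print(f"parity gap = {gap:.2f}, compliant = {gap <= THRESHOLD}")
```

The point isn’t this particular metric. It’s that once a standard is expressed as a number with a published threshold, two organizations can be compared and held to the same line.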
Trust Is Not a One-Time Certification
Even if we agree on standards, there’s still a second challenge: systems change.
Models drift. Data shifts. Context evolves.
So the idea that an AI system can be “certified” once and then left alone doesn’t really hold up.
This is where the infrastructure layer becomes important. Trust must be maintained through monitoring, feedback loops, and clear accountability structures.
In other words, trust isn’t declared. It’s maintained.
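As one hedged illustration of what ongoing maintenance could involve, the sketch below runs a drift check against a baseline, assuming a model that outputs numeric scores. The Population Stability Index and its common 0.2 alert threshold are industry conventions used here for illustration; the book does not prescribe a specific metric.

```python
# Minimal sketch of "maintained, not declared": a recurring drift check
# rather than a one-time sign-off. The PSI metric and the 0.2 alert
# threshold are common industry conventions, used here as assumptions.
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and live data."""
    lo, hi = min(expected), max(expected)

    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / (hi - lo) * bins), 0), bins - 1)
            counts[idx] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                # scores at certification time
live = [min(i / 100 + 0.15, 1.0) for i in range(100)]   # shifted production scores

score = psi(baseline, live)
print(f"PSI = {score:.3f}, drift alert = {score > 0.2}")  # > 0.2: investigate
```

A check like this only matters if it runs continuously and someone is accountable for acting on the alert, which is exactly the infrastructure layer the book describes.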
The Pressure Point: Information
If there’s one area where the trust problem becomes immediately visible, it’s information.
AI-generated content is improving fast. Text, images, video: all of it is becoming easier to produce, cheaper to scale, and harder to verify.
That creates a strange dynamic: more information than ever, but less certainty about what’s real.
The book points toward the need for systems that can establish provenance, label synthetic content, and build accountability into digital platforms.
Without that, trust doesn’t just weaken; it fragments.
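For a sense of what provenance machinery can look like, here is a simplified sketch loosely inspired by manifest-based standards such as C2PA. The manifest fields, the shared-key HMAC signature, and the "example-model" label are illustrative assumptions; production systems use public-key signatures and standardized metadata.

```python
# Minimal sketch of content provenance: bind a signed, labeled manifest to
# the exact bytes of a piece of content. Greatly simplified; real systems
# (e.g., C2PA) use asymmetric signatures and richer metadata.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # hypothetical; a real signer would use asymmetric keys

def make_manifest(content: bytes, generator: str, synthetic: bool) -> dict:
    """Attach a provenance claim to content, including a synthetic-content label."""
    claim = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "synthetic": synthetic,  # explicit label for AI-generated content
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["sha256"] == hashlib.sha256(content).hexdigest())

article = b"An AI-written summary of today's news."
manifest = make_manifest(article, generator="example-model", synthetic=True)
print(verify(article, manifest))         # True: intact and labeled
print(verify(article + b"!", manifest))  # False: content was altered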
Trust Doesn’t Stop at Borders
Another point the book makes is that none of this is contained within a single country.
AI systems move across borders. So do data, platforms, and influence.
That makes purely national approaches to governance incomplete by definition.
The idea of a “Trusted Order” is an attempt to address that: some level of shared framework that allows different actors to cooperate without constantly resetting the rules.
Why This Feels Urgent Now
AI capabilities are accelerating, but governance is still catching up. That gap is where most of the risk sits.
At the same time, there’s a window. Because AI is still being built into infrastructure, there’s an opportunity to shape how trust is embedded from the start.
A More Grounded Way to Think About AI
America at 250 shifts the conversation away from hype cycles and toward systems.
Away from abstract ethics and toward implementation.
Away from isolated solutions and toward structure.
If AI is going to be part of how societies function, then trust can’t remain a vague expectation. It must be designed, built, and maintained deliberately.


