Governments have a big opportunity with AI — here’s what they need to do

By Catherine Friday, EY Global Government & Infrastructure Leader

Governments around the world are striving to harness artificial intelligence to improve how they serve their citizens. Despite all the enthusiasm for AI, results on the whole have been mixed. Some governments have spent millions of dollars on technology initiatives without a clear roadmap and fallen short. However, a growing group of “AI pioneers” is showing how disciplined execution and thoughtful, easy-to-use design can translate into measurable public value.  

EY’s global research with Oxford Economics, which surveyed nearly 500 senior government executives across 14 countries, found that only about one in four AI pilots ever reaches production. The problem isn’t ambition or imagination. Many pilots are scrapped because of weak use cases, fragmented governance, and a lack of focus on constituents. 

A new EY report, From ideas to impact: A government leader’s guide to responsible AI implementation, informed by the research and successful use cases, sets out a structured, actionable roadmap for responsible and scalable AI implementation. Highlights include: 

1. Ask “why” before you ask “how”

Successful AI projects begin with a clear explanation of why they exist. For example, in the US state of Maryland, every proposed AI use case is evaluated against the governor’s 10 policy priorities, from education to transportation to workforce development. Projects that don’t directly support those outcomes are eliminated. 

Maryland’s disciplined process ensures that technology investments are aligned with broader policy goals. It also prevents the all-too-common trap of “technology in search of a problem.” Governments that identify challenges before finding solutions build stronger use cases and serve constituents better.  

The takeaway: treat AI not as a technical experiment but as a tool for advancing explicit policy objectives. When outcomes drive design, AI delivers measurable impact and earns public trust. 

2. Create scalability now — not later

Scaling AI responsibly means embedding transparency, ethics, and accountability at the start, not retrofitting them later. In 2020, Canada’s federal government launched the Algorithmic Impact Assessment (AIA) framework. Before any automated decision system goes live, agencies assess its potential effects on citizens across four risk levels, from low to very high, and tailor oversight accordingly. 

This framework has two advantages: It protects citizens by identifying risks early, and it streamlines implementation by preventing costly compliance fixes later. It has become a repeatable playbook that helps departments deploy AI faster and with more confidence. 

Governments everywhere can adopt similar mechanisms: consistent ethical reviews, clear data governance standards, and early engagement with regulators. These are not mere bureaucratic hurdles: they are the kind of risk-mitigating infrastructure that’s necessary to actually get public-sector AI off the ground.

3. Design around people and operations

AI works best when it supplements and sharpens, rather than replaces, human judgment. Australia’s Department of Home Affairs illustrates this through its “Targeting 2.0” program, which applies AI and predictive analytics to identify border-related risks. The initiative’s success depends on close collaboration between data scientists and frontline officers. 

Officers feed real-time observations into the model, while AI helps the officers detect anomalies and patterns more quickly. The result is faster decisions, improved accuracy, and stronger security outcomes — all achieved by enhancing human expertise, not automating it away. 

Meanwhile, Estonia’s Bürokratt platform provides a compelling example of designing for data minimization: data may be used only for the specific purpose for which it was collected, and access is restricted to the agencies that need it. Estonia also prioritizes data transparency, enabling citizens to monitor the use of their data and withdraw consent at any time (today, approximately 450,000 Estonians regularly check the data tracker). This decentralized approach improves trust and accountability, proving that efficiency and privacy can coexist when systems are designed around people and principles. 

When end users are part of development and testing, adoption rates soar, and trust in the technology follows. 

The path forward to meeting citizen expectations 

People compare public services to the best digital experiences in their daily lives. A generation that grew up with smartphones, online shopping and app-based services naturally wonders why its government can’t answer questions as quickly as ChatGPT does. 

From concerns over accountability, ethics and transparency to failures of communication and over-reliance on technology at the expense of people, there are real barriers for governments to overcome.   

But by taking the right steps, governments can overcome these barriers, close the expectations gap and unlock extraordinary public value. Maryland, Canada, Australia, and Estonia show that success starts before implementation: setting clear outcomes, embedding responsible governance, and designing around people. 

For public leaders, the imperative is to move from pilots to production — and to ensure that AI can go beyond the hype and make a real, measurable positive impact in the lives of citizens.   