AI Maturity in Government: Why Most Agencies are Stuck in the Middle


You don’t need more AI tools. You need to learn how to operate the ones you already have. Right now, most agencies aren’t set up to do that. They’re stuck.

You Have the Capability. You’re Not Operating It.

Today, most federal agencies already have access to powerful AI capabilities. The tools exist. The infrastructure is emerging. The use cases are growing. It’s like being handed the keys to a Formula 1 car.

And then… driving it like you’re stuck in traffic. Careful. Controlled. Limited. Not because the technology isn’t capable – but because the system around it isn’t ready.

This is the core issue: The challenge isn’t that agencies lack AI. It’s that they are not yet structured to operate with it.

The Illusion of Progress

At first, it feels like momentum. Teams begin experimenting with AI. Documents are summarized faster. Workflows become more efficient. Productivity improves. It looks like transformation. But in many cases it isn’t.

Because underneath all of it, nothing fundamental has changed. Same processes. Same approval chains. Same fragmentation across programs and systems. AI is just sitting on top – making things faster, but not fundamentally better.

Where Most Agencies Actually Are

In racing, there’s a difference between having a fast car and knowing how to drive it. Most agencies are somewhere in between. They’ve upgraded the technology, but not the operating model around it. This progression tends to follow a consistent pattern.

Stage 1: In the Garage (Exploration)

Agencies are testing capabilities: pilots and proofs of concept; isolated use cases; and early experimentation across teams. There is energy – but no consistency. Nothing scales.

Stage 2: On the Track – Playing It Safe (AI-Enabled)

AI is now part of the real work: supporting document review and summarization; assisting with analysis and reporting; and improving efficiency within existing workflows.

But the operating model hasn’t changed. So agencies remain cautious. AI is used in a supporting role. Decisions remain fully human-controlled. Processes are not redesigned around intelligence.

This is understandable in government environments, where accountability is critical, oversight requirements are high, and risk tolerance is low.

But the result is the same: AI improves productivity – but does not transform mission delivery.

Stage 3: Pushing the Car (AI-First)

This is where real change begins – and where many agencies hesitate. AI starts to influence how work is designed: Decision support becomes embedded earlier in processes. Workflows begin adapting to intelligence. Agencies start rethinking how outcomes are achieved.

This introduces new challenges: How do we trust AI-informed recommendations? How do we ensure explainability in decision making? How do we maintain accountability when AI is involved?

Most agencies don’t stall here because they can’t move forward. They stall because they are not yet structured – with governance, guardrails, and operating models – to do so confidently.

Stage 4: Racing (AI-Native)

At this point everything aligns: Technology, People, Processes and Governance. AI is no longer a tool layered into workflows. It is embedded across the mission lifecycle.

Examples begin to look like: end-to-end case management with AI support; intelligence-driven prioritization and resource allocation; and continuous learning loops across programs.

Humans and AI operate as a coordinated system. You are no longer “using AI.” You are operating with it.

Why Agencies Stay in First Gear

It’s not a technology problem. It’s an operating model problem. Agencies get stuck because:

  • AI ownership is fragmented across silos
  • Governance and guardrails are unclear or inconsistent
  • Trust varies across programs and leadership levels
  • Decision making remains centralized and slow

As a result: Progress fragments; Adoption becomes inconsistent; and Momentum slows.

Where Stewardship Actually Matters

In Formula 1, safety is not separate from performance. It is what makes performance possible. The same is true for AI – especially in government. Responsible AI is not about slowing agencies down. It is what allows them to move forward with confidence.

At different stages, it plays different roles:

  • Early: Prevents misuse and reduces risk
  • Middle: Creates consistency across teams and programs
  • At scale: Becomes infrastructure for how AI operates

Without it, agencies remain constrained. With it, they can scale.

The Real Shift

This is not about adopting better tools. It is about operating differently. Today, most agencies: Have access to AI; Have identified use cases; Have early successes.

But they are still structured for a pre-AI world. The real shift is from AI as a tool to AI as part of the operating model; from isolated use cases to mission-wide intelligence; and from static processes to adaptive, learning systems.

What Comes Next

Recognizing where you are is uncomfortable, but necessary. The next step is understanding what capabilities define each stage, where the gaps exist, and how to move forward intentionally.

Because success with AI in government is not defined by how quickly it is adopted. It is defined by whether agencies are designed to operate with it responsibly, transparently, and at scale.

This is the third article in the series, building on earlier discussion around Responsible AI and the shift from AI-enabled to AI-native – and focusing on where agencies actually are in their AI maturity.

Dan Foster has more than 25 years of experience in information technology and services, specializing in business agility transformation, Lean-Agile frameworks, and AI-enabled operating models. As a Transformation Leader at Snowbird Agility, Inc., he partners with executives, portfolios, and delivery teams to implement SAFe®, align strategy to execution, and improve flow, predictability, and measurable outcomes. He may be reached at [email protected]
