What a Failed Government Project Taught Me About AI Development

Now, as I'm learning about AI development, I realize those expensive lessons are exactly what every AI team needs to know.


The Project That Couldn't Ship

For 4.5 years, I worked on CAMPS (Consolidated Air Mobility Planning System) for the Air Force. We had a 90+ person development team working across multiple lines of business: SAAM, Coronets, Short Notice, Channels, Contingency, and several others in development.

Here's what we couldn't do: get a single line of business across the finish line and into users' hands.

When I got hired three years into what was already a five-year contract, all I heard was that we should have focused on completing one line of business first to prove we could actually deliver. But we kept getting bogged down by endless requirements, constantly shifting stakeholder demands, and users who refused to leave their legacy systems because our application "didn't do everything it should."

No one from the top ever came down and said, "This is the deadline. Move over to the new system and keep using your spreadsheets for what we haven't built yet."

The line in the sand kept moving. And moving. And moving.

At the time, it felt like the most frustrating experience of my career. Looking back now, as I'm deep in learning about AI development and UX for AI systems, I realize we learned some incredibly expensive lessons that every AI team desperately needs to understand.

The Pattern We Missed (That AI Teams Are Missing Too)

Here's the thing that took us years to figure out: everyone was extremely siloed. All the users thought that THEIR line of business was so special and unique to them. Only at the very end did our program managers (retired Air Force, of course) understand that ALL these lines of business followed the same basic pattern.

Every single one had: requirement → allocation of assets → planning process → execution.

Every. Single. One.

They all had nuances, sure, but in the big picture, that was the fundamental system. If we had known that from the beginning and taken a systems approach instead of trying to build everything custom for everyone, we could have built one solid foundation and adapted from there.
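
To make that concrete, here's a minimal sketch of what "one solid foundation, adapted from there" could have looked like. It's purely illustrative; the names (MissionWorkflow, ChannelWorkflow, the dictionary fields) are my own assumptions for this post, not anything we actually built on CAMPS.

```python
from dataclasses import dataclass, field
from typing import Optional

# One shared skeleton: requirement -> allocation of assets -> planning -> execution.
# Each line of business overrides only the steps that are genuinely different.
@dataclass
class MissionWorkflow:
    requirement: dict
    allocated_assets: list = field(default_factory=list)
    plan: Optional[dict] = None
    executed: bool = False

    def allocate_assets(self, available: list) -> None:
        # Default allocation: any asset that matches the requested asset type.
        self.allocated_assets = [
            a for a in available if a["type"] == self.requirement["asset_type"]
        ]

    def build_plan(self) -> None:
        self.plan = {"assets": self.allocated_assets, "window": self.requirement["window"]}

    def execute(self) -> None:
        self.executed = self.plan is not None


class ChannelWorkflow(MissionWorkflow):
    # Hypothetical nuance: channel missions only use assets on recurring routes.
    def allocate_assets(self, available: list) -> None:
        super().allocate_assets(available)
        self.allocated_assets = [a for a in self.allocated_assets if a.get("recurring")]
```

The point isn't the code. The point is that the shared skeleton gets built and proven once, and each line of business only pays for its genuine nuances.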

Sound familiar? Because I'm seeing the exact same pattern in AI development right now.

What AI Development Actually Requires

I'm currently deep in learning about UX for AI—chatbots, agentic AI, data management, the whole ecosystem. And everywhere I read about building AI solutions, the advice is the same: be nimble, involve users directly IN the prototyping process to get immediate feedback, be Agile and Lean. It's the only way to make sure you're on track for developing AI solutions that actually work.

But here's what I'm seeing in the AI space that mirrors our CAMPS disaster: stakeholders who just "want AI solutions" and "want efficiency." They want to go back to Congress (or the board, or investors) and prove how much money was saved, how many manual jobs were automated, how awesome the data management is now.

But you can't just take a sledgehammer to existing workflows in the AI world. It's dangerous. In safety-critical domains, over-automation can have serious, even fatal consequences, as aviation has already shown us. It can corrupt critical data and frustrate users with AI UX that doesn't actually help them do their jobs better.

The Human Role Has Fundamentally Shifted

Here's what I'm learning that directly connects to our CAMPS failure: the human user's role has completely changed. We're no longer the "doers" of the work. We've transitioned into reviewers and decision-makers only.

This means we still need deep knowledge to check AI's work, but we need to understand the system at a much higher level. We need the macro/micro awareness I never had to develop at CAMPS: understanding the client's entire system while also being able to zero in with Agile practices to identify where AI delivers the most value and where humans need to make critical decisions.

At CAMPS, we never developed that systems thinking. We got lost in the weeds of individual stakeholder requests without understanding the overarching patterns. Now I see why that was such a critical failure.

What CAMPS Should Have Taught Us (And What AI Teams Need to Know)

If I could go back and redesign our approach to CAMPS with what I know now about AI development principles, here's what we should have done:

Started with system understanding, not feature requests. We should have mapped out that requirement → allocation → planning → execution pattern first, then built one solid version of that workflow before trying to customize for every stakeholder.

Gotten one line of business working well before expanding. Instead of trying to solve everything at once, prove the concept works with real users in a real workflow, then adapt and scale.

Built in rapid feedback loops from actual users. Not quarterly stakeholder meetings where everyone argues about requirements, but daily or weekly check-ins with people actually doing the work.

Designed for humans as intelligent reviewers, not task-doers. We were still designing interfaces for people to manually input data and manage processes. We should have been designing for people to review, validate, and make decisions about system-generated recommendations.
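
To illustrate that last point, here's a minimal sketch of an interaction designed around review and decision-making rather than data entry. It's a toy example built on my own assumptions; the Recommendation fields and the approve-or-return loop are mine, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str       # what the system proposes to do
    rationale: str     # why, so the reviewer can actually judge it
    confidence: float  # how sure the system is, surfaced rather than hidden

def human_review(rec: Recommendation) -> bool:
    """Nothing executes until the human reviewer explicitly approves it."""
    print(f"Proposed:   {rec.summary}")
    print(f"Rationale:  {rec.rationale}")
    print(f"Confidence: {rec.confidence:.0%}")
    return input("Approve? [y/N] ").strip().lower() == "y"

rec = Recommendation(
    summary="Allocate 2 aircraft to the short-notice requirement",
    rationale="Both meet the movement window and have no conflicting taskings",
    confidence=0.82,
)
if human_review(rec):
    print("Executing plan...")
else:
    print("Returned to the planner for revision.")
```

The design choice that matters: the system's rationale and confidence are surfaced to the reviewer, and execution is gated on an explicit human decision.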

The Stakes Are Even Higher Now

The difference between CAMPS and AI development is that the consequences of getting it wrong are exponentially higher. A failed government project costs taxpayer money and frustrates users. A failed AI system can cause actual harm—bad medical diagnoses, biased hiring decisions, safety-critical failures.

But the principles for success are the same: understand the system, start small, iterate with real users, and design for the actual human role in an AI-augmented workflow.

I spent 4.5 years learning these lessons the hard way on a project that never shipped. AI teams have the opportunity to learn from those expensive mistakes without paying the same price.

The question is: will they take the systems approach from day one, or will they repeat our pattern of trying to solve everything for everyone all at once?

Because I can tell you from experience—that way lies madness, frustration, and a lot of taxpayer money spent on systems that never see the light of day.

What patterns are you seeing in AI development that echo traditional software project failures? How are you building systems thinking into your AI development process?

*In the spirit of the transparency I advocate for in AI development, I worked with Claude to help structure and refine these reflections on my experience.

Kathryn Neale