
When Everything I Learned from CAMPS Failure Finally Made Sense: The Operation Helping Hand Story


Or: How I turned years of Air Force frustration into a breakthrough that changes everything about mission planning


Here’s the thing about working at the 618th AOC for years: You don’t just learn what’s broken. You learn WHY it’s broken. You sit with the operators. You watch the Tetris game. You document the 70-80% mission failure rate. You feel the weight of it.

And then you leave. And you think you’ve moved on. But the problem doesn’t leave you—it just… sits there. Waiting. Fermenting. Until you finally have the tools to solve it.

That’s what my prototype, Operation Helping Hand, is: the answer to a question that’s been bothering me for YEARS.

The Reality I Couldn’t Forget

70-80% of planned missions fail or require complete reworking within 24-72 hours on the execution floor.

I’ve written that statistic before. But I need you to really sit with it for a second. Seven out of ten missions. Every single day. Meticulously planned by highly trained operators using systems built 20-30 years ago. And then… they fail. Or need complete reworking. Within days. Sometimes hours.

Why?

Because legacy systems force humans into a constant manual Tetris game. New 1A1 priority missions drop with zero notice. Equipment fails mid-execution. Weather changes. Airfields get damaged. And suddenly every carefully planned mission needs to be manually reshuffled—asset by asset, mission by mission, constraint by constraint.

The operators aren’t the problem—they’re incredibly skilled. But they’re being asked to do something human brains simply cannot do at scale: process massive amounts of constantly changing data while simultaneously making strategic decisions about global operations.

This is exactly the kind of challenge that contributed to killing CAMPS. We spent 4.5 years trying to build automation, but our designs still forced humans to BE the data processors. Just… slightly more efficient data processors with better interfaces.

Traditional sequential processing simply cannot handle the complexity of global airlift operations.

And I knew this. I KNEW this. But I didn’t have a solution. Not until I learned to build AI agent teams.

The “What If” That Wouldn’t Let Me Go

So here’s what kept nagging at me (because that’s what my brain does—it grabs onto problems and won’t let go until I’ve turned them over from every angle):

What if the problem isn’t that we need BETTER automation? What if we need FUNDAMENTALLY DIFFERENT architecture?

What if—instead of one system trying to do everything sequentially—we designed specialized AI agents that work in parallel, just like real operations SHOULD work?

What if AI could handle the complexity (the constant data processing, the parallel constraint checking, the pattern recognition across hundreds of missions) while humans handle the judgment (the strategic prioritization, the risk decisions, the chain-of-command escalation)?

Not AI replacing humans. AI transforming the ROLE of humans from data processors to strategic decision-makers.

I built Operation Helping Hand to test this. A simulated Pacific earthquake response. Six specialized AI agents working together. Real operational complexity. Real decision points. Real demonstration of what becomes possible when you stop trying to force human brains to do computer work.

Meet the Team (And This Is Where It Gets Interesting)

This isn’t one AI trying to do everything. It’s a specialized team working in parallel—just like real operations:

Requirements Agent – Processes mission requests, prioritizes 1A1s vs training missions. This agent understands the difference between “this must fly NOW” and “this can wait if needed.” It doesn’t just accept mission requests—it actively PRIORITIZES based on real operational criteria.

Barrel Agent – Allocates global airlift assets (C-5s, C-17s, C-130s). This is the agent that would have understood “Barrel Master” terminology from Week 1 of my course. It knows asset capabilities, availability, positioning. It’s not just assigning tails—it’s doing strategic allocation.

Planner Agent – Handles operational constraints (weather, hazmat, crew hours, airfield damage). This is where the rubber meets the road. All those real-world constraints that make mission planning HARD? This agent processes them simultaneously.

Risk Management Agent – Real-time ORM compliance and waiver identification. This isn’t after-the-fact risk assessment. This is built-in, proactive, “hey we’re about to do something that requires chain-of-command approval” flagging.

Coordinator Agent – Conflict mediation and chain-of-command escalation. This agent exists because sometimes the other agents are going to disagree about priorities or feasibility. Someone needs to mediate. Someone needs to know when to escalate up.

Observer Agent – Human-in-the-loop interface monitoring all agent communication.

And THIS is the new one. The one that didn’t exist in traditional operations but is absolutely essential in an AI agent environment.

Because here’s what I discovered while building these workflows: Humans cannot cognitively process agent-to-agent communication at the speed these systems operate. Especially not in high-ops tempo when hundreds of missions are being planned simultaneously.

We need an agent whose entire job is to translate what’s happening for human decision-makers. To know when to pause everything and say “Hey, human, we need your judgment here.”

The Observer Agent is the bridge between AI speed and human decision-making authority.
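
If you’re curious what that architecture looks like in code, here’s a minimal sketch in Python. To be clear about assumptions: the agent names come from the roster above, but the message shape, the queue-based bus, and every function here are illustrative stand-ins, not my actual capstone build. The point is the shape of the thing: specialized agents running in parallel on a shared channel, with the Observer as the only one that talks to the human.

```python
# Illustrative sketch only: names and message shape are stand-ins,
# not the actual capstone implementation.
import asyncio
from dataclasses import dataclass

@dataclass
class AgentMessage:
    sender: str                # which agent produced this
    content: str               # reasoning or recommendation text
    confidence: float          # 0.0-1.0; drives the dashboard colors
    needs_human: bool = False  # True when judgment must escalate

class Agent:
    def __init__(self, name: str, role: str):
        self.name, self.role = name, role

    async def work(self, scenario: str, bus: asyncio.Queue) -> None:
        # Stand-in for an LLM call scoped to this agent's specialty.
        await bus.put(AgentMessage(self.name, f"{self.role} for {scenario}", 0.9))

async def observer(bus: asyncio.Queue) -> None:
    # Observer Agent: translates agent-to-agent traffic for the human
    # and surfaces anything that demands human judgment.
    while True:
        msg = await bus.get()
        prefix = "[NEEDS HUMAN]" if msg.needs_human else f"[{msg.confidence:.0%}]"
        print(f"{prefix} {msg.sender}: {msg.content}")
        bus.task_done()

async def run(scenario: str) -> None:
    bus: asyncio.Queue = asyncio.Queue()
    team = [
        Agent("Requirements", "prioritizing 1A1 vs training missions"),
        Agent("Barrel", "allocating C-5 / C-17 / C-130 assets"),
        Agent("Planner", "checking weather, hazmat, crew hours, airfields"),
        Agent("RiskMgmt", "running ORM compliance and waiver flags"),
        Agent("Coordinator", "mediating conflicts, escalating up the chain"),
    ]
    watcher = asyncio.create_task(observer(bus))
    # The whole point: agents work in parallel, not in sequence.
    await asyncio.gather(*(a.work(scenario, bus) for a in team))
    await bus.join()
    watcher.cancel()

asyncio.run(run("Pacific earthquake response"))
```

Notice the design choice: no agent prints to the human directly. Everything routes through the Observer, which is exactly the translation layer I’m describing.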

What Actually Happened (The Demo That Made Me Say “Holy…”)

Scenario: Major Pacific earthquake. 15+ missions needed in 72 hours. Damaged airfield infrastructure. Multiple competing priorities. Exactly the kind of high-ops-tempo environment where traditional systems break down.

I prompted the system and just… watched.

The agents started working immediately. In parallel. All at once.

And suddenly I was making connections everywhere (because that’s what my brain does):

This is what I’d been writing about for months. Shared responsibility. Transparent reasoning. Human oversight at appropriate decision points. Appropriate trust calibration.

But now I was WATCHING it happen in real-time.

The agents generated three courses of action—conservative, realistic, high-risk—with completely transparent reasoning for each. Not “here’s the answer.” But “here are three strategic OPTIONS with different risk profiles, and here’s why we recommend each one.”

Risk Management flagged airfield damage IMMEDIATELY and recommended C-17 reconnaissance first. Not as an afterthought. As a FIRST PRIORITY before committing heavy assets.

The Planner Agent identified a flow-to-flow optimization opportunity I hadn’t even thought to look for—using a KC-135 tanker as a communications relay during the reconnaissance mission. This is the kind of creative problem-solving that comes from parallel processing across multiple constraints simultaneously.

And then—when the risk threshold hit a certain level—the Coordinator Agent escalated to the Observer Agent, which triggered a forced decision module that PAUSED all the other agents and presented the human decision-maker with clear options.

The transformation: Complex multi-mission planning that traditionally takes DAYS of manual coordination—completed in MINUTES with transparent reasoning and strategic options presented at appropriate human oversight points.

Minutes. Not days. MINUTES.

But here’s what made me actually emotional: It wasn’t just faster. It was BETTER strategic thinking. The kind of holistic, parallel-processing, pattern-recognizing analysis that human brains are actually pretty bad at when we’re drowning in data.
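
Since I keep leaning on that forced-decision module, here’s roughly its logic, sketched in Python. Heavy hedge: the 0.7 escalation threshold and the COA fields are values I’m assuming for illustration; the real trigger conditions in the demo are richer than a single number.

```python
# Hedged sketch of the forced-decision trigger. Threshold and COA
# fields are assumed for illustration, not the demo's actual values.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str       # "conservative" / "realistic" / "high-risk"
    risk: float     # 0.0 (safe) to 1.0 (maximum risk)
    rationale: str  # the transparent reasoning shown to the human

ESCALATION_THRESHOLD = 0.7  # assumed cutoff for this sketch

def maybe_force_decision(coas: list[CourseOfAction]) -> CourseOfAction | None:
    """Pause for a human pick if any COA crosses the risk threshold;
    return None when the agents may proceed on their own."""
    if max(c.risk for c in coas) < ESCALATION_THRESHOLD:
        return None  # below threshold: no pause, agents continue
    print("ALL AGENTS PAUSED: human decision required")
    for i, c in enumerate(coas, 1):
        print(f"  {i}. {c.name} (risk {c.risk:.0%}): {c.rationale}")
    choice = int(input("Select a course of action: ")) - 1
    return coas[choice]
```

In the real dashboard this is a modal, not an input() prompt, but the control flow is the same: everything stops, the options and reasoning go in front of the human, and nothing resumes without a choice.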

But Then There’s This Whole Other Problem Nobody’s Talking About

Okay so agents working in parallel is amazing. Multi-agent systems can handle complexity that sequential processing never could. This architecture could genuinely transform mission planning.

But.

(There’s always a “but,” right?)

How do humans actually INTERACT with this in real operational environments?

Because here’s what happened during my demo: I watched agent-to-agent communication scroll by in chat format. Reasoning. Scratch pads. Confidence levels. Recommendations. Conflicts. Resolutions. SO MUCH INFORMATION flowing so fast.

And I’m sitting there thinking: If I can barely keep up with SIX agents planning ONE scenario… what happens in a real operational environment with HUNDREDS of missions being planned simultaneously?

Traditional chat interfaces? Completely inadequate. You’d be drowning in scroll while trying to make time-sensitive decisions about actual operations.

This is the UX problem I didn’t even know existed until I built the system.

And suddenly I was making connections (again, because that’s what my brain does):

Remember Vincent Buil’s work at Philips Healthcare on high-stakes medical AI? The four perspectives—Purpose, Pathways, Policy, Pixels? He talked about designing interfaces that work for EXPERT users in high-pressure situations where wrong decisions have real consequences.

This is exactly that problem. But in an agentic environment where the interface needs to show you macro-level awareness across multiple parallel processes while ALSO giving you access to micro-level reasoning when you need it.

How do you design for that?

The Dashboard That Changes Everything (Or At Least Starts To)

What I prototyped instead of traditional chat:

Six agents visible simultaneously with real-time progress indicators. You can see ALL of them working at once. Not scrolling through logs—SEEING the whole system state.

Confidence level visualization (green for 85%+ confidence, yellow for elevated risk, red for stop). At a glance. No reading. Just visual indicators that tell you “this agent is confident” or “this agent has concerns.”

Forced decision modules that pause ALL agents for human oversight. When the system hits a decision threshold that requires human judgment, everything stops. Modal pops. Human decides. Then agents continue.

Clickable reasoning panels showing agent “scratch pad” thinking. You don’t need to read it unless you WANT to dig deeper. But it’s there. Transparent. Accessible. Auditable.

Macro-level mission awareness while preserving micro-level detail access. You see the forest. But you can zoom into any tree whenever you need to.

The key insight (and this came from my neurodivergent brain’s tendency to operate at multiple scales simultaneously): You need BOTH views. You can’t just show high-level summaries because sometimes you need the details. But you can’t show everything all the time because that’s cognitive overload.

The interface needs to adapt to what the human needs in the moment.
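
For what it’s worth, the indicator logic itself is almost embarrassingly simple. A sketch: the 85% green cutoff is the one from the list above, while the 60% yellow/red boundary is a value I’m assuming here, not something I’ve validated.

```python
# Confidence-to-color mapping for the dashboard indicators.
# Green cutoff (85%) is from the design above; the 60% yellow/red
# boundary is an assumed value for this sketch.
def indicator(confidence: float) -> str:
    if confidence >= 0.85:
        return "green"   # agent is confident; no attention needed
    if confidence >= 0.60:
        return "yellow"  # elevated risk; worth a glance
    return "red"         # stop: likely to trigger a forced decision

assert indicator(0.92) == "green"
assert indicator(0.70) == "yellow"
assert indicator(0.40) == "red"
```

The hard part was never the thresholds. It was deciding that color, not text, carries the first layer of meaning.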

Why This Actually Matters (Beyond Just Being Cool)

Look, I could talk about the technical architecture all day. Six agents, parallel processing, Chain-of-Thought reasoning, systematic evaluation frameworks—all the stuff I learned building this.

But here’s what this REALLY demonstrates:

Mission Success Transformation – From 70-80% failure rate to adaptive planning that handles operational reality instead of breaking when reality doesn’t match the plan

Cognitive Load Shift – Operators become strategic decision-makers instead of human computers processing data manually

Proactive Risk Management – Conflicts caught BEFORE execution, not discovered on the floor after assets are already committed

Institutional Learning – Democratized expertise distributed across agent teams instead of single points of failure walking out the door when someone retires

Resource Optimization – Automated flow-to-flow identification and creative problem-solving (like that KC-135 communications relay) that emerges from parallel constraint analysis

This is the core principle that was missing from CAMPS, missing from every legacy system I worked on, missing from traditional automation approaches:

AI handles complexity. Humans handle judgment.

Not AI replacing human expertise. AI creating the SPACE for humans to use their expertise strategically instead of burning it on manual data processing.

The Uncomfortable Truth I’m Sitting With

This represents everything I learned from the CAMPS failure. Four and a half years watching a massive government project collapse because we tried to build everything for everyone all at once without understanding the underlying patterns.

What if we’d approached it differently? What if we’d identified the core workflow pattern (requirement → allocation → planning → execution) and built specialized agents for each phase? What if we’d started with ONE mission type, proven the concept, then scaled?

The Bible Content Creator workflow I built for my mom’s blog taught me this: Specialized components. Clear handoffs. Systematic evaluation. Iterative refinement. Evolution, not revolution.

Same principles for Bible education and mission planning. Just… different consequences when things go wrong.

And here’s what I keep thinking about: I didn’t have the skills to solve this problem until I learned to build AI agent teams. The technical capability didn’t exist. The architecture patterns weren’t clear. The design frameworks for human-AI collaboration in high-stakes environments were still emerging.

But now? Now I have those skills. Now the technology is ready. Now we understand how to build systems where AI and humans work TOGETHER instead of humans working FOR systems.

The question isn’t whether this is possible. The question is: Are we ready to fundamentally rethink how we design mission-critical systems?

What Happens Next (And Why I’m Both Excited and Terrified)

I’m continuing to push this work forward. Both the agent architecture and the dashboard visualization challenges. Because honestly? This problem isn’t solved yet. Not even close.

The agentic model is clearly the future for complex government workflows. But the UX patterns for human-agent collaboration in high-stakes environments? We’re still figuring those out. Still prototyping. Still testing.

And I’m here for that challenge. Because this is exactly the intersection where my CAMPS experience, my AI agent skills, and my neurodivergent brain’s tendency to see patterns across complex systems all come together.

My “scattered” focus that sees multiple problems simultaneously? That’s not a bug—that’s exactly what designing multi-agent systems requires.

My discomfort with surface-level solutions and tendency to question everything? That’s the active skepticism needed to catch potential failures before they become operational disasters.

My ability to context-switch between macro strategy and micro details? That’s what this interface design requires—understanding both the 100,000-foot mission landscape AND the individual agent reasoning chains.

Seven weeks ago I thought I was learning to build AI assistants.

What I actually learned was how to design collaboration between human intelligence and artificial intelligence in ways that preserve what makes us human while amplifying our capabilities.

And Operation Helping Hand is what happens when you apply those skills to a problem that’s been bothering you for years.

The Bigger Picture I Can’t Stop Thinking About

Here’s what keeps me up at night (in a good way, mostly):

If this architecture works for mission planning… where else does it work? Intelligence analysis? Emergency response coordination? Resource allocation across any complex government operation?

The same pattern: Specialized agents. Parallel processing. Transparent reasoning. Human oversight at appropriate decision points. Systematic evaluation. Iterative refinement.

Healthcare: Agents that analyze patient data → generate diagnostic recommendations → flag risks → present options to doctors with clear reasoning

Intelligence: Agents that process raw data → identify patterns → generate hypotheses → present findings to analysts with confidence levels

Emergency Response: Agents that assess damage → allocate resources → coordinate logistics → identify conflicts → escalate to human decision-makers

Same architecture. Different domains. Universal principles.
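
Strip the domains away and what’s left is a staged pipeline that always terminates in a human checkpoint. A toy sketch, with every stage name a generic placeholder:

```python
# Domain-agnostic sketch: specialized stages ending in a human
# checkpoint. All names are generic placeholders.
from typing import Callable

Stage = Callable[[dict], dict]

def analyze(state: dict) -> dict:
    state["findings"] = f"patterns found in {state['raw_data']}"
    return state

def recommend(state: dict) -> dict:
    state["options"] = ["conservative", "realistic", "high-risk"]
    return state

def flag_risks(state: dict) -> dict:
    state["risks"] = "flagged before execution, not after"
    return state

def present_to_human(state: dict) -> dict:
    # The non-negotiable final stage: options plus reasoning,
    # human makes the call.
    print("Options:", state["options"], "| Risks:", state["risks"])
    return state

PIPELINE: list[Stage] = [analyze, recommend, flag_risks, present_to_human]

state = {"raw_data": "damage reports"}
for stage in PIPELINE:
    state = stage(state)
```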

And suddenly the $50 billion AI talent gap makes sense. It’s not about people who can use AI tools. It’s about people who can design human-AI collaboration systems that actually work in complex, high-stakes environments where trust, transparency, and human agency are non-negotiable.

That’s the 1% skill. And after building this, I’m in that 1%.

What I’m Actually Saying Here

Operation Helping Hand isn’t just a demo. It’s proof of concept that we can fundamentally redesign how mission-critical systems work.

Not faster legacy systems. Not better automation. Fundamentally different architecture built around how AI and humans SHOULD work together.

It’s not just faster mission planning. It’s fundamentally better decision-making under pressure. It’s operators using their expertise strategically instead of burning it on manual data processing. It’s risk management built IN instead of bolted ON. It’s institutional knowledge democratized instead of walking out the door.

And it’s what I learned from the CAMPS failure finally—FINALLY—making sense.

Because trust isn’t built through perfect AI. Trust is built through transparent AI that keeps humans in control while handling the complexity they shouldn’t have to manually process.

Want to see the full demo walkthrough? [Link to video]

Questions? Disagree with my approach? See something I’m missing? Hit reply—I’m genuinely interested in where people see holes in this thinking.

— Katy

P.S. – This is a simulated demonstration using my personal capstone project setup. It’s not running on classified systems (obviously). But it SHOWS what becomes possible when you stop asking “how do we automate what humans do?” and start asking “how do we design collaboration between human and artificial intelligence?”

That’s the question that matters. And I think I’m finally starting to figure out the answer.


