Case Study #3: COA Generator - Automating Complex Decision Logic

From Manual Decision Points to Intelligent Automation


The Technical Challenge That Changed Everything

In fall 2022, after we delivered Air Refueling workflows that had taken two years to develop, the government made a seemingly simple request: eliminate a manual decision step in the AR Short Notice workflow, which users found "cumbersome and inefficient."

What seemed straightforward became our most complex technical challenge—and the breakthrough that shaped my understanding of automated reasoning in government systems.


The Hidden Complexity

The Problem: Military operators were manually calculating how many tanker assets could fulfill specific refueling requirements, a decision point requiring (see the sketch after this list):

  • Asset availability analysis across global positioning

  • Mission timing and routing constraints

  • Fuel capacity calculations and consumption rates

  • Airspace coordination and diplomatic clearances

  • Risk assessment for multiple operational scenarios
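
To make that constraint set concrete, here is a minimal sketch of how the decision inputs might be modeled. Everything in it (the Tanker and RefuelingRequest types, their fields, the is_feasible check) is an illustrative assumption for this write-up, not the fielded system's data model:

    from dataclasses import dataclass

    @dataclass
    class Tanker:
        callsign: str
        available: bool           # asset availability
        fuel_offload_lbs: int     # fuel it can transfer to receivers
        transit_hours: float      # time to reach the refueling track
        route_regions: set        # airspace regions its routing would cross

    @dataclass
    class RefuelingRequest:
        fuel_needed_lbs: int      # receiver fuel requirement
        window_hours: float       # mission timing constraint
        cleared_regions: set      # airspace/diplomatic clearances in hand

    def is_feasible(tanker, req):
        """A candidate passes only if every operational constraint holds."""
        return (
            tanker.available
            and tanker.transit_hours <= req.window_hours        # timing/routing
            and tanker.fuel_offload_lbs >= req.fuel_needed_lbs  # fuel capacity
            and tanker.route_regions <= req.cleared_regions     # clearances
        )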

The Revelation: This wasn't just removing a step—it was replacing human judgment with computational logic in a mission-critical environment.


The Technical Breakthrough

Instead of eliminating the decision, we automated the reasoning process:

Automated Decision Engine (sketched in code after this list):

  • Multi-variable analysis of tanker availability, positioning, and capabilities

  • Real-time constraint checking against operational requirements

  • Course of Action generation with multiple scenarios and trade-offs

  • Risk calculation integrated into recommendation logic

  • Transparent reasoning showing how decisions were reached
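
Building on the sketch above, a rule-based engine in this spirit enumerates asset combinations, rejects any that break a constraint, scores the survivors with a risk heuristic, and attaches a human-readable rationale to each option. The generate_coas name, the two-asset cap, and the margin-based risk formula are all assumptions for illustration:

    from itertools import combinations

    def generate_coas(tankers, req, max_assets=2):
        """Return candidate Courses of Action, each with a reasoning trace."""
        coas = []
        ready = [t for t in tankers if t.available]
        for n in range(1, max_assets + 1):
            for combo in combinations(ready, n):
                total_offload = sum(t.fuel_offload_lbs for t in combo)
                worst_transit = max(t.transit_hours for t in combo)
                if total_offload < req.fuel_needed_lbs:
                    continue  # rejected: not enough transferable fuel
                if worst_transit > req.window_hours:
                    continue  # rejected: cannot make the timing window
                # Illustrative risk heuristic: thinner margins mean higher risk.
                fuel_margin = total_offload / req.fuel_needed_lbs
                time_margin = req.window_hours / worst_transit
                coas.append({
                    "assets": [t.callsign for t in combo],
                    "risk": round(1.0 / (fuel_margin * time_margin), 3),
                    "rationale": (  # transparency: record why, not just what
                        f"{total_offload} lbs offload vs {req.fuel_needed_lbs} needed; "
                        f"worst transit {worst_transit}h inside {req.window_hours}h window"
                    ),
                })
        return sorted(coas, key=lambda c: c["risk"])  # best trade-off first

Sorting by computed risk puts the strongest trade-off first, and the rationale field is the transparency piece: it lets operators audit how each option was reached rather than taking a bare recommendation on faith.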

The Result:

Operators went from manual calculation and guesswork to strategic validation of system-generated options—the same human role transformation I later applied to AI agent design.


What This Taught Me About Automated Reasoning

This project was my first experience with computational decision-making in defense contexts. I learned:

  1. Complex Logic Can Be Automated - Multi-step military calculations could be systematized

  2. Transparency Builds Trust - Operators needed to see the reasoning, not just results

  3. Human Validation Is Essential - Automation should generate options; humans make the final decisions

  4. Context Matters - Real-world constraints must be built into the logic

  5. Failure Modes Are Critical - The system must handle edge cases gracefully (see the sketch after this list)
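
Lessons 2, 3, and 5 can all be enforced in a thin operator-facing wrapper: every answer carries an explanation, "no feasible option" is a first-class result rather than a crash, and the engine only ever proposes options for a human to approve. Again a hedged sketch, reusing the hypothetical generate_coas from above:

    def recommend(tankers, req):
        """Operator-facing wrapper: explainable answers, graceful edge cases."""
        try:
            coas = generate_coas(tankers, req)
        except Exception as exc:
            # Fail visibly, never silently: surface the fault for human review.
            return {"status": "error", "detail": f"engine fault: {exc}", "coas": []}
        if not coas:
            # "No feasible option" is a valid, explainable answer, not a crash.
            return {"status": "no_feasible_coa",
                    "detail": "no asset mix satisfies fuel and timing constraints",
                    "coas": []}
        # The engine proposes; the operator decides. Options only, no auto-execute.
        return {"status": "ok", "coas": coas}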


The Bridge to AI Agents

The COA Generator was pre-AI automated reasoning—using rule-based logic to handle complex calculations. But it revealed the pattern that became central to my AI agent methodology:

  1. Humans struggled with data processing (tanker calculations)

  2. Systems could handle complex logic (automated COA generation)

  3. Human expertise remained essential (strategic validation and decision authority)

  4. Transparency enabled trust (showing reasoning built confidence)

This experience taught me that the future wasn't replacing human judgment—it was automating the complexity so humans could focus on strategic decisions.


Technical Evolution: Rules to AI

  • COA Generator (2022): Rule-based automation for specific calculations

  • Operation Helping Hand (2024): AI agents for adaptive reasoning across multiple scenarios

The progression: From automating known calculations to enabling adaptive problem-solving while maintaining the same human-in-the-loop validation approach.


Why This Matters for Defense Innovation

This case study demonstrates technical evolution in government automation:

  • Proven ability to automate complex military decision logic

  • User trust development through transparent automated reasoning

  • Mission-critical reliability in high-stakes operational environments

  • Foundation thinking that led to AI agent breakthroughs

The COA Generator proved that automated reasoning could work in defense contexts—setting the stage for AI agents to handle even more complex scenarios.


“This project taught me the difference between eliminating work and amplifying judgment—a distinction that became the foundation of my AI agent methodology.”
— Katy