What Healthcare Taught Me about High-Stakes AI Design
A Philips researcher’s presentation on AI in healthcare gave me the framework I’ve been searching for in defense applications. Turns out, when lives are at stake, the design principles converge.
I’ve spent months wrestling with a fundamental question: How do you design AI systems for high-stakes environments where wrong decisions have serious consequences? The frameworks I was finding felt inadequate—most AI-UX guidance assumes commercial contexts where the worst outcome is user frustration or a bad recommendation.
Then I attended an online presentation by Vincent Buil, a Senior UX Researcher at Philips Healthcare, and something clicked. He was describing their “Pattern Library for Responsible AI” approach in medical contexts—patient monitors, CT scanners, ultrasound systems. As he walked through their framework, I kept thinking: This is exactly what we need for defense applications.
The parallel makes sense when you think about it. Healthcare and defense share critical characteristics that most commercial AI applications don’t:
Expert users with deep domain knowledge
Life-or-death consequences for wrong decisions
Strict regulatory compliance requirements
The need for AI to enhance rather than replace human judgment
Healthcare has been forced to solve the high-stakes AI design problem. And their solutions translate directly to government and defense contexts.
The Shared Responsibility Framework
The first breakthrough insight from Buil’s presentation addressed the question that keeps everyone in high-stakes AI design awake at night: Who is responsible when AI gets it wrong?
His answer was refreshingly honest: “In high-risk environments, it’s a shared responsibility.”
Philips’ framework is clear:
We at Philips should design and build AI systems with full transparency about their capabilities and limitations.
Clinical specialists’ responsibility is to use the AI system within those limitations as well as possible, and we should enable them to do so through proper governance mechanisms in the UX.
This resonated deeply with my defense work. We can’t eliminate responsibility by adding AI—we have to design systems that make shared responsibility work in practice. The question isn’t whether AI or humans are “in charge,” it’s how we create interfaces that enable appropriate judgment from both.
The Four Perspectives Framework: Purpose, Policy, Pathway, Pixels
Philips organizes its AI governance through four perspectives that I immediately recognized as applicable to defense systems:
Purpose: Well-Defined Use Cases
Philips starts by identifying specific use cases that help clinicians get their work done better while supporting their wellbeing and benefiting patients. They run multi-stakeholder workshops, using specific tools and templates, to identify good use cases while educating teams on AI best practices.
The key insight: They ask both “What AI can/can’t do” AND “What AI should/shouldn’t do.”
Defense translation: We need the same rigor in identifying where AI actually helps mission success versus where it just sounds innovative. The “should/shouldn’t” question is especially critical in defense—just because AI can make certain decisions doesn’t mean it should.
Policy: Making AI Safe to Use
Their policy framework ensures regulatory compliance through governance documentation and tools. In healthcare, this means meeting the EU AI Act’s requirements for high-risk systems.
Defense translation: Government contractors already understand compliance requirements (FedRAMP, FISMA, etc.), but Philips showed me how to weave AI-specific governance into existing frameworks rather than treating it as a separate effort.
Pathway: Carefully Designed Workflows
Philips designs the envisioned workflow and care pathway to ensure the end solution truly has the desired impact. They don’t just add AI features—they redesign entire workflows around human-AI collaboration.
Defense translation: This is exactly what I’ve been trying to articulate in my AID methodology. You can’t just insert AI into existing military workflows—you have to rethink how the work gets done when AI and humans collaborate effectively.
Pixels: Prototype-Based Validation
They experiment with prototypes based on the use cases to test both governance validation and UX design principles in practice.
Defense translation: The “test governance through UX” approach is brilliant. Instead of separate compliance checks, design prototypes that demonstrate governance principles in actual use.
The Two Critical Requirements: Human Oversight and Transparency
Buil emphasized that two elements are most critical for high-risk AI systems: Human Oversight and Transparency.
The EU AI Act requirements he outlined map directly to what defense systems need:
Transparency Requirements:
Clear labeling of AI functions and AI results
Clear instructions for use: intended purpose, accuracy, limitations
Guidance on avoiding risky usage and misuse scenarios
Information about capabilities, limitations, and target populations
Support for interpretation and use of AI outputs
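To make these requirements concrete for my own prototyping, here is a minimal sketch of how transparency information could travel with every AI output as structured metadata. This is my own illustration, not Philips’ actual pattern; the type and field names are hypothetical.

```typescript
// Hypothetical sketch: transparency metadata attached to every AI-generated output.
// Names are illustrative, not taken from Philips' Pattern Library.

interface AiTransparencyLabel {
  isAiGenerated: true;            // explicit labeling of AI functions and AI results
  aiFunction: string;             // which AI function produced this output
  intendedPurpose: string;        // what the function is validated and approved for
  reportedAccuracy: string;       // e.g. performance figures from the instructions for use
  knownLimitations: string[];     // documented limitations and misuse scenarios to avoid
  targetPopulation: string;       // the population the model was validated on
  interpretationGuidance: string; // how users should read and weigh this output
}

interface AiOutput<T> {
  value: T;                       // the actual result shown to the expert user
  label: AiTransparencyLabel;     // transparency metadata rendered alongside it
}

// Example: a worklist-triage suggestion carries its label wherever it is displayed.
const suggestion: AiOutput<string> = {
  value: "Possible pneumothorax, left lung field",
  label: {
    isAiGenerated: true,
    aiFunction: "chest-xray-triage (hypothetical)",
    intendedPurpose: "Worklist prioritization, not diagnosis",
    reportedAccuracy: "Illustrative placeholder only",
    knownLimitations: ["Not validated for pediatric patients"],
    targetPopulation: "Adult patients, portable chest X-ray",
    interpretationGuidance: "Treat as a prompt for expert review, never as a finding",
  },
};
```

The point of the sketch is that the label becomes a property of the output itself, so no screen can display an AI result without its transparency information.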
Human Oversight Requirements:
Tools that prevent or reduce risk by ensuring users can:
Detect AI errors
Avoid over-relying on AI
Understand and interpret AI results
Disregard, override, or reverse AI results
Stop the AI system in a safe state
This framework solves a problem I’ve been struggling with: How do you build appropriate skepticism into interfaces without creating analysis paralysis? Philips answered it: give users specific tools for oversight rather than just warnings about reliability.
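Here’s a small sketch of what “specific tools for oversight” could look like at the interface contract level. The names are my own hypothetical shorthand, not Philips’ implementation: every AI suggestion ships with explicit actions for the oversight behaviors listed above.

```typescript
// Hypothetical sketch of an oversight contract: every AI suggestion surfaced in the UI
// must expose explicit user actions, not just a confidence warning.
// Names are illustrative; this is not Philips' actual API.

type OversightAction =
  | { kind: "accept" }                                            // expert agrees with the result
  | { kind: "override"; correctedValue: string; reason: string }  // expert replaces the result
  | { kind: "dismiss"; reason: string }                           // expert disregards the result
  | { kind: "flagError"; description: string }                    // expert reports a suspected AI error
  | { kind: "stopSystem" };                                       // halt the AI function in a safe state

interface AiSuggestionControls {
  suggestionId: string;
  onAction: (action: OversightAction) => void; // every action is recorded for governance review
}

// Example handler: error flags and stop requests become governance events, not just UI state.
function handleOversight(controls: AiSuggestionControls, action: OversightAction): void {
  controls.onAction(action);
  if (action.kind === "flagError" || action.kind === "stopSystem") {
    console.warn(`Governance event on ${controls.suggestionId}:`, action);
  }
}
```

Making these actions part of the contract means overriding, dismissing, and stopping the system are designed, auditable capabilities rather than afterthoughts.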
The Pattern Library Approach
Here’s where Philips’ work becomes directly actionable. They created a Pattern Library—reusable design patterns that:
Structure usable components for designers across all legacy systems
Create recognizable patterns that identify what is AI versus human-generated
Apply consistently across all applications
Ensure compliance with AI requirements
Their lessons learned:
Designers want practical help in UX design for AI (not theoretical frameworks)
You’re designing for uncertainty—make sure stakeholders understand this is probabilistic, not deterministic
Create prototypes in live environments to show probabilistic results, not sanitized demos
AI governance touches all levels: purpose, pathways, pixels, and policy
The healthcare insight I’m applying to defense: Don’t mix AI “reasoning” with AI “results” in the interface. Show both, but make it clear which is which.
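In my own prototypes I’m sketching that separation roughly like this. It’s a hypothetical pattern of mine, not one of Philips’ library components: the result and the reasoning are distinct, individually labeled slots that the interface can style differently but never merge.

```typescript
// Hypothetical pattern sketch: keep the AI "result" and the AI "reasoning" as separate,
// individually labeled slots so the interface can never blur them together.
// Not a Philips component; names are my own.

type Provenance = "ai-generated" | "human-entered" | "human-edited";

interface LabeledContent {
  text: string;
  provenance: Provenance;    // drives the recognizable AI-versus-human visual treatment
}

interface AiFindingPattern {
  result: LabeledContent;    // what the AI concluded, shown prominently
  reasoning: LabeledContent; // why it concluded that, shown separately and clearly marked
}

// Usage: the rendering layer styles the two slots differently and never interleaves them.
const finding: AiFindingPattern = {
  result: { text: "Suggested measurement: 4.2 cm", provenance: "ai-generated" },
  reasoning: {
    text: "Derived from edge detection across frames 12-18 (illustrative)",
    provenance: "ai-generated",
  },
};
```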
The Questions Healthcare Is Still Wrestling With
What I found most valuable was Buil’s honesty about unresolved challenges. Philips is still asking:
“We are all using AI to generate something and now have to review it. This is a high cognitive task! Is that taking more time?”
“Plus, if AI takes away other tasks, there is a potential loss of skills that are needed (for diagnosis). Is that worth it?”
“How does this affect job satisfaction? Job profile?”
“We are thinking we are helping to free up clinicians’ time, but then they get more patients, so maybe it’s not efficient?”
These questions are exactly what I’m grappling with in defense contexts. When AI handles routine analysis, do intelligence analysts lose the skills needed for critical assessment? When AI speeds up mission planning, does that just mean more missions rather than better decisions?
Healthcare doesn’t have all the answers yet, but they’re asking the right questions.
The Holistic Experience Imperative
Buil ended with something that struck me as the core principle for all high-stakes AI design:
“In the end, we must design a holistic experience with all the stakeholders in mind:
Safety for the patient
Job satisfaction for the doctor
Efficiency in healthcare
Society that we want to live in without losing human touch (which is especially key in healthcare)”
Defense translation:
Safety for warfighters and civilians
Job satisfaction and skill preservation for military professionals
Efficiency in mission execution
Society we want to defend—including preserving human judgment in critical decisions
What This Means for My Work
Philips’ healthcare framework validated something I’ve been developing in my AID methodology: high-stakes AI design requires fundamentally different approaches than consumer applications.
The key insights I’m taking forward:
Shared Responsibility Must Be Designed: You can’t just declare it—you have to build interfaces that make it work in practice.
Governance Through UX: Compliance isn’t separate from design; it’s embedded in how users interact with AI.
Transparency Enables Oversight: When users understand AI reasoning and limitations, they can exercise appropriate judgment.
Uncertainty Is The Design Challenge: Stop pretending AI is deterministic; design for probabilistic thinking.
Skills Preservation Matters: If AI takes away practice opportunities, expertise atrophies. Design for learning, not just efficiency.
The Validation I Needed
Finding Philips’ work was validation that I’m not alone in thinking about these challenges. Healthcare has been forced to solve high-stakes AI design because patient safety demands it. Defense faces the same imperative for different but equally critical reasons.
But here’s the critical insight that hit me during Buil’s presentation: The Pattern Library isn’t just a design system—it’s the practical implementation mechanism for everything else.
I’ve always associated design libraries with branding and marketing—ensuring consistent visual identity across products. That’s a very commercial, product-centric viewpoint. But in enterprise and highly complex work-related applications, the Pattern Library serves a fundamentally different purpose: it’s the structure that enforces industry regulations, embeds AI governance, and provides practical guidance all at once.
This is the “how” that I’ve been missing. You can have all the right principles about transparency, human oversight, and shared responsibility, but without a structured pattern library to implement them consistently across systems, they remain theoretical. The Pattern Library makes abstract governance concrete and actionable for designers.
I’m not copying Philips’ specific framework—I’m pulling out the principles I need and reframing them for defense applications. Their Four Perspectives give me structure, but the Pattern Library concept is what makes it practically applicable. It’s how you take “we need transparency” and turn it into “here’s the exact design pattern designers use to show AI reasoning across all our applications.”
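As a thought experiment, here is my own sketch of what that mapping might look like (not Philips’ library): a manifest that ties each governance requirement to the named pattern designers are required to use, so “we need transparency” resolves to a specific, reusable component.

```typescript
// Hypothetical sketch: a pattern-library manifest that maps governance requirements
// to the concrete patterns that satisfy them. All names are illustrative.

interface PatternEntry {
  patternId: string;   // the reusable component designers pull from the library
  satisfies: string[]; // governance requirements this pattern implements
  usageRule: string;   // when designers are required to use it
}

const responsibleAiPatterns: PatternEntry[] = [
  {
    patternId: "ai-result-label",
    satisfies: ["transparency.labeling"],
    usageRule: "Required on every AI-generated value, in every application",
  },
  {
    patternId: "oversight-actions",
    satisfies: ["oversight.override", "oversight.dismiss", "oversight.stop"],
    usageRule: "Required wherever an AI suggestion can influence a decision",
  },
  {
    patternId: "reasoning-panel",
    satisfies: ["transparency.interpretation-support"],
    usageRule: "Shown alongside, never merged with, the AI result",
  },
];
```

A manifest like this is what makes governance checkable: either the required pattern is present in a screen or it isn’t.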
This might seem obvious to designers who specialize in pattern libraries and design systems, but it’s a revelation for someone like me who’s been focused on user workflows and interaction design. The Pattern Library isn’t peripheral to AI governance—it’s central. It’s the mechanism that ensures every interface, every interaction, every AI feature follows the same principles for transparency, oversight, and shared responsibility.
That’s what I’m taking into my defense work: not a specific set of healthcare patterns, but the understanding that systematic design patterns are how you operationalize responsible AI principles in complex, regulated environments.
What’s Next
I’m now adapting Philips’ framework for defense applications, testing whether their healthcare patterns translate to military contexts. The hypothesis: if both domains share the high-stakes characteristics (expert users, serious consequences, regulatory requirements, human judgment preservation), then design patterns should be transferable with domain-specific adaptations.
Over the next few weeks, I’ll be prototyping AI systems using these principles. The goal isn’t just to validate that this approach works—it’s to develop a practical methodology that other designers can use when building AI for high-stakes professional environments.
Because if healthcare has shown us anything, it’s that you can design responsible AI for critical applications. You just have to acknowledge the stakes, build for shared responsibility, and design interfaces that preserve rather than replace human expertise.
The question isn’t whether AI belongs in high-stakes environments. The question is: Are we designing it with the rigor and responsibility those stakes demand?
This exploration builds on Vincent Buil’s presentation “Pattern Library for Responsible AI” at the STRAT 2025 conference. His work at Philips Healthcare provides crucial insights for anyone designing AI systems where consequences matter. The parallels between healthcare and defense contexts suggest that high-stakes AI design principles may be more universal than domain-specific.