I Thought I Was Learning AI Tools. Instead, I Discovered My Superpower. Part 1: When Personalization Became Pattern Recognition


Seven weeks ago, I made a decision that felt simultaneously practical and slightly terrifying. I enrolled in Sara Davison and Tyler Fisk’s “AI Agentic Fundamentals” course on Maven.

The practical part? I needed this. I’m a Senior UX Designer at Vega Federal Solutions working on high-stakes government and defense applications—mission planning systems, intelligence workflows, the kind of stuff where getting it wrong has real consequences. And I’ve been writing about AI ethics, human-AI collaboration, and designing for high-stakes environments for months now. But here’s the thing: I was still operating mostly in theory. I could talk about agentic AI, I could analyze what makes good AI design, I could see the patterns emerging across the industry… but I hadn’t actually BUILT anything myself.

The terrifying part? This was a 5-week intensive program that began October 5th. Live sessions twice a week. Homework projects. A community of people who probably already knew way more than me (which they didn’t, btw, so if you’re interested, PLEASE TAKE IT!). And I was jumping in during the absolute chaos of pre-holiday work crunch, family obligations, and that familiar ADHD overwhelm of having 46 browser tabs open (yes, I counted) while simultaneously trying to learn “46 AI apps I cannot live without” according to various YouTube influencers.

Sound familiar?

But something about this course felt different. Sara and Tyler weren’t just teaching tools—they were teaching a methodology. A way of thinking about AI that aligned with everything I’d been discovering about how my neurodivergent brain actually works. And there was this statistic in the course description that I couldn’t stop thinking about:

Only 1% of executives believe their AI investments have reached maturity. 47% of C-suite executives identify talent skill gaps as the primary barrier to successful AI implementation.

That gap. That’s where I wanted to be. Not just using AI tools, but understanding how to design and build AI systems that actually work for real people doing real work in complex environments.

So I signed up. And Week 1 completely changed how I think about AI collaboration.

The Assignment That Broke Everything Open

Week 1’s project seemed straightforward: Take “The Professor”—a generic AI learning assistant template—and personalize it for your own learning journey through the course.

Simple, right? Just customize a few settings, maybe add your name, done.

Except that’s not what happened at all.

What happened was this: I spent hours (HOURS) creating incredibly detailed system instructions that transformed The Professor from a generic assistant into something that understood ME. My visual learning preference. My need to see the macro-level vision first before diving into details. My pattern-seeking, connection-making, context-switching neurodivergent brain that everyone (including me, for most of my life) thought was scattered and unfocused.

I included my professional context—working at Vega Federal Solutions as a Senior UX Designer, the DOD/government contracting environment, the complex military applications I design for. I documented my learning style: how I process information through making connections across disparate domains, how I need to understand the “why” before the “how,” how my brain works best when it can see patterns across systems rather than isolated facts.

And here’s where it gets interesting (and a little vulnerable): I included details about my neurodivergent journey. How I was diagnosed with ADD as an adult. How my brain sees multiple problems simultaneously. How what looks like scattered focus is actually pattern recognition across complex systems. How the constant context-switching that exhausts other people is how I naturally operate.

I even included personality traits I wanted The Professor to embody: enthusiastic, witty, wise. Because if I’m going to spend 5 weeks learning from an AI assistant, it should feel like learning from someone I’d actually want to talk to.
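
For anyone who wants a concrete picture of what I mean: the course had us do all of this inside The Professor template itself, not in code, but the same idea translates to any chat API that accepts system instructions. The sketch below is a deliberately stripped-down, hypothetical version (the profile text, the ask_professor helper, and the model name are all placeholders I made up for illustration), just to show where the personalization actually lives.

```python
# Illustrative sketch only. The actual course work happened in a shared
# template ("The Professor"), not in code; this just shows the shape of it.
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

PROFESSOR_INSTRUCTIONS = """
You are The Professor, a learning assistant for an AI fundamentals course.

Learner profile (example values, not my real instructions):
- Role: Senior UX Designer on DOD/government mission planning applications.
- Learning style: visual; needs the macro-level "why" before the "how";
  thinks in patterns and connections across domains.
- Neurodivergent context: ADHD; frequent context-switching is normal,
  not a sign of losing the thread.
- Domain vocabulary: airlift operations, asset allocation, mission planning.

Persona: enthusiastic, witty, wise. Connect new concepts to the learner's
existing domain knowledge before introducing abstractions.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_professor(question: str) -> str:
    """Send one question to the personalized Professor."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": PROFESSOR_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

The point isn’t the code; it’s that every line of that profile is a design decision about how the assistant should meet the learner, before a single question gets asked.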

Then I tested it. I asked both the generic Professor and my personalized Professor the exact same questions about course concepts.

The difference was… honestly, it was night and day.
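
If you’re curious how you’d reproduce that comparison outside a course platform, it’s exactly as unsophisticated as it sounds: same question, two sets of system instructions, read the answers side by side. Again, this is a hypothetical sketch rather than anything the course required; the prompts, the answer_with helper, and the model name are placeholders.

```python
# A/B sketch: identical question, two system prompts, compare the answers.
from openai import OpenAI

client = OpenAI()

GENERIC = "You are The Professor, a helpful learning assistant."
PERSONALIZED = (
    "You are The Professor. The learner is a Senior UX Designer working on "
    "DOD airlift and mission-planning applications..."  # full profile as in the sketch above
)

def answer_with(system_prompt: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "How does a Barrel Master fit into an agentic mission-planning workflow?"
for label, prompt in [("generic", GENERIC), ("personalized", PERSONALIZED)]:
    print(f"--- {label} ---\n{answer_with(prompt, question)}\n")
```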

The “Barrel Master” Moment

Here’s the moment that made me realize something profound was happening.

I asked my personalized Professor a question that included the term “Barrel Master”—a specific military role in airlift operations that I work with in my DOD applications. It’s not common terminology. You wouldn’t know it unless you’d worked in that specific domain.

The generic Professor? Confused. Gave me a generic response about literally making physical barrels or something equally irrelevant.

My personalized Professor? Understood EXACTLY what I was talking about. Responded with context-appropriate information about airlift operations, asset allocation, mission planning workflows. No explanation needed. It just… got it.

That’s when I realized: This isn’t just about making AI “nicer” or “more friendly.” This is about creating AI that understands the CONTEXT of how you think, what you do, and the specific domain knowledge you’re operating within.

And suddenly I was making connections everywhere (because that’s what my brain does).

If personalization works this powerfully for learning… what does that mean for the government and defense applications I design? What does it mean for intelligence analysts who need AI that understands their specific mission context? What does it mean for mission planners who operate in environments where generic responses could cost lives?

The Pattern Recognition Revelation

But here’s where it gets even more interesting. Because as I was documenting my learning style and cognitive patterns for The Professor, I started seeing parallels to something I’d written about months ago: whether neurodivergent minds have an AI advantage.

In that post, I’d hypothesized that certain cognitive traits seem particularly well-suited for AI collaboration:

  • Comfort with ambiguity

  • Pattern recognition across domains

  • Iterative thinking

  • Active skepticism

  • Parallel processing

And now, in Week 1 of this course, I was literally documenting those exact traits as the FOUNDATION for effective AI personalization.

My ADHD brain that sees patterns and connections everywhere? That’s not a bug I need to manage—it’s the exact cognitive architecture that makes me effective at AI collaboration.

My tendency to context-switch constantly between macro-level strategy and minute details? That’s not scattered focus—that’s the ability to operate at multiple scales simultaneously, which is exactly what complex AI system design requires.

My discomfort with surface-level solutions and tendency to question everything? That’s not being difficult—that’s the active skepticism needed to catch AI hallucinations and maintain appropriate trust calibration.

The overwhelm I feel when I see 46 browser tabs and think about 46 AI apps I “need” to learn? That’s not anxiety—that’s sophisticated pattern recognition detecting the gap between AI marketing hype and actual capability, identifying risks and potential problems before they become visible to others.

I’ve spent YEARS trying to “manage” my neurodivergent traits. Trying to be more focused. More linear. More… normal.

And here I was, in Week 1 of an AI course, discovering that those traits aren’t things to overcome—they’re exactly what effective AI collaboration requires.

What This Means for High-Stakes AI Design

This realization hit me hard. Because I don’t just use AI tools—I design AI systems for environments where lives are at stake.

I’ve been writing about this for months. About how we need AI systems that preserve human agency. About shared responsibility between AI and human judgment. About the importance of transparency, appropriate trust calibration, and designing for expert users rather than trying to make everything “simple.”

But personalization adds a whole new dimension to this thinking.

If I needed this level of customization to feel truly understood by an AI learning assistant… what does that mean for the military users, intelligence analysts, and mission planners I design for?

What if the reason so many AI systems fail in professional contexts isn’t because the AI isn’t “smart enough”—but because it’s not PERSONALIZED enough? Because it doesn’t understand the specific context, domain knowledge, and cognitive patterns of the people using it?

What if effective AI in high-stakes environments requires designing for cognitive diversity—not just accessibility in the traditional sense, but actual adaptation to different ways of thinking, processing information, and making decisions?

I started thinking about my failed CAMPS project—4.5 years working on a massive Air Force application that never shipped. We tried to build everything for everyone all at once. We never understood the underlying patterns. We never personalized for the actual users and their specific workflows.

What if we’d approached it differently? What if instead of trying to create one generic system, we’d built personalized AI assistants that understood each user’s role, their specific mission context, their decision-making patterns?

The Questions I’m Now Asking

Week 1 left me with more questions than answers (which, honestly, is how I know I’m learning something real):

How do we scale personalization in government/defense applications? You can’t manually customize every AI assistant for every user. But you also can’t have generic AI in environments where context is everything. What’s the middle ground?

How do we design AI systems that work WITH different cognitive styles rather than assuming one interaction model fits all? My neurodivergent brain needs certain things from AI collaboration. Someone else’s brain might need completely different things. How do we build systems that adapt?

What does “appropriate personalization” look like in high-stakes environments? There’s a balance between AI that understands you and AI that might be too accommodating, reinforcing biases or blind spots. Where’s that line?

If personalization is this powerful for learning, what does it mean for operational AI? The Professor helps me learn about agentic AI. But what about AI that helps intelligence analysts make sense of complex data? AI that helps mission planners coordinate assets across multiple operations? How does personalization change effectiveness in those contexts?

What I’m Realizing About My Work

I’ve been designing UX for AI systems for years now. But I’ve been approaching it mostly from the interface level—how do we show AI reasoning, how do we build appropriate trust, how do we give users control.

What Week 1 taught me is that effective AI design might need to start much earlier. Not with the interface, but with understanding the USER at a much deeper level. Their cognitive patterns. Their domain expertise. Their specific context and needs.

This connects directly to the healthcare AI framework I wrote about—Vincent Buil’s work at Philips on designing AI for high-stakes medical applications. He emphasized that in high-risk environments, AI must enhance rather than replace human judgment. That shared responsibility must be designed, not just declared.

But what I’m adding now is: That shared responsibility might require personalization. Because “human judgment” isn’t generic—it’s specific to the person, their expertise, their cognitive style, their decision-making patterns.

The Practical Applications Already Emerging

By the end of Week 1, my brain was already spinning with applications (because of course it was—pattern recognition across domains, remember?):

For my work: How could personalized AI assistants help bridge knowledge gaps for non-veterans like me working on military applications? Could we create AI that helps new team members quickly gain contextual understanding that normally takes years?

For my son: Could I create a personalized learning tutor for my 7-year-old with dyslexia? An AI that understands his specific learning challenges and adapts explanations to how HIS brain processes information?

For my bosses’ startup: They’re pursuing DOD contracts in a space where domain expertise is everything. Could personalized AI assistants help them scale their expertise without hiring dozens of subject matter experts?

For the mission planning demo I’m building: Could personalization make the difference between AI that’s “interesting” and AI that’s actually USEFUL for the operators who would use it?

The Bigger Picture

Here’s what I’m sitting with as I move into Week 2: AI personalization isn’t just about making tools more user-friendly. It’s about fundamentally rethinking how we design human-AI collaboration.

The $50 billion AI talent gap that Sara and Tyler mentioned? I don’t think it’s just about people who can build AI systems. I think it’s about people who can design AI systems that work WITH human cognitive diversity rather than against it.

And my neurodivergent brain—the one that sees patterns across complex systems, that context-switches constantly, that questions everything, that gets overwhelmed by information overload but somehow processes it all simultaneously—might be exactly the cognitive architecture needed to bridge that gap.

I’ve written before about becoming “tech’s conscience”—about how UX designers are being asked to be the ethical guardrails for AI development. But what if it’s more than that? What if we’re also being asked to be the translators between AI capability and human cognitive diversity?

What if the future of AI isn’t about making it smarter—it’s about making it more adaptable to how different humans actually think?

What’s Next

Week 1 was about personalization. But Week 2? Week 2 was about to get uncomfortable in the best possible way.

Because if creating a personalized AI assistant required understanding how I learn… Week 2 was going to require something even more challenging: analyzing my own voice with forensic precision. Understanding not just how I think, but how I COMMUNICATE that thinking.

And what I discovered about my neurodivergent communication patterns—the parenthetical layering, the extreme rhythmic variation, the way I make invisible connections visible—changed how I think about AI-generated content entirely.

But that’s a story for Part 2.

For now, I’m sitting with this: Seven weeks ago, I thought I was learning to build AI assistants. What I actually started learning was how to design collaboration between human intelligence and artificial intelligence. And why my neurodivergent brain might be exactly what this moment requires.

The question isn’t whether AI belongs in high-stakes environments. The question is: Are we designing it with the personalization, rigor, and human-centered thinking those stakes demand?

Stay tuned.


In the spirit of transparency I advocate for in AI development: I worked with Claude to structure and refine these reflections from my course experience. The insights, breakthroughs, and honest assessments are from my actual Week 1 work in Sara and Tyler’s AI Agentic Fundamentals course, with AI assistance in articulating the patterns I’m still processing.

Kathryn Neale