Survey Says: What My Friends and Family Really Know About AI (And What It Made Me Think About My Users)


After nine months of diving deep into AI, I wondered: Are the people in my life as unprepared as I was? What I learned has me thinking differently about the users I design for in high-stakes environments.

Eight months ago, I thought AI was just another tech marketing fad—another "shiny object" promising to make us all more productive. Fast forward to today, and I'm knee-deep in prompt engineering, following AI safety discussions, and genuinely concerned about designing AI systems for users who might know even less than I did when I started.

This transformation got me wondering about something that's been nagging at me in my work as a UX designer: How many people in my life are experiencing this same whiplash? More importantly, what do my friends and family—people who represent the same demographic as many government and defense workers I design for—actually know about the AI tools that are rapidly reshaping our world?

Those of us who remember landline phones, who lived through the dot-com revolution in our early twenties and watched it transform the world in a decade—we're seeing the same magnitude of change with AI, except this time it feels different somehow. More consequential.

As someone who designs systems for high-stakes environments, I kept coming back to one question: if the people in my life are struggling with AI adoption, what does that suggest about the users I design for professionally? The stakes feel higher when you're creating systems where wrong decisions could affect critical outcomes.

So I decided to ask 15 friends and family members what they actually know about AI. What I discovered has me thinking differently about assumptions I might be making about user knowledge.

The Research: 15 Friends and Family Members

I surveyed 15 friends and family members in my demographic—people who share the generational experience of adapting from landlines to smartphones to AI. While they're not Defense professionals, they represent the broader population of mid-career adults that includes many government workers, and I suspect their knowledge gaps mirror what I'd find among Defense sector users.

These aren't technophobic Luddites. These are smart, capable professionals who've successfully adapted to major technological shifts before, which makes their AI knowledge gaps concerning for anyone designing systems for similar demographics in high-stakes environments.

Finding #1: The "Familiar Yet Clueless" Paradox

Here's the alarming headline finding: 53% consider themselves "somewhat familiar" with AI, but their actual knowledge reveals gaps that would be dangerous in Defense applications.

When I dug deeper into what they actually understand:

  • 40% have never heard of "AI hallucinations" (when AI confidently generates false information)

  • 33% don't know what "prompt engineering" means

  • 27% have no idea how important datasets are to AI tool reliability

This disconnect between perceived familiarity and actual knowledge is precisely what keeps me awake at night as a UX designer. In Defense contexts, overconfidence in poorly understood tools can have catastrophic consequences.

Finding #2: Regular Users Don't Understand the Fundamentals

Even more alarming: 40% use ChatGPT regularly, but many still don't grasp basic concepts about reliability and limitations.

When asked where AI gets its information, responses included:

  • "Unsure"

  • "Web scraping the internet"

  • "The Internet, social media, research papers"

While not entirely wrong, these responses reveal a shallow understanding of training data, model limitations, and why AI sometimes fails spectacularly—exactly the kind of knowledge gap that could be dangerous in high-stakes decision-making environments.

Finding #3: Mental Models Range from Naive to Panicked

When asked what comes to mind about AI, responses varied wildly—revealing that people are processing AI through completely different frameworks:

The Dangerously Optimistic:

  • "A tool that can be used to streamline work or help analyze data"

  • "Personal intern"

The Appropriately Concerned:

  • "Some really cool things that can increase productivity and effectiveness with some potentially scary pitfalls"

  • "Taking over the economy"

The Deep Thinkers:

  • "Potential for positive change through automation, but also the potential for lazy reliance on technology. Currently we are forced into at least some semblance of critical thinking..."

In Defense environments, these vastly different mental models could lead to inconsistent risk assessment and inappropriate trust calibration with AI systems.

Finding #4: The "Just Tell Me What's Safe" Barrier

When asked what would motivate them to invest time learning AI tools, responses revealed something crucial for my UX design work:

  • "Knowing where to start"

  • "Confidence that the output is accurate and there is a clear leader in which tool is best"

  • "User friendly (not time consuming to learn the functionality)"

  • "I think I'd need help in figuring out which tools are the most helpful and least invasive/troublesome"

Translation: My user base wants curated, trustworthy guidance—not another overwhelming list of AI tools to evaluate. This has massive implications for how I design AI education into Defense applications.

Finding #5: The Generational Responsibility Weight

Perhaps most telling was this response about learning AI tools:

"As a parent, AI feels overwhelming and like a slippery slope. I do not want to become dependent on these tools, and want to model good interactions with them for our children. I also worry that they are slowly taking away our need to actually think for ourselves?"

This perfectly captures the weight my generation feels: We're responsible for both adopting AI responsibly ourselves AND preparing our kids for an AI-driven world. In Defense contexts, we're also responsible for maintaining critical thinking skills that could literally be matters of life and death.

What I'm Thinking About After These Conversations

These findings have me reflecting on some things that matter for my work designing systems:

The Knowledge-Stakes Question

In the commercial world, AI knowledge gaps might lead to suboptimal results. But I work on systems where misunderstanding AI limitations could have far more serious consequences. It makes me wonder: if smart people in my life have these knowledge gaps, what does that mean for users in high-stakes environments?

The Confidence-Knowledge Gap

That disconnect between feeling "somewhat familiar" and actually understanding fundamentals like hallucinations has me thinking about interface design. How do you build appropriate confidence and skepticism into systems when users might not realize what they don't know?

The Education Challenge

The responses about wanting guidance rather than more tools to evaluate really resonate with something I've been grappling with: How do you educate users about AI capabilities and limitations through the interface itself, rather than expecting them to become AI experts?

What This Means for Defense AI Integration

These findings have profound implications for anyone building AI tools for government and defense applications:

1. Assume Zero Baseline Knowledge

Even "somewhat familiar" users lack critical understanding of AI fundamentals. Design accordingly.

2. Build Trust Calibration Into Interfaces

Users need to develop appropriate skepticism and confidence levels with AI recommendations. This can't be left to training—it needs to be designed into the user experience.

3. Create Invisible Education

My user base wants guidance, not coursework. AI literacy needs to be embedded naturally into workflows rather than treated as separate training.

4. Design for the 99% and the 1%

Systems need to work for routine decisions while maintaining human oversight capabilities for critical moments when wrong decisions could affect warfighters and missions.

The Bigger Question: Are We Moving Too Fast?

Here's what this research really reveals: We're implementing AI in Defense environments at a pace that far exceeds our users' ability to develop appropriate mental models and skills.

This isn't a criticism of my colleagues—it's a recognition that AI development has outpaced human adaptation. The question is: What do we do about it?

As a UX designer, I have a responsibility to bridge this gap. I need to design systems that:

  • Make AI limitations transparent without being overwhelming

  • Build appropriate trust and skepticism into user interactions

  • Educate users about AI capabilities through the interface itself

  • Preserve critical thinking skills while leveraging AI efficiency

What I'm Learning: We Still Have to Be the Critical Thinkers

The most important insight from this survey aligns with what I've been learning in my own AI journey: We can't outsource critical thinking to AI, especially in high-stakes environments.

My generation of Defense professionals needs to understand that AI is a powerful tool for processing information and identifying patterns, but human judgment, ethical reasoning, and strategic thinking remain essential. The challenge is designing systems that make this collaboration intuitive rather than burdensome.

Moving Forward: The UX Designer's New Mission

This survey has crystallized my mission as a UX designer in the AI era: I'm not just designing interfaces for AI tools—I'm designing the bridge between human expertise and artificial intelligence for users who are still learning to navigate this collaboration.

The people in my study aren't early adopters or tech skeptics. They're pragmatic mid-career adults—the same demographic that makes up much of the government and professional workforce. Their approach to AI adoption will likely influence how thoughtfully or haphazardly this technology gets integrated across many sectors, including the high-stakes environments where I work.

Right now, they're trying to figure it out as they go. And in Defense contexts, "figuring it out as we go" isn't good enough.

That's where thoughtful UX design becomes critical. We have the opportunity—and responsibility—to make AI collaboration intuitive, safe, and effective for users who don't have time to become AI experts but whose decisions affect national security.

The question is: Will we design systems that enhance human capability, or will we create new vulnerabilities disguised as technological advancement?

This research is part of my personal exploration of AI adoption patterns among people in my demographic. I'm pursuing this because I see connections between how my friends and family approach AI and the challenges I face designing systems for Defense sector professionals. Understanding these knowledge gaps and mental models helps inform how I think about user experience design in high-stakes environments.

Kathryn Neale