What My Friends Really Know About AI (And Why It Keeps Me Up at Night as a UX Designer)
After nine months of diving deep into AI, I wondered: Are the people in my life as unprepared as I was?
What I learned has me thinking differently about the users I design for in high-stakes environments.
Nine months ago, I thought AI was just another tech marketing fad—another “shiny object” promising to make us all more productive. Fast forward to today, and I’m knee-deep in prompt engineering, following AI safety discussions, and genuinely concerned about designing AI systems for users who might know even less than I did when I started.
This transformation got me wondering about something that’s been nagging at me in my work: How many people in my life are experiencing this same whiplash? More importantly, what do my friends and family—people who represent the same demographic as many government and defense workers I design for—actually know about the AI tools that are rapidly reshaping our world?
Those of us who remember landline phones, who lived through the dot-com revolution in our early twenties and watched it transform the world in a decade—we’re seeing the same magnitude of change with AI, except this time it feels different somehow. More consequential.
As someone who designs systems for high-stakes environments, I started wondering: if the people in my life are struggling with AI adoption, what does that suggest about the users I design for professionally? The stakes feel higher when you’re creating systems where wrong decisions could affect critical outcomes.
So I decided to ask 15 friends and family members what they actually know about AI. What I discovered has me thinking differently about assumptions I might be making about user knowledge.
The Survey: 15 Friends and Family Members
I asked 15 people in my life—friends and family ranging from their late 30s to 50s, most with established careers and families—about their AI knowledge and experiences. These are people who share my generational experience: we adapted to smartphones, lived through major technological shifts, and now find ourselves trying to figure out this next wave.
They represent the broader demographic that includes many of the government and professional workers I design systems for. Smart, capable people who’ve successfully navigated technological change before, which makes their responses particularly interesting to consider.
The “Familiar Yet Clueless” Paradox
Here’s what stopped me in my tracks: 53% consider themselves “somewhat familiar” with AI. On the surface, that sounds encouraging—most people feel they have at least some grasp on what AI is and does.
But when I dug deeper into what they actually understand, a troubling picture emerged. 40% have never heard of “AI hallucinations”—when AI confidently generates false information. A third don’t know what “prompt engineering” means. Over a quarter have no idea how much the underlying datasets shape an AI tool’s reliability.
This disconnect between perceived familiarity and actual knowledge is exactly what I was afraid of finding. In my work designing systems for high-stakes environments, overconfidence paired with knowledge gaps isn’t just inconvenient—it’s potentially dangerous.
The Mental Model Chaos
When I asked what comes to mind when they think about AI, the responses revealed just how wildly different people’s frameworks are:
Some saw it pragmatically: “A tool that can be used to streamline work or help analyze data.” One person called it their “personal intern.”
Others expressed concern with varying levels of alarm: “Some really cool things that can increase productivity and effectiveness with some potentially scary pitfalls.” Another worried about it “taking over the economy.”
And then there were the deeper reflections: “Potential for positive change through automation, but also the potential for lazy reliance on technology. Currently we are forced into at least some semblance of critical thinking...”
What strikes me is that these aren’t just different levels of knowledge—they’re fundamentally different ways of understanding what AI is. In my work, if users are approaching the same system with mental models this divergent, how can we possibly design interfaces that serve them all effectively?
Regular Users Don’t Know What They Don’t Know
Here’s what really concerns me: 40% of my survey group uses ChatGPT regularly. They’re actively working with AI tools, making decisions based on AI outputs, integrating AI into their workflows.
But when I asked where AI gets its information, responses included things like “Unsure,” “Web scraping the internet,” and “The Internet, social media, research papers.”
While not entirely wrong, these answers reveal a shallow understanding of how training data works, why AI has limitations, and when outputs should be questioned. In commercial applications, this might lead to minor inconveniences. In the defense applications I work on? The implications are much more serious.
The “Just Tell Me What’s Safe” Problem
When I asked what would motivate them to invest time learning AI tools, their responses revealed something I need to take seriously in my design work:
“Knowing where to start.”
“Confidence that the output is accurate and there is a clear leader in which tool is best.”
“User friendly (not time consuming to learn the functionality).”
“I think I’d need help in figuring out which tools are the most helpful and least invasive/troublesome.”
These aren’t people asking for advanced training or technical deep-dives. They’re asking for curated, trustworthy guidance about what’s safe to use and how to use it effectively. The question for me as a designer: How do I build that guidance into interfaces rather than expecting users to seek it out separately?
The Weight My Generation Carries
One response has stuck with me since I read it:
“As a parent, AI feels overwhelming and like a slippery slope. I do not want to become dependent on these tools, and want to model good interactions with them for our children. I also worry that they are slowly taking away our need to actually think for ourselves?”
This captures something profound about the generational moment we’re in. We’re responsible for both adopting AI responsibly ourselves AND preparing our kids for an AI-driven world we don’t fully understand. In defense contexts, we’re also responsible for maintaining critical thinking skills that could literally be matters of life and death.
The weight of that responsibility—personal, professional, societal—is real. And it’s showing up in how people approach AI adoption.
What I’m Starting to Understand
These conversations with my friends and family have clarified something I’ve been wrestling with in my professional work. The challenge isn’t just about making AI interfaces user-friendly. It’s about designing for users whose understanding of what AI is and does may be fundamentally incomplete or incorrect.
In commercial applications, this knowledge gap might mean suboptimal results or minor frustrations. But I design systems for environments where understanding AI limitations could be critical. If smart people in my life have these gaps in understanding, what does that mean for the professionals using high-stakes systems?
The disconnect between feeling “somewhat familiar” and actually grasping concepts like hallucinations, prompt engineering, and data reliability has me thinking about interface design in new ways. How do you build appropriate confidence and skepticism into systems when users might not realize what they don’t know?
How do you educate users about AI capabilities and limitations through the interface itself, rather than expecting them to become AI experts before they can use the system safely and effectively?
The Education Challenge I’m Grappling With
The responses about wanting guidance rather than more tools to evaluate really resonate with something I’ve been thinking about: traditional training doesn’t work for AI literacy in professional contexts. People don’t have time for courses. They need to learn through doing.
But how do you design systems that teach users about AI reliability, limitations, and appropriate use cases while they’re actually using the system? How do you build skepticism and critical thinking into the user experience without creating analysis paralysis?
These aren’t rhetorical questions for me—they’re design challenges I’m actively working through.
Moving Forward: What This Means for My Work
The people in my survey aren’t early adopters or tech skeptics. They’re pragmatic mid-career adults—the same demographic that makes up much of the government and professional workforce. Their approach to AI adoption will likely influence how thoughtfully or haphazardly this technology gets integrated across many sectors, including the high-stakes environments where I work.
Right now, they’re trying to figure it out as they go. And understanding how they approach AI adoption helps me think about the users I design for, where “figuring it out as we go” requires much more thoughtful design support.
I don’t have all the answers yet. I’m still processing what these patterns mean for my work and how to design systems that bridge the gap between user perception and AI reality. But I know this much: we can’t keep designing AI interfaces as if users have accurate mental models of what these systems are and how they work.
The question I’m sitting with now: How do we design AI systems that meet users where they actually are in their understanding, rather than where we wish they were?
This research is part of my personal exploration of AI adoption patterns among people in my demographic. I’m pursuing this because I see connections between how my friends and family approach AI and the challenges I face designing systems for defense-sector professionals. Understanding these knowledge gaps and mental models helps inform how I think about user experience design in high-stakes environments.
In the spirit of transparency about AI collaboration, I worked with Claude to organize and refine these insights from my survey responses. The analysis, conclusions, and personal reflections are my own, with AI assistance in structuring the narrative and improving clarity.