I Turned Claude into My Research Partner—Here's What I Learned
In 8 weeks, I went from knowing nothing about a complex government domain to delivering a working prototype that impressed 13-year industry veterans.
But the process came with significant challenges and risks. Here's what really happened when AI became my research partner.
The Impossible Timeline
Eight weeks ago, my boss walked into a meeting and announced we had a chance to pitch a major government contract. The catch? We needed to demonstrate how AI could transform a complex federal process that I knew absolutely nothing about.
In the traditional UX world, this would have been impossible. I'd need months just to understand the domain, more months to interview enough subject matter experts, and even more time to synthesize insights into actionable design solutions. By the time I had a prototype ready, the opportunity would be long gone.
But something different happened. Something that completely changed how I think about UX research in the AI era.
I accidentally turned Claude into my research partner.
The Traditional Bottleneck
Here's the problem every UX designer faces when entering a complex domain: the knowledge gap. You can't design good experiences for processes you don't understand. But gaining that understanding traditionally requires extensive access to subject matter experts who are usually busy, expensive, and limited in availability.
In government contracting, this problem is compounded. The domain knowledge is incredibly specialized, the stakeholders are risk-averse, and the learning curve is steep. Most UX projects in this space take 12+ months just to reach basic competency.
I had 8 weeks to go from zero to demo-ready.
The Accidental Partnership (And Its Complications)
It started simply enough. After conducting intensive interviews with our domain expert—a government specialist with 13 years of experience—I fed the transcripts to Claude for basic synthesis. Just trying to make sense of the overwhelming complexity.
But instead of stopping at summaries, I kept going. I started asking Claude to help me generate "How Might We" statements. Then to break down complex regulatory documents into understandable frameworks. Then to help me envision what revolutionary workflows might look like.
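For the curious, the mechanics were mundane. Here's a minimal sketch of what that synthesis step looked like, assuming the Anthropic Python SDK; the file name, prompt wording, and model ID are illustrative placeholders, not the exact ones from the project:

```python
# Minimal sketch: synthesizing an interview transcript with Claude.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. File name, prompt wording,
# and model ID below are illustrative, not the project's actual ones.
import anthropic

client = anthropic.Anthropic()

with open("expert_interview_01.txt") as f:
    transcript = f.read()

prompt = (
    "You are helping a UX researcher who is new to this domain.\n"
    "Below is an interview transcript with a subject matter expert.\n\n"
    "1. Summarize the key pain points in plain language.\n"
    "2. Propose five 'How Might We' statements grounded ONLY in the "
    "transcript. Flag anything you are inferring rather than quoting.\n\n"
    f"Transcript:\n{transcript}"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

The design choice worth copying is that last instruction: asking the model to flag its own inferences makes the hallucination problem visible instead of hidden.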
Here's where it got interesting—and problematic.
Claude started generating detailed use case scenarios that felt incredibly realistic. Fake conversations between government users and AI systems. Elaborate workflow descriptions that seemed to capture the nuances of government processes. For someone who knew nothing about this domain, it was exactly what I needed to understand the landscape.
The problem? These were hallucinations. Sophisticated, convincing, domain-appropriate hallucinations, but fabrications nonetheless.
The breakthrough? Even though they were fake, they gave me enough of a "sense" of how real use cases could unfold that I could start designing at a high level. Claude helped me brainstorm in domain language that was completely foreign to me, providing just enough conceptual scaffolding to see the big picture without getting lost in details I wasn't ready to understand yet.
What Actually Worked (And What Didn't)
The Acceleration Effect
Claude helped me build enough domain proficiency to grasp the macro-level "forest" without getting bogged down in the trees. This forest-level understanding was crucial: it let me bring concepts to brainstorming sessions with my PM (who is also a domain expert) much faster than traditional research methods would have allowed.
Here's where the magic happened: my PM could look at the AI-assisted concepts I brought and immediately extract features and value propositions for a brand new product. Because I had enough domain fluency to speak his language, our collaboration became remarkably productive. Then I could take all those ideas from real human experts and prototype solutions dramatically faster.
The entire process was electric because it was happening in 2 weeks, not the 12+ months such projects usually take. We were making decisions faster, understanding implications faster, and iterating at unprecedented speed.
The Dangerous Velocity Problem
But here's the critical drawback I'm grappling with: it might be too fast for proper reflection and testing.
The speed is intoxicating. When you can go from concept to working prototype in days instead of months, there's a temptation to keep accelerating toward implementation. But speed without validation is just speed toward the wrong solution.
The reality check: If we go directly from rapid prototype to actual application without extensive human testing, we could end up miles from where we should be. The AI-assisted process is so convincing and the prototypes so polished that they can create false confidence in solutions that haven't been properly validated with real users in real contexts.
The Human Validation Imperative
This is why every AI-generated insight, every hallucinated use case, every rapid prototype iteration had to be validated with our domain expert. And not just once—continuously throughout the process.
The breakthrough insights didn't come from Claude alone. They emerged from the combination of AI-accelerated domain learning, real human expertise validation, and rapid iteration between synthetic scenarios and expert feedback.
The essential lesson: AI can accelerate your learning and prototyping, but humans must validate every step, especially when moving at unprecedented speed.
Why This Matters (And Why It's Not Revolutionary Yet)
I initially thought I was discovering a revolutionary new methodology for UX research. But honestly? There are probably dozens of researchers and designers experimenting with similar AI-human collaboration approaches right now. The tools are accessible, the techniques are emerging everywhere, and the basic concept of "AI as research partner" isn't particularly novel anymore.
What might be different is the specific combination of domain acceleration, stakeholder collaboration speed, and prototype velocity I experienced. But I need more projects, more validation, and more systematic documentation before claiming I've discovered something truly new.
The Rapid Prototyping Alignment Challenge
Reading Greg Nudelman's "UX for AI," I'm struck by his emphasis on rapid iterative testing and evaluation. He argues that in the AI era, traditional development cycles are too slow—we need to prototype and test at the speed of AI development itself.
This aligns perfectly with what I experienced, but it also highlights the risk. Nudelman emphasizes that rapid prototyping must be coupled with continuous user validation, not just stakeholder approval. The speed of iteration must match the speed of user feedback, or you end up building the wrong thing very efficiently.
The Testing Imperative
Here's what I learned: The #1 requirement for any AI-accelerated UX process is continuous testing with real users. Not just at the end, not just with domain experts, but throughout the entire accelerated development cycle.
The danger of AI partnership isn't that it produces bad ideas—it's that it produces convincing ideas so quickly that you can build elaborate solutions before discovering they don't actually solve real user problems.
The Skills That Actually Matter
Based on this experience, here are the capabilities that proved most critical:
Prompt engineering for domain learning - Learning to ask AI the right questions to extract meaningful insights from complex information without accepting hallucinations as fact (a minimal prompt sketch follows this list).
Hallucination detection - Developing instincts for when AI-generated content sounds right but needs human verification.
Rapid validation orchestration - Creating fast feedback loops with real domain experts to continuously check AI-assisted insights against reality.
Speed management - Knowing when to accelerate and when to slow down for proper reflection and testing.
Stakeholder collaboration at AI speed - Using AI insights to enhance human conversations rather than replace them.
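To make the first two skills concrete, here's a reconstructed (not verbatim) prompt scaffold of the kind I gravitated toward. Treat it as a sketch under my own assumptions; the guardrail rules are the point, not the exact wording:

```python
# A reconstructed (not verbatim) prompt scaffold for domain learning.
# The guardrails force the model to separate sourced claims from
# speculation, which turns hallucination detection into a review
# step instead of a guessing game.
DOMAIN_LEARNING_PROMPT = """
You are tutoring a UX researcher who is new to {domain}.

Question: {question}

Rules:
- Label every claim [SOURCED] if it comes from material I gave you,
  or [SPECULATIVE] if you are generalizing or inventing detail.
- When you generate an example scenario, say explicitly that it is a
  synthetic illustration, not a real case.
- End with a list of questions I should take to a human domain expert
  for verification before designing against any of this.
"""

print(DOMAIN_LEARNING_PROMPT.format(
    domain="federal acquisition workflows",  # hypothetical domain
    question="Walk me through how a typical intake request moves "
             "from submission to approval.",
))
```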
What I'm Still Learning
This experience raised more questions than it answered:
How do you maintain research rigor at AI speed? Traditional UX research methodologies exist partly to prevent exactly the kind of rapid assumption-making that AI enables. How do we preserve the critical thinking while embracing the acceleration?
When is fast too fast? There's clearly a point where velocity becomes reckless. I haven't figured out where that line is yet.
How do you systematize intuitive AI collaboration? The most valuable moments in my process were spontaneous—times when I asked Claude something I hadn't planned to ask, or when a conversation with my PM sparked an AI query that led to breakthrough insights. How do you replicate spontaneity?
What happens when everyone has access to the same AI tools? If this kind of acceleration becomes standard, what becomes the new differentiator?
The Honest Assessment
Eight weeks ago, I thought I was facing an impossible timeline. I ended up delivering something that exceeded expectations, but I'm not sure I've discovered a revolutionary methodology so much as stumbled into a productive way of working with emerging tools.
What I do know: AI-human partnership in UX research can be incredibly powerful, but it requires constant vigilance about validation, speed management, and human oversight. The acceleration is real, but so are the risks.
What I'm watching for: how this approach scales across different domains, team structures, and project constraints. Whether the speed advantages survive more rigorous testing requirements. Whether the quality of insights holds up once the novelty of AI collaboration wears off.
The future of UX research will likely include AI partnership, but the specific methodologies, best practices, and quality standards are still being figured out by all of us in real-time.
The most important thing I learned: No matter how sophisticated AI becomes as a research partner, every insight, every assumption, and every design decision must ultimately be validated by real humans solving real problems. The speed is thrilling, but the responsibility to get it right remains entirely human.
In the spirit of transparency about AI collaboration, I worked with Claude to organize and refine these reflections on my recent research experience. The insights, challenges, and honest assessments described are from real project work, with AI assistance in articulating the lessons I'm still learning.