I Thought I Was Learning AI Tools. Instead, I Became Tech's Conscience
I signed up for Ioana Teleleanu's AI Product Design course expecting to learn about tools and workflows. Instead, I found myself in an unexpected Ethics 101.
(above image co-created with Midjourney)
Turns out, that's exactly what we need right now.
The Curriculum I Didn't Expect
When I enrolled in the AI for Designers certification, I was thinking wireframes, prototypes, maybe some prompt engineering techniques. You know—practical stuff to make my UX work more efficient. And it did give me that. But at the end, the course opened up to the real impacts of AI, the ones we don't know about yet and can't foresee.
So here I am in the final section of the course, deep in discussions about clean datasets, bias detection, and the moral implications of AI hallucinations. My reading material has shifted from "10 AI Tools Every Designer Should Know" to Greg Nudelman's "UX and AI: A Framework for Designing AI-Driven Products"—which reads more like a philosophy text than a how-to guide.
To be honest, I wasn't expecting such a large emphasis on ethics for UX designers, but there it is, waiting at the end of all the practical material. And now it's got me really thinking: is this the "new UX"?
With great power comes great responsibility. And we've been handed tremendous power.
The Job Description No One Wrote
Somewhere along the way, while we weren't paying attention, the UX designer role evolved. We're no longer just pixel-pushers or user advocates. We've become something else entirely: tech's conscience.
Sure, we've been reading insider perspectives on how AI is shifting the dynamics of the tech industry faster than we can blink, and the predictions are already proving true, at least within the UX space. What countless online UX certifications have been graduating are "Production-UX Designers": the ones who can work in Figma, the pixel-pushers, the UI masters and Design System producers. With AI, those jobs are already being re-evaluated, and the experts were right to say that UX needs to focus on strategic alignment with business goals, expertise in UX research metrics and methods, and the psychology of the human brain to understand users more effectively.

Within a few years, the "Internet" is predicted to become a dynamic, "individual-centric" experience customized to each one of us. With AI constantly learning about each of us, there are even predictions of apps being created in the moment, built to help each of us accomplish one specific goal online that is unique to us, right then. Our reality is literally being reshaped in front of us, by us, through our own AI making.
But it's all got me thinking even more: is something more profound going on? Looking at my course, my new "AI toolbox" now includes ensuring clean, unbiased datasets (because biased data creates biased outcomes), building systems that handle AI hallucinations gracefully (because AI lies confidently), creating transparency about what the AI is doing at every step (because users deserve to know), and designing "exit ramps" so users can take manual control (because human agency matters). The new "UI" is not the traditional "pretty" user interface. It's marrying the architecture underneath with the right workflows at the right moment, and working ever more closely with data architects. But most importantly, it's questioning everything.
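Since "exit ramps" and graceful hallucination handling can sound abstract, here's a minimal sketch of what that pattern might look like in practice. In the spirit of the transparency I'm advocating: this is my own illustration, not something from the course, and every name in it (the `AiSuggestion` shape, the confidence threshold, the handler functions) is hypothetical.

```typescript
// Hypothetical sketch: route low-confidence AI output to a manual "exit ramp".
// None of these names come from a real library; they exist only to illustrate.

interface AiSuggestion {
  text: string;        // what the model produced
  confidence: number;  // 0..1, as reported by the model or a verifier
  sources: string[];   // citations we can show the user (transparency)
}

const CONFIDENCE_FLOOR = 0.75; // assumed threshold; tuned per product, not universal

function presentSuggestion(s: AiSuggestion): void {
  if (s.confidence < CONFIDENCE_FLOOR || s.sources.length === 0) {
    // Exit ramp: admit uncertainty and hand control back to the human.
    showBanner("We're not confident in this answer. Want to review or edit it yourself?");
    enableManualMode();
    return;
  }
  // Transparency: always show what the AI did and where it came from.
  render(s.text, { sourcesShown: s.sources });
}

// Stubs so the sketch runs on its own; a real app would wire these to actual UI.
function showBanner(msg: string): void { console.log(`[banner] ${msg}`); }
function enableManualMode(): void { console.log("[mode] manual control enabled"); }
function render(text: string, opts: { sourcesShown: string[] }): void {
  console.log(`[render] ${text} (sources: ${opts.sourcesShown.join(", ")})`);
}

// Example: a shaky answer with no sources takes the exit ramp.
presentSuggestion({ text: "Paris is the capital of Germany.", confidence: 0.4, sources: [] });
```

What strikes me about even this toy version is that the hard parts aren't technical. What counts as "confident enough," and how honestly that banner speaks to the user, are design decisions. Ethical ones.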
That last one is the kicker. We have to pose hard, challenging questions to our colleagues, peers, stakeholders, and bosses. Questions they may not want to hear. Questions that might slow down development or challenge business assumptions. Questions like "What happens when this AI system gets it wrong?" and "Who gets hurt if this bias goes unchecked?" and "Are we building something that actually serves humans, or just serves our metrics?"
As UX designers, we've been trained to question everything, but more often than not we get sidelined, because more powerful stakeholders bulldoze their way forward with one clear goal: make money. How do we compete with that?
Why Us? Why Now?
Here's what I'm realizing: UX designers are uniquely positioned to be AI's ethical guardrails because we're already trained to think about human impact. We understand user mental models, trust patterns, and failure states. We bridge technical capability with human needs.
We're literally the last line of defense between AI power and human harm.
And here's where my neurodivergent brain becomes an unexpected advantage. I'm already wired to question "normal," to spot patterns others miss, to feel uncomfortable with surface-level solutions. The same ADHD overwhelm that makes me see connections across complex systems? That's exactly what's needed to navigate AI's ethical maze.
My instructor keeps emphasizing: "AI cannot create anything." It can only recombine, reprocess, and reflect what we feed it. So, more than ever, we have to keep our critical-thinking hats on.
(above image co-created with Midjourney)
The Mirror We're Building
This brings me to the most profound realization from my coursework: AI is just a mirror of ourselves—our humanity, weaknesses, biases, strengths, and challenges.
We've literally built The Matrix. Not in the dystopian sense (well, hopefully not), but in the sense that we've created systems that reflect our reality back to us, amplified and automated.
Every design decision I make shapes the reflection AI shows back to humanity. When I design an AI interface, I'm not just building a product—I'm curating humanity's digital mirror. I'm helping determine what gets reflected back: our biases or our values, our shortcuts or our wisdom, our fears or our hopes. And to take it one step further: once the dynamic, individual-centric internet arrives, we'll be designing for every individual on the planet.
That's... heavy. And exhilarating. And terrifying.
The Stakes We Can't Ignore
What happens when a UX designer chooses not to question? When someone is less than moral or ethical? When business pressure overrides ethical considerations?
The implications could be completely disastrous. Or they could be incredibly mind-blowing. Isn't that life? Isn't that exactly the choice point we're all facing?
Every UX designer working with AI faces this daily now. We're being asked to make decisions about how transparent to be about AI limitations, whether to prioritize efficiency over human understanding, how much control to give users versus automated systems, when to flag potential bias versus letting it slide.
These aren't just design decisions anymore. They're moral choices that ripple out into the world.
The Professional Identity I Didn't Ask For
I wanted to learn AI tools to make my UX work more efficient. Instead, I discovered I'm being asked to help decide what kind of reality we leave for our children.
No pressure, right?
We're being asked to become tech's conscience in the room, humanity's advocates during product decisions, the questioners of power and assumptions, the guardians of human agency in an automated world.
This isn't what I signed up for when I decided to become a UX designer. But it's what the world needs right now.
The Responsibility We Inherit
Here's what I'm taking away from this unexpected ethics education: We have a choice. We can design AI systems that diminish human agency, or we can design systems that enhance it. We can build mirrors that reflect our worst impulses, or our best aspirations.
The choice is ours. But only if we choose to see ourselves as more than tool-users or efficiency-optimizers. Only if we embrace the role of tech's conscience—even when it's uncomfortable, even when it slows things down, even when stakeholders don't want to hear our questions.
Because the consequences are BIG if we fail. But they're also incredible if we succeed.
My ADHD brain that sees patterns and connections everywhere? My discomfort with surface-level solutions? My tendency to question everything?
Turns out these aren't bugs in my design process. They're features. They're exactly what this moment requires.
Ironically, as I worked with Claude to help structure these thoughts, I realized I was living the very transparency I'm advocating for—being clear about when and how AI assists in the creative process. What questions are you asking in your AI product work? How are you using your voice as the guardrails are being built? I'd love to hear how other designers are navigating this new responsibility.