Everyone Is Wrong About AI. That Includes the People Warning You About It.
After a year of living inside this technology every day, here’s the one question I think we should actually be asking.
You’ve been sold two stories.
The first one goes like this: AI is here, it’s extraordinary, and it’s coming for your job — fast. The founder of a major fintech company just cut nearly half his workforce in a single move and watched his stock go up 24%. A prominent AI entrepreneur writes that he walks away from his computer for four hours and comes back to find complex software projects fully built, tested, and ready. No corrections needed. Researchers who have spent six years inside the industry are sounding the alarm: this is already happening to us, and you’re next.
The second story goes like this: you’re being manipulated. The people selling you the apocalypse are the same people selling you the product. The fear and the wonder serve the same function — they keep you paying attention and the money flowing. At its core, AI is math. Matrix multiplication at massive scale. A text-prediction engine that recombines what already exists. Not a mind. Not a miracle. Not a harbinger of human obsolescence.
Both stories feel urgent. Both are being told by people with something to gain.
I should say upfront: I’m not a researcher, an engineer, or an AI company founder with a financial stake in how you think about this. I’m a designer who has spent the past year living inside this technology every day, trying to make sense of it — for myself, and for the real humans who will have to work alongside it.
Here’s what that year has taught me: they’re both partially right. And they’re both missing the most important question entirely.
──────────────────────────────────────
The Skeptic’s Case (And Why It Matters)
A researcher named David William Silva wrote a piece recently that cuts through the noise more cleanly than most things I’ve read. It’s worth taking seriously — not because it’s comfortable, but because it’s precise.
His core argument: AI is not magic. It is built from spectacularly ordinary pieces. Math, statistics, linear algebra, massive grids of numbers being adjusted by tiny increments whenever the model guesses wrong. That is what “learning” means. You feed in enormous volumes of data, the computer runs the same routine billions of times, slowly tuning itself, and the output — at sufficient scale — feels like intelligence. It isn’t.
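To see how un-magical that routine is, here’s a toy sketch of it: one adjustable number, a handful of made-up data points, and the same guess-measure-nudge loop repeated until the errors shrink. (This is my illustration of the idea, not anything from Silva’s piece; the numbers and names are mine.)

```python
# A toy "learning" loop: one weight, one input, tiny nudges.
# Real models do this with billions of weights, but the routine is the same.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)
weight = 0.0          # the model's single adjustable number
learning_rate = 0.01  # how big each nudge is

for step in range(1000):
    for x, target in data:
        guess = weight * x                    # the model's prediction
        error = guess - target                # how wrong it was
        weight -= learning_rate * error * x   # nudge the weight to be less wrong

print(round(weight, 3))  # converges toward 2.0 -- "learning" as arithmetic
```

Run it a billion times over the whole internet instead of three data points, and the output starts to feel like intelligence. The routine never changes.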
He cites Yann LeCun — one of the actual architects of modern AI, not a fundraiser, not a hype machine — who has been saying clearly and repeatedly for years what the industry doesn’t want to hear: Large Language Models are a dead end on the path to human-level intelligence. They lack common sense. They lack causal reasoning. They have no model of physical reality. A child learns physics by dropping a spoon from a high chair a thousand times. An LLM learns “physics” by reading sentences about gravity. No amount of scaling — bigger models, more data, more compute — bridges that gap.
LeCun’s voice, Silva points out, gets drowned out. Not by censorship. By irrelevance. Because his perspective destroys the hype. And the hype is where the money lives.
Silva’s best line — the one I haven’t been able to stop thinking about — is this:
AI is “the most productive intern you’ve ever had — one who never sleeps, never complains, and many times fails to check whether the work is actually correct.”
That’s not a takedown. That is a precise technical description. And honestly? It’s more useful than anything a founder breathlessly endorsing their own product has ever said about it.
──────────────────────────────────────
But Here’s What the Skeptics Miss Too
Silva deflates the magic. What he doesn’t do — what almost nobody in this conversation does — is answer the more important question that comes right after the deflation.
So now what?
If AI is a very fast, very confident, sometimes catastrophically wrong intern — someone still has to manage it. Someone still has to design the workflow it operates in. Someone still has to build the system that catches it when it hallucinates. That decides when to hand the decision back to a human. That makes sure the human receiving that handoff actually has what they need to make a meaningful call.
In a hospital. In a defense operation. In a financial institution. In any high-stakes environment where an AI agent is making recommendations that real humans have to act on — someone has to design how that works.
And right now, almost nobody is.
That’s not a small gap. That IS the gap. And it’s being filled by accident, by people whose job is to build the AI rather than to think about how humans live alongside it. The consequences of getting it wrong aren’t abstract.
──────────────────────────────────────
The Question Nobody Is Actually Asking
The hype camp spends its energy asking: how do we make AI smarter?
The skeptic camp spends its energy asking: is AI actually as smart as they claim?
Both camps are so busy debating the intelligence of the machine that they’ve almost entirely skipped the question that matters most in practice:
Who is designing the layer where humans and AI actually have to work together?
Not the model. Not the interface. The collaboration layer. The trust layer. The moment where an AI agent hands a recommendation to a human who has to decide whether to act on it — and that human needs to know: Can I trust this? Why did it say this? What do I do when it’s wrong?
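To make that handoff concrete, here’s a minimal sketch of the kind of gate I mean. Everything in it is hypothetical (the names, the threshold, the confidence score); the point is the shape: the agent’s recommendation carries its rationale with it, and anything the system isn’t sure about gets routed to a human instead of silently executed.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the agent wants to do
    rationale: str     # why it says so -- the "why did it say this?"
    confidence: float  # the agent's own (unreliable!) self-estimate

def handle(rec: Recommendation, threshold: float = 0.9) -> str:
    """Route a recommendation: auto-apply only above the threshold, else hand off."""
    if rec.confidence >= threshold:
        return f"AUTO: {rec.action}"
    # The handoff is the design problem: the human needs the action,
    # the reasoning behind it, and a real way to say no.
    return (f"REVIEW NEEDED: {rec.action}\n"
            f"  agent's reasoning: {rec.rationale}\n"
            f"  agent's confidence: {rec.confidence:.0%}")

print(handle(Recommendation("flag transaction 4417",
                            "pattern matches known fraud cluster", 0.62)))
```

As code, it’s trivial. The hard part is everything around it: where the threshold comes from, whether that confidence number means anything, and what the reviewing human actually sees when the handoff arrives. That’s the design work almost nobody owns.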
Silva is right that the technology is being oversold. But the answer isn’t to dismiss what’s happening. The answer is to get serious about how we design human-AI collaboration, so that the fast, confident, sometimes-wrong intern doesn’t make consequential mistakes nobody catches — because nobody thought to design the catching system.
That’s not an engineering problem.
It’s not a traditional design problem either.
It’s something new. Something in between. Something most organizations are quietly desperate for and don’t yet have language to hire for.
I know this because I’ve spent the past year building toward it. And I’m now ready to say clearly where that’s been leading.
──────────────────────────────────────
What I’m Actually Doing About It
I’ve been processing this publicly for months — the ethics questions, the neurodivergence questions, the “Is UX Dead?” question I spent years dismissing and am now answering differently. All of it has been pointing toward a pivot I’m making right now, in real time.
Not away from the humans. Not into the hype. Into the gap.
I’m calling it the collaboration layer. And I’m writing about it in a four-part series that is coming here: Part 1 — “I Spent Years Defending UX. Tonight I Changed My Mind.”
If you’re skeptical about AI, I think you’ll find Part 1 surprisingly familiar. If you’re a designer feeling the ground shift, Part 2 is for you. If you’re an enterprise leader trying to figure out what responsible AI deployment actually looks like in practice — stay tuned. That’s what Parts 3 and 4 are for.
The conversation we need to be having isn’t about whether AI is magic or math.
It’s about what we build in the space between the machine and the humans who have to trust it.
That space needs designing. And right now, it largely isn’t being designed.
──────────────────────────────────────
In the spirit of transparency I always advocate for: I worked with Claude to structure and refine these thoughts. The year of daily experience, the conclusions, and the direction are entirely mine. Which is, incidentally, exactly the kind of human-AI collaboration I’m writing about.
