You shipped three features this week instead of one. The code works. The tests pass. The PR got merged.
And something feels off.
Not because the code is bad. Because you didn’t suffer enough to produce it. The voice in your head, the one that sounds like a mass layoff survivor from 2019, keeps whispering: Did you actually earn this?
You’ve heard the discourse. AI is making us lazy thinkers. We’re losing the craft. The next generation won’t understand fundamentals. We’re building on sand while the tide rises.
Here’s what I find interesting: the moral high ground appeared fast. Almost suspiciously fast.
Before ChatGPT went mainstream, I don’t remember this level of hand-wringing about intellectual rigor. We copy-pasted from Stack Overflow — 40 million copies in two weeks, one in four visitors grabbing code within five minutes of landing on a page. We Googled error messages at 2am and applied fixes we couldn’t fully explain. We accumulated technical debt faster than we could name it and called the whole mess “pragmatism” and “shipping fast.”
But now, suddenly, we’re worried about thinking deeply.
The purity police showed up exactly when their competitive advantage, “I learned this the hard way,” started evaporating. That timing isn’t coincidence. That’s status anxiety wearing the mask of craft preservation.
I’m not here to tell you AI is harmless. The concerns about atrophying skills aren’t baseless. A Microsoft-Carnegie Mellon study found that heavier AI use correlates with less critical thinking engagement. That’s real.
But here’s what’s also real: 84% of developers are using AI tools, and 51% use them daily. Yet positive sentiment has dropped, from over 70% in 2023 to just 60% this year. Nearly half actively distrust the output. But 68% expect AI proficiency to become a job requirement anyway.
Read that again. They’re using what they don’t trust, because the alternative is falling behind.
This is the transitional generation’s signature move. And if you’re reading this, you’re probably living it.
The guilt you feel isn’t evidence of moral failure. It’s the friction of being caught between two worlds: one where suffering was proof of competence, and one where it’s just inefficiency.
Every generation that lives through a fundamental shift pays this tax. The weavers paid it during the industrial revolution. The typesetters paid it when desktop publishing arrived. You’re paying it now.
The question is whether you’ll keep paying interest on guilt that the next generation won’t even understand.

We Were Never That Rigorous
Let’s be honest about what software development looked like before the AI discourse took over.
Stack Overflow published an analysis in 2021. They tracked every copy command on their site for two weeks. The results: 40 million copies across 7.3 million posts. One in four visitors copied something within five minutes of landing on a question page.
That’s not research. That’s Ctrl+C, Ctrl+V, ship it.
And this was celebrated. “Good developers know how to Google” became a meme because it was true. The skill wasn’t understanding; it was pattern matching fast enough to hit your sprint deadline.
I’ve done it. You’ve done it. We’ve all pasted code we only partially understood into production systems and called it pragmatism. We’ve shipped features with test coverage we’d be embarrassed to discuss. We’ve merged PRs at 11pm because “it works on my machine” and tomorrow’s standup wasn’t going to wait.
The craft was always messier than the mythology.
Technical debt didn’t appear with ChatGPT. It’s been accumulating for decades. The 2022 CISQ report estimated that poor software quality cost the US economy $2.41 trillion, and that was before generative AI wrote a single line of production code.
So when I hear senior engineers lamenting that AI is degrading our standards, I have to ask: which standards? The ones where we understood every line we shipped? Those never existed at scale.
The Real Objection
For twenty years, there was a specific path to seniority: suffer through the fundamentals, accumulate hard-won intuition, become the person juniors ask when Stack Overflow fails them. That suffering was a moat. It protected your value.
AI didn’t just change the tools. It drained the moat.
The person who spent years mastering regex now watches a junior ask their way to the same solution in thirty seconds. The person who memorized API documentation now competes with someone who just asks Claude. The accumulated pain that used to signal expertise is becoming optional.
That’s not a threat to the craft. It’s a threat to a particular story about the craft, the one where suffering is the price of admission.
The objection dressed as “we’re losing depth” is often actually “my depth is losing market value.” And those aren’t the same thing.
But What About Real Skill Loss?
I’m not dismissing the concern entirely.
There’s a developer on Medium who described becoming “a human clipboard,” copying errors to AI, pasting solutions back, learning nothing in between. Addy Osmani, a well-known engineering leader, wrote about the “creeping decay” where developers stop reading documentation because LLMs can explain it instantly.
These are real patterns. They’re worth watching.
But here’s what the purity crowd gets wrong: they treat this as a binary. Either we reject AI and preserve depth, or we embrace it and become helpless.
That framing ignores a third option: we’re in a messy transition period, and the norms haven’t settled yet. The developers who figure out how to use AI without atrophying their judgment will have a significant advantage. The ones who reject it entirely will fall behind. The ones who outsource all thinking will hit walls.
The answer isn’t purity. It’s discernment. And discernment doesn’t require moral panic.
I’ve written elsewhere about what I call the “Army of Juniors” effect — AI that produces code which works but doesn’t think. The architectural blindness, the security vulnerabilities appearing in 40–50% of AI-generated code, the junior developer pipeline quietly drying up. Those concerns are real. They require operational responses: better review processes, protected training roles, metrics that track more than velocity. But they’re engineering problems requiring engineering solutions, not moral failings requiring absolution. You can take the quality risks seriously without taking on the guilt.
The Numbers Don’t Lie (But They Do Confuse)
The 2025 developer surveys tell a story that doesn’t make sense until it does.
Stack Overflow surveyed over 49,000 developers this year. JetBrains surveyed another 24,500. GitHub tracked 180 million developers on their platform. The numbers are in. They contradict each other in exactly the right way.
Usage keeps climbing: 84% of developers now use AI tools, 51% daily. JetBrains puts regular usage even higher, at 85%. But trust is falling. Positive sentiment dropped from over 70% to 60% in two years. Nearly half, 46%, actively distrust AI output accuracy, a significant jump from 31% last year. Only 3% report “highly trusting” what AI produces. And yet 68% expect AI proficiency to become a job requirement. Meanwhile, 77% say vibe coding isn’t part of their professional work, even as they use AI tools daily.
They’re using what they don’t trust. That’s not cognitive dissonance. That’s what a forced transition feels like from the inside.
The Perception Gap
Here’s where it gets stranger.
METR, an AI safety research organization, ran a controlled study with experienced developers. The result: developers using AI were actually 19% slower on real-world tasks. But here’s the twist: those same developers perceived themselves as 20% faster.
A 39-point gap between perception and reality. You feel more productive while being less productive.
This might explain the guilt. Some part of your brain suspects the math doesn’t add up. You’re shipping more features but can’t articulate why you feel less capable than before. The perception gap is real, and feeling productive isn’t the same as being productive.
But before you use this to justify abandoning AI tools entirely, consider the context. These were experienced developers working on unfamiliar codebases with current-generation tools. The gap may narrow as tools improve and as developers calibrate their use.
Why 77% Still Reject Vibe Coding
The rejection of vibe coding, that approach of prompting AI repeatedly until something works without understanding why, reflects calibration, not failure. Developers are learning where AI helps and where it hurts.
The frustrations are specific. 66% cite “almost right but not quite” solutions as their top pain point. 45% say debugging AI code takes more time than writing it themselves. These aren’t abstract complaints. They’re hard-won lessons about where the tools fall short.
Dropping trust alongside rising adoption isn’t a contradiction. It’s a realistic mental model being built in real time. Continued adoption despite distrust is pragmatism. 54% of business leaders believe companies won’t survive 2030 without AI at scale. You’re adapting to pressure, not to hype.
The Horse in Manhattan
Here’s the frame that changed how I think about all of this.
Most technological changes are improvements within a category. A faster horse. A sharper blade. A brighter lightbulb.
Some changes are different. They don’t improve the category. They end it.
When cars replaced horses, the conversation wasn’t really about speed. It was about what “travel” meant. The horse people weren’t wrong that something was being lost. Horsemanship is a skill. The bond between rider and animal is real. The knowledge accumulated over centuries did have value.
But the conversation was already over. Not because cars were better horses. Because cars weren’t horses at all.
Imagine someone riding a horse through Manhattan today. Not Central Park. Actual streets. Midtown traffic.
You don’t think: “Wow, impressive horsemanship.”
You think: “What is happening? Is this a movie shoot? Is this legal?”
The skill hasn’t degraded. It’s been recategorized. Horsemanship went from “practical necessity” to “specialized hobby” to “performance art” in about fifty years. The knowledge still exists. The practitioners still exist. But the context that made it default knowledge is gone.
Now imagine a developer in 2040 who writes all their code by hand. No AI assistance. No generation. Just them and a text editor, the way we learned.
What’s the reaction in that world? Probably the same as the horse in Manhattan. Not “impressive craftsmanship.” More like “why would you do that?”
That’s not guaranteed. But it’s plausible. And if it happens, it won’t be because hand-coding became wrong. It will be because the context shifted underneath it.
The Generation That Won’t Understand
The 2025 GitHub Octoverse report found that 80% of new developers use Copilot within their first week on the platform. Not their first month. Their first week.
These developers aren’t learning to code and then adding AI. They’re learning to code with AI from day one. The distinction we agonize over, between “real” coding and AI-assisted coding, won’t exist in their mental model.
They won’t ask “can you code without AI?” any more than we ask “can you calculate without a calculator?” The question will seem quaint. Possibly pointless.
And when their AI stops working, they won’t panic the way we fear. They’ll do what every generation does with their tools: they’ll fix it. Or more likely, Jarvis will fix Jarvis. The debugging itself will be AI-assisted.
The skills that matter will be different. Not absent. Different.
Why the Rejection Is Transitional
That 77% vibe coding rejection makes sense for today. The tools aren’t reliable enough yet for blind trust. The “almost right but not quite” problem is real and costly.
But “risky in 2025” isn’t the same as “wrong forever.” The horseless carriage was unreliable too. Early adopters broke down on country roads while horse owners passed them by. That didn’t mean horses were the future.
The rejection is calibration, not verdict. The developers saying “not yet” aren’t saying “never.” They’re saying “the tools need to earn more trust first.”
That trust will either come or it won’t. But the direction of travel is clear.
The Choice That Isn’t a Choice
Let’s talk about what’s actually driving this.
The pressure isn’t coming from AI evangelists on Twitter. It’s coming from spreadsheets. From competitive analysis. From the simple math of “if we don’t use AI, our competitors will.”
BCG’s research on AI adoption found that companies leading in AI deployment achieve 1.5x revenue growth and 1.6x shareholder returns compared to laggards. McKinsey’s analysis shows AI high performers are 3x more likely to have senior leadership actively championing AI adoption.
This isn’t ideology. It’s competitive pressure with receipts.
And here’s what makes the transitional generation’s position uniquely painful:
The developers who started after 2023 don’t feel guilty. They’ve never known anything else. For them, AI assistance is just how coding works. No before to compare against. No sense of loss.
The developers who opted out early made their peace. They either found niches where AI doesn’t help, or they accepted the tradeoffs, or they left the field. They’re not agonizing because they already decided.
But you? The ones in the middle? You remember the before. You learned the hard way. And now you’re watching the hard way become optional, and you’re not sure whether to grieve or adapt, so you’re doing both simultaneously, and calling it guilt.
The guilt is real. But it’s not signal. It’s friction.
You can stop pretending there’s a choice about whether to use AI. There isn’t, not for most of us, not if we want to stay competitive. The only choice is how, and even that choice narrows every year.
The question isn’t whether you’ll use these tools. It’s whether you’ll use them well, or whether you’ll use them while feeling bad about it.
One of those paths is more productive than the other.
The Question That Keeps Me Honest
I’ve been making the case that AI resistance is mostly status anxiety and misplaced guilt. But there’s an argument against uncritical adoption that actually gives me pause.
If everyone stops writing code from scratch, where does the next generation of training data come from?
This isn’t hypothetical. A Nature paper examined what happens when AI models train on AI-generated content. The result: model collapse. Like photocopying a photocopy, each generation loses fidelity. The outputs converge toward mediocrity, then nonsense.
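If the photocopy analogy feels abstract, here is the intuition in about twenty lines of toy Python. This is a hypothetical sketch, not the paper’s actual experiment: a made-up train_on helper fits a simple distribution to its own output each “generation,” and, like a model favoring its most likely outputs, it under-represents the tails before refitting.

```python
# Toy sketch of the "photocopy of a photocopy" intuition. Hypothetical:
# not the Nature paper's setup, just the tail-loss mechanism in miniature.
import random
import statistics

random.seed(0)

def train_on(data, n=2000):
    """Fit a mean/stdev to `data`, drop the extreme tails (the part a model
    under-represents), then emit n fresh samples from the narrowed fit."""
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    kept = [x for x in data if abs(x - mu) < 2 * sigma]      # tails quietly lost
    mu_k, sigma_k = statistics.fmean(kept), statistics.stdev(kept)
    return [random.gauss(mu_k, sigma_k) for _ in range(n)]

human_corpus = [random.gauss(0, 10) for _ in range(2000)]    # varied "human" data

data = human_corpus
for gen in range(1, 9):
    data = train_on(data)
    print(f"generation {gen}: spread = {statistics.stdev(data):5.2f}")

# The spread shrinks every generation (roughly x0.88 per step for a +/-2-sigma
# cut): the toy version of outputs converging toward mediocrity.
```

Run it and the spread drains away generation by generation, even though every individual fit looks perfectly reasonable. That’s what makes the decay easy to miss.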
The training data that made GPT-4 and Claude possible came from decades of humans writing code, asking questions, explaining solutions. That corpus is the foundation. And it’s not being replenished at the same rate.
Epoch AI projects we may run out of new high-quality training text between 2026 and 2032. The aquifer we’re drawing from took the entire history of the internet to fill. And we might be depleting it faster than it refills.
If the purity crowd wanted a genuine argument against uncritical AI adoption, this is it. Not “you’re cheating” but “you might be depleting the aquifer.”
The Futures I Can See
I don’t know how this resolves. But I can sketch the plausible paths.
The Priesthood Model. A small specialist class continues producing foundational knowledge, the way a tiny percentage of humans actually understand semiconductor physics, but billions use smartphones. Software development becomes like particle physics: most people use the outputs without understanding the inputs, and that’s fine because a dedicated few maintain the core.
The Fossil Fuel Model. The existing corpus of human-generated code is so massive that future models can train on historical artifacts indefinitely. We’ve already produced “enough.” Pre-2023 code becomes a strategic asset, mined, refined, recycled. New human-written code becomes unnecessary for training purposes.
The Synthetic Flywheel. AI-generated training data doesn’t degrade the models because architectural innovations compensate. Researchers have already shown that mixing synthetic data with human data can avoid collapse. Maybe the quality holds through techniques we haven’t invented yet.
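The flywheel is the one scenario you can at least sanity-check on the back of an envelope. A hypothetical toy recurrence, reusing the made-up ~0.88 per-generation shrink factor from the earlier sketch: feed each generation a blend that is half real human data and the collapse stalls.

```python
# Hypothetical back-of-envelope recurrence, not a real training pipeline.
# Assume each synthetic-only generation keeps ~88% of the previous spread
# (the made-up tail-loss factor from the earlier sketch), and compare that
# with a loop that blends in 50% human data every round.
human_spread = 10.0
synthetic_only = human_spread
blended = human_spread

for gen in range(1, 13):
    synthetic_only *= 0.88                                         # nothing replenishes the tails
    blended = 0.88 * ((blended**2 + human_spread**2) / 2) ** 0.5   # half human data each round
    print(f"gen {gen:2d}: synthetic-only {synthetic_only:4.1f}   blended {blended:4.1f}")

# Synthetic-only keeps shrinking toward zero; the blended loop levels off near 8.
# Whether real pipelines behave this tamely is exactly the unresolved question.
```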
Each sounds plausible. Which wins depends on variables we can’t predict from here.
My honest read: we’ll end up with a much smaller group of people who deeply understand foundations. Not because everyone needs to, but because someone needs to. And that someone will likely be well-compensated, precisely because of that scarcity.
What This Means for You
The bootstrapping paradox doesn’t invalidate the rest of my argument. It complicates it.
You can accept that the revolution is happening and that your guilt is misplaced, while also acknowledging that genuine unresolved questions remain. These aren’t contradictory positions. They’re just honest ones.
The transitional generation isn’t just navigating a shift in tools. You’re navigating a shift in what knowledge means: who needs to hold it, how it gets transmitted, whether understanding remains valuable when execution becomes automated.
That’s a harder question than “should I use Copilot?” And it’s one I won’t pretend to have fully answered.
What I will say: the answer probably isn’t “everyone maintains artisanal coding skills just in case.” That’s not how previous technological transitions resolved. The system will find a new equilibrium we can’t fully predict from here.
Coding might become like horsemanship. Not extinct. Repositioned. A skill that exists in the world, practiced by specialists and enthusiasts, while the masses move by other means.
What You Do Now
Stop moralizing a revolution.
That’s the short version. The revolution isn’t asking for your permission, your approval, or your guilt. It’s happening in the 84% adoption rate, in the 36 million developers who joined GitHub last year, in the 80% of new developers using Copilot within their first week. It’s happening in your competitor’s sprint velocity and your own shipped features.
The discomfort you feel is real. But it’s not moral signal. It’s historical friction, the sensation of living through a fundamental change while your mental models are still calibrated for the old world.
Every transitional generation pays this tax.
The weavers paid it when mechanical looms arrived. Their identity was tied to the movement of their hands, the rhythm of the shuttle, the knowledge passed from master to apprentice. The machines didn’t just threaten their jobs. They threatened their understanding of what “weaving” meant.
The typesetters paid it when desktop publishing arrived. Decades of expertise in kerning, leading, paste-up, rendered optional by a twenty-three-year-old with PageMaker. The craft didn’t die. It transformed into something the old practitioners barely recognized.
The taxi drivers are paying it now. The radiologists are watching it approach. The translators have already felt it arrive.
You’re not unique in this discomfort. You’re just the current generation standing in the current gap.
What You’re Not Losing
Here’s what I’ve come to believe.
The craft was never really “writing code.” That was just the medium. The craft was solving problems, building systems that work, translating human needs into machine behavior. That craft survives. The medium changes.
The photographers who complained that digital ruined the purity of film were focused on the wrong layer. The craft was never silver halide crystals. It was seeing light, composing frames, capturing moments. Digital didn’t kill that. It just changed what you held in your hands while doing it.
The developers who insist that hand-written code is the “real” craft are making the same mistake. The craft isn’t syntax. It’s systems thinking. It’s understanding what to build and why. It’s knowing when the AI is wrong, which requires understanding what “right” looks like.
That knowledge doesn’t become less valuable in an AI-augmented world. It becomes more valuable. Because everyone will have access to generation. Not everyone will have the judgment to evaluate it.
The Question Worth Asking
The question isn’t “should I use AI?” That question is already answered. You are using it, or you will be, or you’ll be competing against people who are.
The question isn’t “is this cheating?” That’s the old world’s frame, and it doesn’t apply to fundamental shifts. You can’t cheat at a game whose rules are being rewritten.
The question isn’t even “will I become obsolete?” Probably not: if you’re reading this, you’re already adapting. The people who become obsolete are the ones who refuse to engage with the question at all.
The question worth asking is simpler and harder:
What becomes valuable when AI handles the how?
When generation is cheap, what remains expensive? When execution is automated, what stays human? When the machine can produce the code, what do you produce that the machine can’t?
I don’t have a complete answer. I have guesses. Judgment. Taste. Asking the right question in the first place. The wisdom to know when the elegant solution is wrong for the context. The relationships and trust that make collaboration possible. The understanding of human needs that no training data fully captures.
But I might be wrong. The answer might be something we can’t see yet, some new form of value that emerges only after the transition completes.
The Only Thing I’m Sure Of
The guilt doesn’t help.
It doesn’t make you more rigorous. It doesn’t preserve the craft. It doesn’t slow the revolution or protect your skills or honor the developers who came before.
It just burns energy you could spend on adaptation.
So here’s my invitation: let it go. Not the discernment. Keep that. Not the standards. Keep those too. Let go of the guilt specifically. The feeling that you’re doing something wrong by doing something new.
You’re not cheating. You’re not lazy. You’re not betraying the craft. You’re just early.
And early is uncomfortable. Early means the norms haven’t settled. Early means you’re making it up as you go, pattern-matching against old models that don’t quite fit, wondering if you’re doing it right when “right” hasn’t been defined yet.
But early is also where the advantage lives. The developers who figure out this transition, who build the judgment, who find the balance, who learn to wield the tools without being wielded by them, will define what “good” looks like for the next generation.
The ones still arguing about whether to participate will be arguing about a question that’s already been answered.
The revolution doesn’t care about your guilt.
But it might care about what you build next.