AI Development Is a Different Skill
And most engineers don't have it yet.
Last week I watched a staff engineer spend three hours on something that should have taken twenty minutes. He had Claude Code open. He had the context loaded. Fifteen years in the codebase, knew every corner of it.
He was reading every line Claude generated. Cross-referencing with internal docs. Running manual verification after each change. Treating the AI like a suspect who needed supervision. When he finished, he’d produced what an engineer with two years of experience had shipped the same day, using the same tools, in forty-five minutes.
I used to think experience was the variable. Give the best engineers the best tools and they’d get the best results. That seemed so obviously true that I never questioned it.
Then I started watching what actually happens when experienced engineers use AI tools, and the pattern broke in a way I wasn’t expecting.
The inversion
Some of the best traditional engineers I know are terrible at AI development. Not mediocre. Terrible. They’re slower with AI than without it. They know this. They’ll tell you. They have theories about why.
Their theories are wrong. They think the tools aren’t good enough yet, or that AI can’t handle complex systems, or that the studies showing marginal gains prove the technology is immature. What’s actually happening is simpler and harder to accept: they can’t let go.
Their entire career was built on understanding every line. Reading the code is how they catch bugs. Tracing the logic is how they verify correctness. Deep implementation knowledge is what made them senior. That instinct, the one that served them for fifteen years, is now the thing making them slow.
The engineer I watched last week wasn’t using AI wrong in any technical sense. His prompts were fine. His context was loaded properly. He just couldn’t stop doing the old job while trying to do the new one. He added AI to his workflow without removing any steps. Of course that’s slower.
What the new skill looks like
The people who get massive output from AI dev work differently at every level. They write architecture documents before any code exists. They describe systems, not functions. They review at the integration level, not the syntax level. They run tests to verify correctness instead of reading code to verify correctness.
The creator of Claude Code ships over twenty pull requests a day. He runs multiple instances in parallel. He maintains a file that captures every mistake the AI has made on his codebase so it learns over time. He hasn’t manually written code in months.
That’s not a faster version of traditional engineering. It’s a different activity. He’s orchestrating and directing, not implementing. The skills that matter are architecture, system design, knowing what to build, knowing what’s wrong at the behavioral level. The skills that don’t matter anymore are the ones traditional engineering spent twenty years valorizing: syntax mastery, language fluency, the ability to trace through implementations in your head.
The grief is real
Something I didn’t expect started happening in early 2026. Senior engineers began writing what can only be described as grief essays. Not about losing their jobs. About losing their identity.
One well-known open source developer wrote “we are the last of our kind.” Another described structural change hitting identity. A twenty-year veteran wrote about going through the five stages of grief.
I don’t think most people outside engineering understand what’s happening here. These aren’t people worried about being automated. They’re people watching the skill they spent a career mastering become the part that doesn’t matter anymore. The thing that made them exceptional is now the thing the machine does better.
Over half of senior engineers now say AI writes better code than they do. That’s not junior engineers overestimating the tools. That’s the people with the deepest knowledge of what good code looks like admitting the tool exceeds them at the thing they’re best at.
The split I keep seeing is between engineers who identified with the craft of writing code and engineers who identified with the outcome of shipping systems. The craft-oriented ones are grieving. The results-oriented ones are having the best year of their careers. Same tools. Same job market. Completely different experience depending on which part of engineering they built their identity around.
Why this matters for companies
The DORA report landed on a finding that captures this perfectly: AI acts as a multiplier of existing engineering conditions. Good practices get amplified. Bad practices get worse faster.
That’s the whole story. If your engineering org has people who can think at the systems level, who can direct rather than implement, who can let go of the code and focus on what the code is supposed to do, AI will multiply their output dramatically. If your org is full of people who can’t make that shift, AI will make them slower and more frustrated.
You can’t fix this with training. You can’t fix it with better tools. The gap is about how someone relates to the work itself, and changing that is closer to identity work than skill acquisition.
The companies that figure out the difference between “good engineer” and “good at AI dev” will capture the gains. The ones that assume they’re the same thing will keep wondering why the tools aren’t delivering.