
The Developer Productivity Paradox

AI coding tools are working. That's the problem.


Here’s what nobody’s telling you about AI coding assistants: they work. And that’s exactly what should worry you.

Two studies published this month punch a hole in the “AI makes developers 10x faster” story. The data points
somewhere darker: AI coding tools deliver speed while eroding the skills developers need to use that speed well.

The Numbers Don’t Lie (But They Do Surprise)

Anthropic ran a randomized controlled trial, published January 29, 2026. They put 52 professional developers through
a new programming library. Half used AI assistants. Half coded by hand. The results weren’t close.

Developers using AI scored 17 percentage points lower on comprehension tests: manual coders averaged 67%, AI users 50%. The gap was widest in debugging, the work of figuring out when code breaks and why.

The kicker: AI didn’t make them faster. Not in any statistically meaningful way. Just less skilled.

Participants using AI said they felt “lazy” and admitted to “gaps in understanding.” They moved faster through the
motions. They learned less.

The METR Bombshell

A second study, from METR (Model Evaluation & Threat Research), landed an even stranger result.

Between February and June 2025, researchers gave 16 experienced open-source developers real tasks from their own
repositories. Projects they’d worked on for five years on average. Developers using AI took 19% longer to finish.

Read that again. Not juniors struggling with new code. Experts. Working in their own backyards. AI made them slower.

The strangest part: after the study, developers guessed they’d been 20% faster with AI. They were off by nearly 40
points. The tools felt faster while doing the opposite.

The Scale of the Shift

This matters because AI-written code isn’t a novelty anymore. Research from the Complexity Science Hub shows
AI-generated code grew sixfold in two years—from 5% in 2022 to nearly 30% by late 2024.

U.S. companies spend over $600 billion a year on programming labor. A 4% productivity bump (the study’s overall
estimate) sounds decent. Then you notice: the gains show up almost entirely among senior developers.

Less-experienced programmers use AI more often (37% adoption vs. lower rates for seniors). But productivity gains?
Almost none. Juniors use the tools more and get less from them.

The Junior Developer Problem

For early-career developers, the picture gets rough.

A Harvard study of 62 million workers found junior developer hiring drops 9-10% within six quarters after companies
adopt AI coding tools. Juniors see the biggest raw productivity boost from AI—and the biggest hit to skill-building.
They accept more suggestions, ask fewer questions, build less foundation.

The result, researchers say: new developers “unable to explain how or why their code works.”

Tim Kellogg, a developer who builds autonomous agents and talks about AI constantly, didn’t sugarcoat it: “Yes,
massively so. Today it’s writing code, then it’ll be architecture, then product management. Those who can’t operate
at a higher level won’t keep their jobs.”

The Experience Dividend

Not everyone’s drowning. Roland Dreier, a longtime Linux kernel contributor, described a “step-change” in the past
six months, especially after Anthropic released Claude Opus 4.5. He used to rely on AI for autocomplete. Now he
tells an agent “this test is failing, fix it” and it works.

He estimated 10x speed gains for complex tasks—building a Rust backend with Terraform deployment and a Svelte
frontend. But he worries about newcomers: “We’re going to need changes in education and training to give juniors the
experience and judgment they need.”

The pattern holds across studies and interviews: AI coding tools multiply whatever skill you already have. Twenty
years of pattern recognition? You spot bad AI output instantly. Still building that intuition? You accept the
hallucination and move on.

The Uncomfortable Question

Darren Mart, a senior engineer at Microsoft since 2006, put the tension plainly. He recently used Claude to build a
Next.js app with Azure Functions. The AI “successfully built roughly 95% of it to my spec.”

But he stays cautious: “I’m only comfortable using them for tasks I already fully understand. Otherwise there’s no
way to know if I’m heading down a bad path and setting myself up for a mountain of technical debt.”

This is the paradox. AI works best for people who need it least. Experts use it to speed up what they already know.
Novices use it to skip what they don’t. The skipping costs them.

What Changes

Organizations won’t stop using AI coding tools. The productivity gains for experienced devs are real. The pressure to
ship faster isn’t going anywhere.

But the evidence says something has to shift. Managers should deploy AI deliberately, making sure engineers keep
learning as they work. Some providers now offer learning modes—Anthropic’s Claude Code Learning, OpenAI’s ChatGPT
Study Mode—built to explain, not just produce.

The skill of 2026 isn’t writing a QuickSort. It’s looking at an AI-generated QuickSort and instantly spotting that its pivot choice degrades to quadratic time on already-sorted input. That takes more expertise, not less.
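To make the review skill concrete, here is a minimal Python sketch of the kind of flaw a reviewer should catch. The function names and the specific pivot strategies are illustrative, not drawn from any cited study: a first-element pivot degrades to O(n²) on sorted input, while a randomized pivot restores expected O(n log n).

```python
import random

def quicksort_naive(items):
    """The kind of plausible-looking AI output a reviewer must inspect."""
    if len(items) <= 1:
        return items
    pivot = items[0]  # red flag: worst-case O(n^2) on sorted or reversed input
    rest = items[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort_naive(left) + [pivot] + quicksort_naive(right)

def quicksort_random(items):
    """The fix an experienced reviewer would ask for: a randomized pivot."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)  # expected O(n log n) on any input order
    left = [x for x in items if x < pivot]
    mid = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quicksort_random(left) + mid + quicksort_random(right)
```

Both versions sort correctly on small tests, which is exactly the trap: the flaw only shows up as pathological runtime (and, in Python, recursion depth) on large ordered inputs, so accepting the suggestion without reading it hides the cost until production.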

Eric Cheng, CEO at Jobright, put it this way: the developers who thrive “will treat AI like a junior engineer on the
team—helpful, fast, but needing oversight. Knowing how to prompt, review, and improve AI output will be as essential
as writing clean code.”

Here’s the thing: the tools built to make coding easier are making the job of being a good developer harder. Speed is
a byproduct. Judgment is still the product.


Sources: Ars Technica, ZDNET, CIO, METR, Anthropic Study
