Today's Briefing for Wednesday, March 11, 2026

Who Checks the Checker? The correction loop is the most valuable thing in AI right now. Nobody is capturing it.

THE NUMBER: 30x — the productivity multiplier between Boris Cherny, creator of Claude Code, shipping 20-30 PRs per day with five parallel AI instances, and a traditional engineer shipping 3 PRs per week. That’s not a rounding error. That’s a different species of worker.

Three things converged this week that tell a single story — and it’s the most important story in AI right now. Karpathy open-sourced autoresearch, a 630-line tool that lets AI agents run 100 ML experiments overnight while you sleep. Shopify’s CEO adapted it and got a 19% improvement on first pass. Anthropic shipped Code Review — a multi-agent system where AI checks AI-generated code at a <1% false positive rate, and code output per developer jumped 200%. Meanwhile, Tunguz published the math on AI-native org charts: a 150-person company has 11,175 communication channels. A 30-person AI-augmented team producing equivalent output has 435. And in Paris, Yann LeCun raised $1.03 billion — the largest seed round in European history — to build world models that he says will make the entire LLM paradigm obsolete.

The thread connecting all of it: AI is now improving AI, the organizations built around AI look nothing like the ones they’re replacing, and the models powering those organizations may be about to splinter into specialized expert systems. The feedback loop is tightening. The org chart is mutating. And the question nobody’s answering is: when the machines get better than the experts who train them, who’s left to check the work?

The Lathe That Builds the Lathe

When the first machine tool could cut the parts to build another machine tool, the Industrial Revolution became inevitable. We crossed that line in AI this week — and most people missed it.

Andrej Karpathy released autoresearch: an AI agent that autonomously runs ML experiments, modifies code, trains models, evaluates results, and repeats. One hundred experiments overnight on a single GPU. No human in the loop. Tobi Lutke at Shopify adapted it and reported a 19% improvement in validation scores — while he slept.

This isn’t vibe coding. This is AI doing science.

Layer on what else shipped: Anthropic’s Claude Code Review assigns multiple AI agents in parallel to every pull request, ranking bugs by severity. Internal numbers show code with substantive review comments rose from 16% to 54%. And separately, Claude ran autonomous research on sparse autoencoders — AI improving AI’s ability to understand itself.

Here’s where Nate Jones drops the insight everyone else missed. His argument: everyone talks about prompting. Nobody talks about rejection. But rejection is where the knowledge gets created. Every time a domain expert looks at AI output, identifies what’s wrong, and explains why, they produce a constraint that didn’t exist before. The output is disposable. The rejection is the asset. The constraint is what compounds.

He’s right — and the numbers make it concrete. AI now matches experienced professionals on 70-83% of well-specified knowledge work tasks. That means the 17-30% where AI gets it wrong is where organizations win or lose. Right now, the skill that catches the wrong 30% — the institutional taste built through thousands of expert corrections — evaporates after every conversation. Nobody is capturing it. Nobody is compounding it.

Except the systems are starting to. Bassim Eledath’s “8 Levels of Agentic Engineering” describes it as “compounding engineering” — a plan-delegate-assess-codify loop where each cycle makes the next one better. The codify step IS the rejection made permanent. And at Level 7, background agents run that loop while you sleep. Different model instances implement and review each other’s work — because, as Eledath puts it, you don’t grade your own exam.
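For concreteness, the plan-delegate-assess-codify cycle can be sketched in a few lines of Python. This is an illustrative assumption about the loop's shape, not Eledath's actual tooling; every name here is hypothetical:

```python
def compounding_cycle(task, constraints, implement, review, expert_reject):
    """One cycle of the plan-delegate-assess-codify loop.

    implement and review are meant to be *different* model instances
    (you don't grade your own exam); expert_reject is the human expert.
    """
    plan = {"task": task, "constraints": list(constraints)}  # plan with everything learned so far
    draft = implement(plan)                    # delegate: one instance produces the work
    issues = review(draft)                     # assess: a second instance critiques it
    rejections = expert_reject(draft, issues)  # the human rejects what is actually wrong
    constraints.extend(rejections)             # codify: rejections become permanent constraints
    return draft, constraints                  # the next cycle starts from a tighter spec
```

Each pass appends to the constraints list, so every subsequent plan starts stricter than the last. That accumulation is the compounding.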

The CO/AI angle: this is exactly why we named the publication what we did. Right now, the human expert’s rejection is what keeps the flywheel from spinning into slop. AI generates. AI reviews. The human expert rejects the 17-30% that’s wrong. That rejection gets encoded. The next loop is tighter. Co-working. Co-authoring. Co-optimizing. The human taste is the governor on the engine.

But here’s the seed corn problem Jones flags: entry-level tech hiring is down 67%. If we’re eliminating the pipeline that produces tomorrow’s expert rejectors, who teaches the models taste in five years? The very improvement loop depends on humans it’s simultaneously making redundant. We’ve seen this pattern before — in manufacturing, in journalism, in any industry that outsourced its apprenticeship model and then couldn’t figure out why institutional knowledge disappeared a generation later.

What business leaders need to know: Start treating rejection as a first-class output. Every expert correction your team makes to AI-generated work is training data you’re currently throwing away. Build systems that capture it. The companies compounding institutional taste will have moats the rest can’t replicate.

The Two-Pizza Team Eats Alone

Jeff Bezos’s “two-pizza rule” was never about pizza. It was about Metcalfe’s Law — the insight that communication overhead explodes with every additional node. Cap the team at what two pizzas feed, and you cap the coordination tax that kills speed.

Tomasz Tunguz just published the math on what happens when AI collapses those nodes. A traditional 150-person organization runs four layers deep with 11,175 potential communication channels. Meetings multiply. Alignment decays. An AI-enabled team producing equivalent output needs 30 people. Communication channels: 435. A 96% reduction.
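Those channel counts drop straight out of Metcalfe's pairwise formula, n(n-1)/2. A quick sketch reproducing the figures above:

```python
def channels(n: int) -> int:
    """Potential pairwise communication channels among n people."""
    return n * (n - 1) // 2

traditional = channels(150)  # 11,175 channels in a 150-person org
ai_native = channels(30)     # 435 channels for the 30-person equivalent
reduction = 1 - ai_native / traditional
print(traditional, ai_native, f"{reduction:.0%}")
```

Note the nonlinearity: cutting headcount 5x cuts coordination overhead roughly 25x, which is why the reduction lands at 96% rather than 80%.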

The numbers at the frontier are staggering. Anthropic generates roughly $5 million in revenue per employee. Cursor, $3.3 million. Midjourney, $2 million. Traditional SaaS considers $200-300K per employee strong. That's a 10-20x gap — and it's widening.

But the real question — the one nobody else is asking — isn't about making 150 people more productive. It's about what the org chart looks like when 30 of those 150 "employees" are digital agents.

Amazon just laid off approximately 16,000 corporate employees, primarily targeting middle-management roles made redundant by agentic workflows. Block built an internal skills marketplace with 100+ AI agent personas — pull requests, reviews, version history, the whole nine yards. Paperclip is building org charts for AI companies that include agents as first-class employees, complete with budgets and governance structures.

Jeff Dean predicts engineers will manage 50+ agents each. The question shifts from “how many people can one manager oversee?” to “how many agents can one human orchestrate?”

And here’s the thing about agents that changes the Metcalfe math entirely: they don’t have the communication overhead that humans do. No break room. No coffee runs. No Monday morning debriefs about the weekend. No arguments about compensation. No politics. Just pure information exchange on servers. So where’s the inflection point where adding more agents becomes counterproductive? If the optimal human team was a two-pizza group, what’s the analogy for agents? One server? Two GPUs? The coordination cost isn’t zero — Cursor found that agents without hierarchy became risk-averse and churned without progress — but it’s fundamentally different in kind, not just degree.

What this really means: the constraint in the AI-native org isn’t compute or intelligence. It’s human attention bandwidth. The future org chart might be one human surrounded by 50 agents, and that human’s job isn’t to do work — it’s to clear constraints. Serve up the decisions that require judgment. Let everything else iterate autonomously until it hits a wall. Then the human clears the wall and the system moves again.

Isn’t that essentially what the best CEOs already do? Find the binding constraint. Remove it. Move to the next one. The difference is that the “employees” generating those constraints now run at the speed of inference, not the speed of meetings.

The action item: Stop reorganizing your human org chart. Start designing the hybrid one. Map which roles in your company are constraint-clearers (keep human) and which are iteration-runners (candidate for agents). The companies that figure out the human-agent ratio first will have a structural speed advantage that compounds every quarter.

The Billion-Dollar Fork in the Road

While Silicon Valley keeps pouring capital into making LLMs bigger, Yann LeCun just raised $1.03 billion for AMI Labs to build something else entirely. It’s the largest seed round in European history, at a $3.5 billion valuation, and it’s a direct bet that the entire LLM paradigm is a dead end for human-level intelligence.

His thesis: large language models predict the next token. They don’t understand the world. World models — built on his JEPA (Joint Embedding Predictive Architecture) — learn by building internal representations of how reality works. Physics. Cause and effect. Spatial reasoning. The stuff LLMs hallucinate about because they’ve never experienced it.

Follow the Munger principle here: show me the incentives and I’ll show you the behavior. Look at who backed him. Not the usual AI fund-of-funds crowd. Nvidia. Toyota. Samsung. Bezos Expeditions. These are hardware companies and physical-world operators. Companies that need AI that understands atoms, not tokens. Toyota doesn’t need a chatbot that writes better emails. They need a model that understands what happens when a brake pad meets a wet road at 70 miles per hour.

His co-founder predicts every company will rebrand as a world model startup within six months. Bold claim. But consider: the fruit fly brain emulation from Eon Systems this week — 125,000 neurons, 50 million synaptic connections, running purely on its biological wiring with 91% behavioral accuracy — suggests there’s more to intelligence than next-token prediction.

Now connect this to the first two stories. If the future is expert agents running in autonomous teams, does it matter whether those agents are built on monolithic LLMs or specialized world models? Imagine a marketing agent built on world models of human persuasion — Cialdini’s principles encoded in agent form. Pair it with a pricing specialist trained on game theory and market dynamics. Add a creative agent that understands visual perception and emotional resonance. Not one giant model trying to be everything. A team of deep-domain experts, each understanding its corner of reality.

That’s the architectural question underneath all the funding headlines: are we building one brain or building a team?

Why this matters: Don’t go all-in on a single model architecture. The companies that build model-agnostic agent infrastructure — systems that can swap between LLMs, world models, and specialized narrow models depending on the task — will have optionality the rest won’t. If LeCun is even partially right, every company that bet exclusively on language models just got a $1 billion wake-up call. And if he’s wrong, you’ve still built a more resilient system.

What This Means For Business Leaders

One story played out in three acts this week. AI crossed the self-improvement threshold, the organizations built around it are shedding their human architecture, and the models powering everything may be about to specialize in ways that make today’s monolithic LLMs look like mainframes. Here’s what to do about it.

Start capturing your institutional taste before it evaporates. This isn't just about engineers and code. It's about every domain expert in your organization whose judgment makes the difference between good enough and great. Your marketing team's instinct for what resonates with your customer. Your manufacturing lead's feel for when a production line is drifting before the sensors catch it. Your brand voice and its evolution over twenty years of customer conversations. Your strategist who can smell a bad deal before the spreadsheet confirms it. Every one of those corrections, those "no, not like that" moments — that's your competitive advantage walking out the door every night. Build systems that capture it. Record it. Encode it. And here's the individual version: soon there will be businesses that let people record everything that makes them them into persistent systems — systems that live on well beyond their human lifespans. Your great-grandchildren might grow up knowing you as well as your own kids do. The same logic applies to companies. The institutional taste you don't capture is the institutional taste that dies with the next reorg.

Redesign for the hybrid org chart — because AI-native competitors already have. The math is unforgiving — 11,175 communication channels versus 435 for equivalent output. But don’t reorganize. Redesign from scratch. The companies being born today don’t have legacy org charts to optimize. They’re building agent-first from day one, and they’ll compete with you at 10x your speed and a fraction of your overhead. Map which roles are constraint-clearers (keep human) and which are iteration-runners (candidate for agents). Then build the dashboard that serves up decisions to your remaining humans. Don’t make them hunt for the bottleneck — surface the constraints automatically, let everything else iterate at machine speed, and let your people do what only people can do: exercise judgment under uncertainty.

Build for a multi-model world. A billion dollars of smart money just bet against the LLM monoculture. Whether LeCun is right about world models or not, the direction is clear: specialized expert agents, not one-size-fits-all chatbots. Your architecture should be ready for either future. The companies that build model-agnostic infrastructure — systems that swap between LLMs, world models, and narrow specialists depending on the task — will have optionality the rest won’t.

Three Questions We Think You Should Be Asking

  • Who checks the checker? In five years, when the improvement loop has compounded through millions of cycles, and the expert agents know more about your domain than any single human — who validates the output? If a team of specialized models trained on every correction ever made arrives at an answer the human expert disagrees with, how do you know which one is right? This isn’t theoretical. It’s the governance question every board should be discussing now, while there’s still time to design the answer.
  • Where does the next generation of experts come from? Entry-level tech hiring is down 67%. Amazon screen-recorded its engineers for months, used those recordings to correct the models' errors, and once the error correction slowed, made the people redundant. If we're eliminating the apprenticeship pipeline that produces tomorrow's expert rejectors — the people whose taste and judgment the entire improvement loop depends on — we're eating the seed corn. Every industry that outsourced its training pipeline eventually couldn't figure out why institutional knowledge disappeared a generation later. Who's building the apprenticeship model for the age of agents?
  • What’s your constraint-clearing infrastructure? If the future CEO’s job is essentially what Elon Musk already does — find the binding constraint, remove it, move to the next one — then someone should be building the dashboard for that role. Not a BI tool. Not a project management app. A real-time constraint surface that shows a human operator exactly where the autonomous systems are stuck, what judgment call is needed, and what the agents have already tried. The company that builds this — the air traffic control system for agent fleets — might be the most important enterprise software company of the next decade. Does it exist yet? If not, why aren’t you building it?

"The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency."

— Bill Gates


— Harry and Anthony
