No One Set Off My Evil Detector
Three months ago Elon Musk said Anthropic hates Western Civilization. This morning he leased them 220,000 GPUs and said the senior team didn't trip his moral alarm. Number Two has a new face. The orbital data center is the moonbase. The casino is open.

THE NUMBER: 220,000 – the count of NVIDIA H100, H200, and GB200 GPUs that Elon Musk leased to Anthropic this morning, the company he called misanthropic in February and accused of hating Western Civilization a week later. Three months from “Anthropic hates Western Civilization” to “No one set off my evil detector” is the entire arc of the AI capital cycle compressed into a single CEO’s quote feed. SpaceX’s Colossus 1 supercomputer in Memphis, fully leased, full capacity. 300 megawatts of new compute on the table within thirty days, doubled Claude Code rate limits the same afternoon, and an exploratory partnership on multi-gigawatt orbital data centers that ties directly into Musk’s $7.5 trillion comp-package milestone. “I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed,” Musk wrote on X. “No one set off my evil detector.” Sam Altman was once Musk’s protege at OpenAI. Sam Altman is now Musk’s courtroom adversary. Number Two has a new face, and Number Two now sits on a moonbase. Mark Hanna would have ordered another martini. Dr. Evil would have told him the bill is on the company.
The Moonbase
Start with what Musk and Dario actually traded.
Anthropic is rate-limited, has been rate-limited for months, and would have stayed rate-limited through Q2. Dario Amodei said it on stage in San Francisco this afternoon, with the kind of plainness CEOs only use when the receipt is already public. “We tried to plan for a 10-fold increase. The level of growth has been so extreme that Anthropic hasn’t been able to meet compute demand.” Anthropic’s first-quarter revenue grew eighty-fold on an annualized basis. Plan was ten. Reality was eighty. The gap is the explanation for every customer complaint on Reddit, every developer thread on Hacker News, every peak-hour throttle, every quietly-tested removal of Claude Code from the $20 Pro plan that lasted six hours before the public revolt forced a reversal. The product was selling faster than the substrate could be rented. The most important sentence anyone said inside Anthropic for the last five months has been some version of “we don’t have the GPUs.” Today they got 220,000 of them in one phone call.
SpaceX, on the opposite side of the table, had a Colossus 1 that needed monetizing. Musk built two clusters in Memphis: Colossus 1 – 220,000 NVIDIA GPUs, 300 megawatts – and Colossus 2, the newer one. SpaceX moved its xAI training to Colossus 2 once it came online, and Colossus 1 became an asset that needed a customer big enough to take it whole. Anthropic was the only buyer in the market who could absorb 220,000 GPUs in a single transaction without the deal falling apart at procurement. The IPO book needs an AI customer story. SpaceX is targeting June 28 at $1.75-2 trillion, the largest offering in history. Bookrunners selling that to retail need a marquee name on the customer roster. Anthropic is that name. The deal closed because both sides needed it to close, and the only thing that ever stops a deal both sides need is the principle. The principle is what got compromised this week.
Then – and this is the part that turns the trade into the Austin Powers casting call – Anthropic and SpaceX announced an exploratory partnership on multi-gigawatt orbital data centers. Compute in space. The literal moonbase. That partnership is not a product announcement. It is a contract clause that ties Anthropic’s compute roadmap directly to Musk’s pay-package milestones – 100 terawatts of orbital compute capacity is one of the two thresholds that triggers Musk’s $7.5 trillion vest. Anthropic just became the demand-side anchor for the most expensive comp-package vest in human history. Dr. Evil and his new Mini-Me on the moonbase, looking down at an OpenAI courtroom in Oakland. That image writes itself.

Why The Principle Was Always Going To Be Compromised
Let’s do the receipts on Musk’s three-month pivot, because the speed of it is the story.
February 2026. Musk on X: “Anthropic hates Western Civilization. Frankly, I don’t think there is anything you can do to escape the inevitable irony of Anthropic ending up being misanthropic.” Posted alongside a screenshot from a Trump administration official accusing Anthropic of bias in its model constitution.
April 2026. xAI absorbed into SpaceX. Colossus 2 comes online. Colossus 1 enters the underutilized-asset column. Musk’s IPO timeline crystallizes – June 28, $1.75T target, $50-75 billion raise. Bookrunners begin asking the obvious question: who is the marquee AI customer story for the prospectus?
May 6, 2026. Anthropic Code with Claude developer conference in San Francisco. Joint announcement with SpaceX in the morning. Musk’s quote on X by the afternoon. “I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed. No one set off my evil detector.” He added that SpaceX would “provide computing capacity to other AI companies that make similar efforts to favor humanity, like how SpaceX launches satellites for competitors with fair terms and pricing.”
Read those three timestamps next to each other. The substantive change between February’s “Anthropic hates Western Civilization” and May’s “no one set off my evil detector” is not a discovery about Anthropic’s values. It is the calendar approaching the IPO date. Musk did not change his mind. The cap table changed his mind. In Mark Hanna’s vocabulary, this is the trade where the principle becomes a footnote and the price becomes the story. The casino reabsorbs rivals when both sides need the receipts. The thing that should keep a public-market allocator awake at the SpaceX IPO is not whether Musk delivers Mars in five years. It is the ease with which the moral floor under his AI position moved fifteen degrees in twelve weeks because a customer wanted to write a check.
The legal frame is the part the press cycle is going to miss this week. Musk is in Oakland federal court suing Sam Altman and OpenAI for $134 billion on the theory that OpenAI’s mission to build AI for humanity was abandoned the moment commercial incentives became too large to refuse. That is the case. And the same week that case is being argued, Musk leased compute to a competitor of OpenAI on the public theory that the competitor β Anthropic β is committed to building AI for humanity. The trial argument and the SpaceX press release are using the same vocabulary, applied to two different counterparties, in service of opposite outcomes. If Anthropic is committed to humanity and OpenAI is not, then either the case is correct or the lease is wrong. Both cannot stand. The press will pretend otherwise for at least a quarter, possibly longer, possibly until the closing argument.
The Hanna line for this week – and the one to put on a sticky note on every CIO’s monitor – is not the fugazi line. It is the “don’t get attached to anything you can’t walk out on in thirty seconds flat” line. Musk just walked out on a moral position in twelve weeks because the cap table needed him to. The companies that thought they were in a long-term relationship with that position are about to discover they were renting it.
Why The Smart Grid Just Grew Legs
Now turn the camera the other way. While Musk was reversing on Anthropic’s values, Anthropic was running its first-ever developer conference and shipping the most aggressive harness offensive of the year.
A vocabulary check before we enter the announcement list. The smart grid is the metaphor for what Anthropic and the other frontier labs are actually selling. Three years ago they sold electricity – inference. Then they sold electricity plus the wiring – APIs and SDKs. Then electricity plus wiring plus the appliance – Claude Code, Cowork, Claude Desktop. Then electricity plus wiring plus the appliance plus the electrician – the forward-deployed engineer, the consulting JV, the Goldman-Blackstone-Hellman & Friedman vehicle that dropped on Monday and the ten financial-services agent templates that dropped on Tuesday. The labs are no longer in the electricity business. They are in the smart-grid business – generation plus distribution plus end-use plus the human who ties it all together inside your operating company. That is the move. It compounds margin. It compounds lock-in. It compounds switching cost. The trillion-dollar private valuation is priced as if the smart grid is the company.
The harness – Claude Code, Cowork, Claude Desktop, the agent runtime, the integration surface – is the sticky part of the smart grid. The model under the harness can be swapped tomorrow. The harness is the moat for the lab that owns the harness. Today’s Code with Claude announcements are entirely harness moves:
- Dreams – agents review their own past sessions between runs, extract patterns, and restructure stored memory. Self-improving lock-in. Each user’s deployment compounds because the agent learns the user. You can swap the model. You cannot swap the agent that has learned you for six months.
- Routines – Claude Code can be scheduled to run on a cadence, prompted by another instance of Claude. “The default isn’t ‘I’m going to prompt Claude Code,’” Boris Cherny said on stage. “The default is now ‘I will have Claude prompt Claude Code.’” Recursive self-prompting baked into the harness. The user is no longer in the loop on the prompt – the user is in the loop on the policy.
- Multi-agent orchestration – generally available today inside Claude Code. The harness now manages multiple specialist agents under a single supervisor. The architectural complexity of swapping the lab grew an order of magnitude this morning.
- Outcomes loop – rubric-driven self-improvement against developer-defined success criteria. Each deployed agent gets a measurable evaluation surface that improves with use.
- Webhooks for managed agents – agents can now trigger external systems. The agent doesn’t live inside the chat session. It lives in your operations stack.
- Microsoft 365 Excel, PowerPoint, and Word add-ins – Claude reads and writes the file formats your finance department lives in. Try ripping that out at next year’s procurement review.
- Eight new data partnerships – Dun & Bradstreet, Fiscal AI, Financial Modeling Prep, Guidepoint, IBISWorld, SS&C IntraLinks, Third Bridge, Verisk – plus a Moody’s MCP app surfacing proprietary credit ratings on 600 million companies. The agent walks into your finance department with the briefcase already filled.
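What "agents trigger external systems" looks like in practice is a webhook receiver on your side of the wall. A minimal sketch of one, assuming nothing about Anthropic's actual API: the event types, payload fields, and shared-secret signature scheme below are hypothetical illustrations of the standard pattern, not documented behavior.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; a real integration would load this from config.
WEBHOOK_SECRET = b"replace-me"

def verify_signature(body: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 signature over the raw request body."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def handle_agent_event(body: bytes, signature_hex: str) -> str:
    """Route a (hypothetical) managed-agent event to an internal system."""
    if not verify_signature(body, signature_hex):
        return "rejected"
    event = json.loads(body)
    # Hypothetical event types, for illustration only.
    if event.get("type") == "task.completed":
        return f"ticket closed: {event.get('task_id')}"
    if event.get("type") == "task.needs_review":
        return f"escalated to human: {event.get('task_id')}"
    return "ignored"
```

The point of the sketch is the design pressure it reveals: once the agent lives in your operations stack, every one of these endpoints is another strand of switching cost.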
Read those announcements as a list and they look like a normal product-conference cadence. Read them as a category and they look like Anthropic ran the equivalent of three years of Salesforce-style platform-buildout in a single afternoon. The smart grid grew legs. Each feature deepens the harness. Each one increases the cost of swapping the lab. Each one extends the IPO arithmetic. Each is also a more visible target for whichever platform decides to absorb the harness layer underneath it.
That is the part the cocktail-party take is missing.
Yeah, But – The Platforms Are Moving Down The Stack On The Same Wednesday
Mark Hanna’s pitch was always new idea, special situation, special idea. The Implicator team writes a Yeah, but after the lead because every press release deserves the second look. Today’s press releases earn it.
Anthropic shipped Dreams, Routines, multi-agent orchestration, the SpaceX deal, an 80x growth admission, an orbital data center partnership. The smart grid grew legs.
Yeah, but β every other major platform spent the same Wednesday building its own router for that grid:
Google leaked Agent Mode for the Gemini app – a dedicated tab inside Gemini for tasks and workflows that run beyond a single chat turn. Scheduled actions. Skills. Inbox triage, meeting prep, slide-deck generation, personalized news digests, ghostwriting, recurring-bill tracking. The leaked screenshots came out today. Google I/O on May 19-20 is the likely venue for the formal announcement. Google is not selling generation. Google is selling the router that picks generation per task.
Meta is training Hatch in virtual environments modeled on DoorDash and Yelp. The Information broke this on May 5; the Korea Economic Daily and Financial Times confirmed it today. The detail nobody else is highlighting yet: Hatch is being built today on Anthropic’s Opus 4.6 and Sonnet 4.6 – and Meta has publicly stated it intends to swap them for Meta’s in-house Muse Spark at launch. The lab is being used as a temporary training rail by the platform that intends to commoditize it the moment its own model is ready. Internal testing completes next month. The eviction notice is sitting on the desk before the rent has been paid.
ServiceNow and Accenture launched a forward-deployed engineering program at ServiceNow Knowledge 2026 in Las Vegas, ten days after Anthropic and Goldman announced the same trade. ServiceNow’s pitch: “The ServiceNow AI Platform integrates with any cloud, any model, and any data source to orchestrate how work flows across the enterprise.” Three hundred pre-built agent skills, “AI Control Tower” governance, single pane of glass. The Anthropic-Goldman-Blackstone JV had its harness moat at the operator-tier for ten days. That moat now has a same-name competitor with seven hundred and eighty-six thousand Accenture employees walking in alongside the engineers.
Microsoft Agent 365 went GA last week at $15 per user per month standalone, $99 in the new E7 bundle. The directory service for AI agents. Identity, policy, audit, governance – the layer that decides which agents are allowed to do what inside the enterprise. Whoever owns that layer owns who Anthropic gets to talk to inside the building. Active Directory, all over again.
OpenAI’s ChatGPT Apps directory is becoming the App Store inside ChatGPT. Spotify, Zillow, Canva, Coursera, Booking.com, Expedia, Adobe Photoshop, Gmail, Microsoft Teams, Stripe – and Replit – are listed as MCP-server-plus-component bundles that ChatGPT can render inline. Eight hundred million weekly users. Submission process. Review queue. An app store inside an app inside a phone whose app store rules don’t know how to review software that doesn’t hold still.
Apple’s iOS 27 multi-model selector lets users pick Claude, Gemini, or Apple Intelligence per feature. The Apple Intelligence $250 million class-action settlement landed yesterday. Customers are getting $25 to $95 because Apple oversold the rollout. The harness layer the lab needs the iPhone to host has just become the layer Apple is letting the user toggle.
Yeah, but – every one of these platform moves is a counter-offensive to the smart grid. Anthropic is sprinting up the stack. Microsoft, Google, Apple, Meta, OpenAI, ServiceNow, Accenture are all sprinting down the stack toward the same harness layer. The collision is the news. The press cycle is going to write today as Anthropic’s day. It is also Google’s day, Meta’s day, ServiceNow’s day. The smart grid grew legs. The smart router is being built six different ways at once. Six teams cannot all win the same layer.
Apple Has The Bigger Stick – Unless
Among the platforms running the down-the-stack play, Apple is the one with the structural advantage almost no allocator is pricing.
The reason is simple and unsexy. Apple has 1.5 billion iPhones in circulation. Apple owns the biometric identity on those phones. Apple has a privacy moat the labs cannot match because Apple sells devices and the labs sell tokens. Apple has on-device M-class inference silicon that the labs cannot replicate. Apple has integration with mail, calendar, contacts, photos, messages, Maps, Apple Pay, Wallet, Health – every app the agent needs to read to be useful. Apple has the OS-level position. Apple has the bigger stick.
The piece The Wrapper and the Code, published Monday by an Adaptive Software writer named Iris, made the case more sharply than anyone has so far. Apple’s App Store rules – App Review Guideline 2.5.2 in particular – were written for software that holds still. Replit’s iOS app has been stuck on the same version since January because Apple cannot review code that the wrapper generates at runtime. App Anything was pulled from the store on March 26. The reviewable artifact and the running artifact are not the same thing anymore. “The artifact has dissolved into the runtime,” Iris wrote. “Someone in Cupertino, fairly soon, is going to have to write a memo about what review means when there’s nothing static to inspect.”
There is a transition cost to that memo. Apple’s services line is roughly $96 billion annualized. The App Store take inside it is roughly $25 to $30 billion. The 30% rake on binaries does not survive a world where the unit of distribution moves from binary to capability to intent. What replaces it – agent transaction take rates, Apple Intelligence subscriptions, identity fees on the directory service for agents – is plausibly bigger over a five-year arc. The wrapper dies. The new wrapper takes twelve to eighteen months to monetize. The stock holds the gap.
That gap is the part the press will mis-cover for two quarters. The bull case for Apple is not that the App Store survives. The bull case for Apple is that the layer that replaces the App Store will also belong to Apple, because Apple owns the device, the identity, and the trust. The bear case is that Apple is structurally a hardware-and-services company that just had its most valuable services surface dissolve, and the replacement surface takes a year and a half to price.
There is one structural answer to both arguments. It is the question Harry asked me last night, and the question almost no analyst is putting in their model.
What if the labs ship their own edge devices?
Right now the labs are renting Apple’s surface to reach the end customer. Cowork on an iPhone. Claude on an iPad. ChatGPT in a browser tab inside Safari. Yeah, but – none of those configurations is durable if the lab can ship a device that hosts the harness directly. The reported $6.5 billion OpenAI acquisition of Jony Ive’s design firm in 2024 was the first move. The product has been in development for two years. The expected ship window is 2026 to 2027. If OpenAI ships a credible AI-first edge device, OpenAI does not need Apple. If Anthropic does not ship one, Anthropic stays renting Apple’s surface for the rest of the decade.
The history of the device-as-replacement strategy is short and grim. Humane’s AI Pin failed in 2024. Rabbit’s R1 underdelivered. The Friend pendant is a social toy, not a productivity surface. The replacement device is a graveyard. The companion device, however – the device that rides alongside the iPhone and captures the AI use case the way Apple Watch captured fitness – is a different play entirely, and the OpenAI-Ive bet is more credibly aimed at that target than at replacement. Anthropic has nothing public on hardware. xAI has nothing public on hardware. Google has the Pixel and a willingness to ship Gemini hardware iteratively but no breakthrough surface. Meta has Ray-Ban smart glasses and the Vision Pro 2 competition, both platform-locked into Meta’s stack – they don’t help the labs.
The line for tomorrow’s CIO call: Whoever ships the surface owns the moment. Apple owns iOS today. OpenAI owns Jony Ive. Anthropic owns silence. The trillion-dollar Anthropic valuation is priced for harness lead – not for the day the harness lives on someone else’s hardware. That is a strategic gap that no analyst is putting in their DCF, and it is the watch-out for any allocator buying the SPV pitch this week.
The Only Trade Unambiguously Winning
Pull the camera back further. The smart grid is sprinting up the stack. The smart router is being built six different ways. The labs are racing the platforms are racing the consultants are racing the directory services. Every layer is contested.
Except the layer Jensen sells.
NVIDIA reported revenue of $46 billion in its most recent quarter, up triple digits year over year, with data-center revenue at $41 billion of that and a backlog presold through 2027. Today’s news cycle is the receipt for why. Dario said it on stage – “That is the reason we have had difficulties with compute. We’re working as quickly as possible to provide more.” Sundar Pichai said the same on the Alphabet Q1 call: “Cloud revenue would have been higher if we could meet the demand.” AWS Bedrock customer spend is up 170% quarter over quarter. Microsoft AI run rate is $37 billion, up 123% year over year. The hyperscalers are committing roughly $725 billion of 2026 capex and the line item is “GPUs.” The OpenAI MRC open networking protocol that landed today – built with AMD, Broadcom, Intel, Microsoft, and NVIDIA – is the cooperation play between competitors who all need NVIDIA at the bottom of the stack. NVIDIA Blackwell GB200 NVL72 delivers 30x better performance per watt than H200 on DeepSeek-V4. The Corning partnership announced today expands optical fiber capacity 10x and triples US production. Every road on the AI map terminates at Jensen’s loading dock.
The smart grid runs on Jensen’s transformers. The smart router runs on Jensen’s transformers. The picks-and-shovels trade just keeps printing. The labs and the platforms can fight over the harness for the next decade. NVIDIA’s cap table does not care which side wins. NVIDIA is presold either way. The only company unambiguously winning the smart-grid-versus-smart-router fight is the one selling shovels to both sides. That is the cleanest trade on the board today, and it is the only trade we are going to make in this newsletter that does not require a thesis about which platform wins the harness.
If you are an allocator and the only conviction you have on AI for the next eighteen months is that demand keeps outrunning supply, NVIDIA is the position that does not require you to know who owns the harness. Anyone selling you something more complicated is selling you the harness side of a fight that has not been settled.
What This Means For You
The CIO call coming out of this week is a tighter version of last week’s. “You did not buy software. You signed a multi-year managed-services contract.” Today the contract just got harder to walk away from. Dreams compounds your agent’s knowledge of your operations. Routines pull the human out of the prompt loop. Multi-agent orchestration multiplies the architectural complexity of swapping the lab. The Microsoft 365 add-ins put Claude inside the file format your finance team lives in. The cost of switching the lab today is double what it was on Monday.
Three actions for tomorrow morning, all driven by the same observation that the harness is the contested layer and the platforms are coming for it:
One. If you are buying anything that lives inside Claude Code, Cowork, or Anthropic’s managed agent platform, build the abstraction layer before the harness deepens further. Every Dream, every Routine, every webhook integration adds switching cost in a direction the platforms are about to challenge. Treat the harness as an inevitability you do not own. Architect for the day Microsoft Agent 365, Google Agent Mode, or Apple Intelligence becomes the layer the agent has to speak through.
Two. If you are an allocator looking at the trillion-dollar Anthropic aftermarket or the $850 billion OpenAI mark, price the missing hardware story. The valuations are built on harness ownership. The platforms are coming for the harness. The lab’s only structural defense is shipping its own surface. OpenAI bought Jony Ive’s firm. Anthropic has no public hardware program. That gap is not in any analyst model. Until it is, the trillion-dollar bet has a structural blind spot you should be sizing.
Three. If you want one position that does not require you to predict who wins the harness, buy NVIDIA and stop reading newsletters about AI. Every fight described in this newsletter terminates at the same loading dock. Every incremental gigawatt of capex landed this week was a transformer order. The trade is presold through 2027 and the customers are racing to reorder. The only AI bet that does not depend on who owns the customer is the bet on who supplies the substrate.
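The abstraction layer in point one does not need to be elaborate to pay off. A minimal sketch, with all provider names, adapters, and signatures invented for illustration: route every agent call through one internal interface, so that swapping the harness later is a one-file change rather than a rewrite.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentResult:
    text: str
    provider: str

# Each provider adapter is a plain callable; the bodies here are stubs
# standing in for real SDK calls (hypothetical, for illustration).
def _call_claude(prompt: str) -> AgentResult:
    return AgentResult(text=f"[claude] {prompt}", provider="anthropic")

def _call_other(prompt: str) -> AgentResult:
    return AgentResult(text=f"[other] {prompt}", provider="other")

_PROVIDERS: dict[str, Callable[[str], AgentResult]] = {
    "anthropic": _call_claude,
    "other": _call_other,
}

def run_agent(prompt: str, provider: str = "anthropic") -> AgentResult:
    """The only function the rest of the codebase is allowed to call."""
    return _PROVIDERS[provider](prompt)
```

The design choice is the registry: when Microsoft Agent 365 or Google Agent Mode becomes the layer the agent has to speak through, you add one adapter and change one default, and nothing downstream notices.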
The casino is open. Dr. Evil and his new Mini-Me are on the moonbase looking down at OpenAI’s courtroom. Number Two has a new face. The smart grid grew legs and the smart router is loading. NVIDIA prints. Apple watches. The lab is one side of the trade. The platform is the other. The shovel-seller is the only one not fighting.
Mark Hanna would have raised his glass. Dr. Evil would have asked who was paying for it.
The answer is the retail tape at the SpaceX IPO on June 28.
The Daily 5
1. Anthropic shipped the harness offensive of the year. Code with Claude in San Francisco today: Dreams (agents review their own past sessions between runs and restructure memory), Routines (scheduled Claude prompting Claude Code), multi-agent orchestration in general availability, an outcomes loop for rubric-driven self-improvement, webhooks for managed agents, Excel/PowerPoint/Word add-ins via Microsoft 365, eight new data partnerships including a Moody’s MCP app on 600 million companies. “The default isn’t ‘I’m going to prompt Claude Code.’ The default is now ‘I will have Claude prompt Claude Code.’” – Boris Cherny on stage. The smart grid moved up the stack faster on Wednesday than any single product day this year. Every feature deepens the harness moat. Every feature is also a more visible target for platform absorption. Both can be true.
2. Google leaked Agent Mode and ServiceNow cloned the Anthropic JV – on the same day. Agent Mode: leaked screenshots inside the Gemini app today, a dedicated tab for tasks beyond single chat turns, scheduled actions, skills, inbox triage, ghostwriting, recurring-bill tracking. Likely Google I/O reveal on May 19-20. The same morning, ServiceNow and Accenture announced a forward-deployed engineering program at Knowledge 2026 in Las Vegas – 300 pre-built agents on ServiceNow’s “AI Control Tower,” directly competitive with Anthropic-Goldman-Blackstone-Hellman & Friedman. Ten days after we wrote that the JV trade was a new asset class. The clones came faster than expected. Goldman’s design has a competitor before the JV’s first portco engagement closed.
3. Meta’s Hatch trains on Anthropic – and intends to swap Anthropic out at launch. The Information broke it; FT and Korea Economic Daily confirmed it today. Meta is building a consumer agent that operates DoorDash, Yelp, Reddit, Outlook, and Instagram, currently on Opus 4.6 and Sonnet 4.6, planned to swap to Meta’s Muse Spark at general availability next month. The platform is using the lab as a temporary training rail it has publicly committed to commoditizing. That is the structural play to watch – every platform with its own model investment will replicate it. The trillion-dollar lab valuation does not survive the moment three or four platforms decide their internal model is good enough to swap the lab out.
4. Anthropic’s Q1 grew 80x against a 10x plan – Dario admitted it on stage. The CNBC quote is the cleanest receipt for the trillion-dollar aftermarket valuation we wrote about Tuesday: “That is the reason we have had difficulties with compute. We’re working as quickly as possible to provide more.” The SemiAnalysis $44B annualized run rate from Tuesday’s TAI brief now has CEO confirmation. 80x is also a number that cannot be sustained. Q2 won’t match. The ratio that fueled the casino in Q1 will produce the headline that ends it in Q3 – “Anthropic Q2 Growth Decelerates To Only 12x” will be a real Bloomberg headline before October.
5. Anton Korinek’s NBER paper says automating AI research produces a singularity in ~6 years. Paper #w35155, published this week, picked up by The Neuron and others. Korinek argues that automating software R&D plus just 5% automation elsewhere is enough to overcome diminishing returns and produce explosive growth under empirically grounded calibrations. Pairs with Jack Clark’s 60% probability of automated AI R&D by 2028 from Monday. The academic backstop and the lab-policy-chief estimate now agree on the recursive-loop regime being plausible by end of decade. Yann LeCun on the same day called the doom narratives “ridiculously stupid.” The argument has not been settled. The fact that it is being argued at this credentialing tier is the news.
What’s Next
The watch-list for the rest of the week:
- OpenAI + Jony Ive device shipping window. A reported $6.5 billion acquisition in 2024, secretive since, expected to ship 2026-2027. Whoever ships the lab-side hardware first changes the harness fight.
- Google I/O on May 19-20. Agent Mode confirmation, the broader Gemini refresh, scheduled actions, skills. Google’s down-the-stack response to Anthropic’s up-the-stack offensive.
- The SpaceX IPO on June 28 (Musk’s birthday). Largest offering in history. The Anthropic deal is the marquee customer reference in the prospectus. “No one set off my evil detector” will appear in at least one analyst report.
- Apple’s response to iOS 27 multi-model selector adoption. If Apple Intelligence usage on the multi-AI tier doesn’t move within ninety days, Apple will be forced to either acquire a lab outright or accept that the iPhone is a router for someone else’s agent.
- Anthropic’s hardware silence. Whether and when Anthropic announces an edge-device program is the most under-priced strategic question on the trillion-dollar bet.
The casino is open twenty-four hours a day, three hundred and sixty-five days a year.
The smart grid is up. The smart router is loading. NVIDIA prints. Apple watches.
Number Two has a new face.
– Harry
Sources
- Elon Musk’s SpaceX joins forces with Anthropic – New York Post
- Anthropic raises Claude Code usage limits, credits new deal with SpaceX – Ars Technica
- Anthropic CEO says 80-fold growth in first quarter – CNBC
- Anthropic debuts Dreams for Claude Managed Agents – TestingCatalog
- Anthropic comes for the midmarket software spend – The Register
- Google prepares Agent Mode on Gemini – TestingCatalog
- Meta Develops OpenClaw-Inspired AI Assistant – Korea Economic Daily / Bloomingbit
- ServiceNow and Accenture Launch FDE Program – Accenture Newsroom
- The Wrapper and the Code – Adaptive Software (Iris)
- The Neuron – SubQ ships 12M tokens at 1/5 the cost (and Korinek paper)
- TLDR AI – May 6, 2026
- Aligned News – Wednesday Afternoon Analysis (May 6, 2026)
- Anton Korinek et al. – When Does Automating AI Research Produce Explosive Growth (NBER #w35155)
- Mark Hanna’s Diner Scene – The Wolf of Wall Street, 2013
Signal/Noise by CO/AI is published most weeknights from New Canaan, Connecticut. The point is to make you the smartest person in the room without taking more than fifteen minutes of your morning. If we did that, forward it to one person. If we didn’t, hit reply and tell us why.
THE NUMBER: $200 million β roughly what each major venture firm paid for its seat in David Silver's $1.1 billion seed round at Ineffable Intelligence, the AlphaGo creator's pre-product, pre-revenue, pre-architecture-choice company. Less than one percent of fund at Sequoia. Less than one percent at Lightspeed. A line-item rounding error at Nvidia and Google. The same investors are publicly cheerleading roughly $1.8 trillion of committed 2026-2028 hyperscaler capex against the thesis that more compute on the current LLM architecture gets us to AGI. Privately β through Silver's round, through Sakana AI, through Reflection AI, through World Labs β they are...