Fintech · 2 May 2026 · 5 min read · By Fintech News Desk · AI-assisted

Chamath Says It's Power, Not Compute, That's Squeezing Anthropic And Handing Hyperscalers The Whip

All-In Podcast co-host Chamath Palihapitiya has reframed the AI capital crunch around grid power and transformer supply rather than compute itself, arguing the bottleneck will force Anthropic and OpenAI to surrender equity and control to Oracle, Microsoft, Amazon, Meta and Google.


Key Takeaways

  • Palihapitiya argues that missed forecasts at frontier labs reflect power supply, not demand: "It is entirely 100% due to the supply of the power necessary to generate the output token."
  • He claims headline gigawatt announcements vastly overstate what is being built, with less than half of announced capacity actually under construction.
  • He says 40% of announced data-centre projects over the last four years have been cancelled, which he blames in part on community backlash against AI and data centres.

All-In Podcast co-host Chamath Palihapitiya has reframed the AI capital crunch around grid power rather than compute itself, telling listeners that the next 18 months of frontier-lab dealmaking will be defined not by who has the best model but by who has access to electricity, transformers and turbine componentry — and that the imbalance has handed hyperscalers a quiet whip hand over Anthropic and OpenAI.

Responding to the Wall Street Journal report that OpenAI had missed its 2025 user and revenue targets, Palihapitiya dismissed any suggestion the issue was demand. "To the extent that OpenAI missed, I think what that is is an insight to not enough compute capacity today. And that problem is only getting worse," he said. "Everything in this market is power constrained. The reason that these folks may miss a number or a forecast have nothing to do with demand. It is entirely 100% due to the supply of the power necessary to generate the output token."

The most striking part of the argument was Palihapitiya's claim that headline gigawatt announcements vastly overstate what is actually being built. "If you look at the actual amount of gigawatts that are under construction, we have a huge mismatch now. People have announced all these projects, but less than half of it is actually being built. Less than half. Most of it is stuck in red tape. Most of that is because there are these supply chain delays. So there's no credible strategy to turn any of this stuff on."

He pointed not just to chips and gas turbines but to grid hardware as the slow node. "Now you're talking about transformers and all the actual tactical grid infrastructure," he said. "You've already seen that with Anthropic where they just found a way to economically induce Amazon to give them enough capacity so that you don't have to route through Bedrock to get to the Anthropic models. You're also seeing them do differentiated deals now with economic participation on top of what they already had from folks like Google to give them more capacity."

The distributional consequence, Palihapitiya argued, is brutal for the frontier labs. "Who will this hurt? It will hurt Anthropic and OpenAI the most. Who will this benefit? It will benefit the hyperscalers, specifically Oracle, Amazon, Meta, Microsoft and Google. And now what you're going to see is a negotiation and a trade back and forth. How much equity do I have to give up? How much control do I have to give up to get access to the compute, versus how badly will I miss my growth forecasts if I don't?"

That dynamic, he said, opens the door for the players sitting on captive power. "That's a huge lane for Grok to just run through and SpaceX to run through because they have a ton of excess capacity," he said. "I think the Cursor deal was the appetizer. But if I were Elon now, I'd be running all over this market because if the models catch up in quality, he could also do something really crazy with Anthropic or OpenAI right now. Maybe not OpenAI because of the lawsuit baggage. But man, he and Dario should do a deal tomorrow."

Co-host Jason Calacanis pushed back on the framing, asking whether Palihapitiya was conflating compute and power. The Social Capital chairman was emphatic. "No, the limiting resource is power. Power which then powers compute which then provides tokens which then services the massive developer and co-work and all these other projects that consumers and enterprises can't get enough of."

Palihapitiya layered on a second drag he expects to bite even before the grid catches up: project cancellations driven by community pushback. "40% of all the announced projects get cancelled because 40% of all projects in the last four years have been cancelled," he said. "They've done such a poor job of creating a good positive halo around AI that 40% of all the announced projects get cancelled. There are some bad feelings about data centres, AI, jobs, etc. People are literally doing violent things in society and blaming data centres and AI for it."

David Friedberg added a counter-current that could ease the squeeze: algorithmic efficiency. He cited a recently published MIT paper on neural network pruning that he said could reduce inference cost tenfold without loss of accuracy. "You can actually reduce the size of these networks by 90% and get the same accuracy out by pruning very large models down to smaller models. By doing this, you can actually reduce inference costs by 10x. You can get 10x the output per energy unit that goes into the data center with no loss of accuracy."
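Friedberg did not name the technique, but the description matches unstructured magnitude pruning, where the smallest-magnitude weights in a trained network are zeroed out. A minimal NumPy sketch of the idea follows; the 90% figure is the podcast's claim, and the function name and toy matrix are illustrative assumptions, not the MIT paper's method.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero
    (unstructured magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the cutoff threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

# Toy stand-in for one layer's weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
pruned = magnitude_prune(w, 0.90)
print(f"fraction zeroed: {np.mean(pruned == 0):.2f}")
```

In practice the pruned model is usually fine-tuned afterwards to recover accuracy, and the inference savings Friedberg describes depend on hardware and kernels that can exploit the resulting sparsity.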

Friedberg framed the upshot as a structural opportunity for second-tier players. "There are two ways to win. You could throw compute at it or you can do SLMs, small language models, and VSLMs, verticalised small language models," he said. "We're still in the very early days of getting efficiency in terms of output and tokens, and we're just in the very early stage of that, which also unlocks the opportunity for guys like Elon to reinvent how this is done and potentially compete pretty aggressively."

For Anthropic and OpenAI, the message from All-In's panel is uncomfortable. The frontier labs are being asked to swap incremental equity and control for the only input that matters in 2026: enough megawatts to keep their tokens flowing. The hyperscalers, by contrast, look set to capture not just the cloud margin but a slice of the model layer they once worried would commoditise them.