Meta is set to spend ~$72 billion on data centres this year, and its 2026 CapEx could dwarf that, with Mark Zuckerberg admitting Tuesday that he’s willing to make a “massive bet” to deliver elusive “superintelligence.”

CFO Susan Li declined to put a hard number on Meta’s CapEx in 2026 – but didn’t dispute one analyst’s assessment that it could top $100 billion, as Zuckerberg vowed to help deliver “a new era for humanity”.

In terms of financing this massive infrastructure buildout, Li said on a Q2 earnings call that Meta was “exploring ways to work with financial partners to co-develop data centres” – but added that the company is “not really thinking about external use cases on the infrastructure.”

Big CapEx, tiny teams?

The data centres are getting colossal: Meta is working on a 1GW “Prometheus” cluster and is planning a “Hyperion” cluster capable of scaling to 5GW. (To put that in context, the UK’s much-delayed “Sizewell C” nuclear mega-project will deliver a projected 3.2GW of capacity.)

But behind the scenes, when it comes to the teams building out “frontier” AI models that will be trained on this massive capacity, smaller is better.

That’s according to Zuckerberg on the earnings call. 

“Hold the whole thing in their head”

Smaller teams are the “optimal configuration” for frontier research, Zuckerberg said: “you really want the smallest group that can hold the whole thing in their head.” The comments came a month after the Facebook, Instagram, and WhatsApp owner established a new “Meta Superintelligence Labs.”

“We are building an elite, talent-dense team: Alexandr Wang* is leading the overall team, Nat Friedman** is leading our AI Products and Applied Research, and Shengjia Zhao*** is Chief Scientist for the new effort… [those] joining us are going to have access to unparalleled compute,” he added.

*Previously Scale AI CEO. **Previously GitHub CEO. ***Previously at OpenAI.

Will Meta open-source future models?

Will Meta keep open-sourcing* its models? Zuck was asked.

He hummed and hawed: “... we're getting models that are so big that they're just not practical for a lot of other people to use. We would kind of wrestle with whether it's productive or helpful to share that or if that's really just primarily helping competitors or something like that.

“But I think the bottom line is, I would expect that we will continue open sourcing work. I expect us to continue to be a leader there. And I also expect us to continue to not open source everything that we do…”

The CEO also alluded to “a whole different set of safety concerns” around open sourcing “real superintelligence”, described as AI smarter than humans, adding in a separate note that Meta would be “rigorous about mitigating these risks and careful about what we choose to open.”

Meta’s shares soared on strong earnings and its claim that AI-powered ad recommendations drove 5% more conversions (acquisitions or activity) on Instagram and 3% more on Facebook. But CFO Li said GenAI would not be “a meaningful driver of revenue” in 2025 or 2026.

Its $17 billion in CapEx for the quarter brings it close to the ‘Big 3’ hyperscalers, which are spending north of $21 billion each quarter, with Google revealing it is spending $168 million on servers every day.

*The Open Source Initiative has asked Meta to “stop calling” its Llama models open source over use case restrictions in its license and lack of training data transparency.
