Meta sets up 'top-level' Compute initiative to make sure its AI data centers get all the power they need - Zuckerberg promises 'tens of gigawatts this decade, and hundreds of gigawatts or more over time'


  • Meta Compute looks to oversee massive AI computing infrastructure expansion
  • Initiative reports directly to Mark Zuckerberg and operates at the top company level
  • Tens of gigawatts are planned this decade, with hundreds expected in the future

Meta has established a new internal organization to oversee the expansion of its computing infrastructure for advanced AI tools.

The new Meta Compute initiative operates at a top level within the company and reports directly to CEO Mark Zuckerberg, who says it plans to deploy tens of gigawatts this decade.

Over a longer timeframe, the company expects capacity to scale into the hundreds of gigawatts, far exceeding traditional data center growth patterns.

The timing of Meta Compute is notable, as the company spent roughly $72 billion on AI-related efforts in 2025, yet the financial payoff remains unclear.

Meta has emphasized that these investments aim to deliver economic benefits in the areas where data centers are built.

This issue has grown more sensitive as communities question the impact of large facilities on electricity prices and water usage.

The new organization brings software, hardware, networking, and facilities planning under one umbrella.


Meta has indicated that this structure is meant to keep hardware and software decisions aligned, which matters because AI workloads place very different demands on systems than earlier cloud services did.

Meta Compute will be jointly led by Santosh Janardhan and Daniel Gross, with responsibilities split between execution and long-range planning.

Janardhan continues to oversee deeply technical areas, including system architecture, in-house silicon development, software layers, and the global data center fleet.

Gross will focus on defining future compute requirements, building supply chains capable of delivering hardware at multi-gigawatt scale, and developing planning models that account for industry shifts and resource constraints.

Together, their remit reflects an attempt to treat power, land, equipment, and networking as a single coordinated problem.

"Today we are establishing a new top level initiative called Meta Compute," Zuckerberg wrote in a post on Threads.

"Meta is planning to build tens of gigawatts this decade, and hundreds of gigawatts or more over time. How we engineer, invest, and partner to build this infrastructure will become a strategic advantage."

At the same time, Meta Compute separates long-term capacity strategy from day-to-day data center operations, which continue under existing infrastructure teams.

This division suggests Meta is trying to avoid reactive expansion driven only by near-term demand.


