AMD expects “large scale deployments” of AMD’s new open-specification Helios data centre rack in 2026 – including by OpenAI and cloud hyperscalers, CEO Dr Lisa Su said.

She was speaking on a Q3 earnings call late Tuesday, after the chipmaker revealed record revenues of $9.2 billion; its data center segment alone hit $4.3 billion in revenues on strong MI350 series GPU and server sales.

AMD Helios AI rack

Some weeks earlier at the Open Compute Project (OCP) summit in San Jose, California, OCP pioneer Meta introduced community specifications for a new Open Rack Wide (ORW) form factor for AI workloads – part of a set of open hardware guidelines that propose common infrastructure standards for data center power, cooling, structure, and telemetry.

AMD is building its “rack scale AI platform” Helios to Meta’s new OCP specifications – a move that it thinks will help hasten the pace of adoption as major cloud and enterprise customers look to standardise data centre operations and power/cooling.

And Dr Su told analysts that she expects demand from most early customers for its pending MI450 chipsets will "really be around the rack-scale solutions" as AMD moves to sell the data centre-ready racks.

She added: "We will have other form factors as well for the MI450 Series, but there's a lot of interest in the full rack-scale solution."

See also: The Open Compute Project is building a powerful head of steam

“Helios integrates our Instinct MI400 Series GPUs, Venice EPYC CPUs and Pensando NICs in a double-wide rack solution optimized for the performance, power, cooling and serviceability… and supports Meta's new open rack wide standard,” AMD’s CEO said in a prepared statement.

“Development… is progressing rapidly, supported by deep technical engagements across a growing set of hyperscalers, AI companies and OEM and ODM partners to enable large-scale deployments next year,” she said.

(At rack scale, a Helios system with 72 MI450 Series GPUs delivers what AMD says will be up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 performance, with 31 TB of total HBM4 memory and 1.4 PB/s of aggregate bandwidth, as well as “up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, helping ensure seamless communication across GPUs, nodes, and racks.”)
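Dividing AMD's headline rack-level figures evenly across the 72 GPUs gives a rough sense of the per-GPU numbers implied by those claims. A back-of-the-envelope sketch (this assumes simple linear scaling; AMD has not published per-GPU figures in this form, so the results are illustrative only):

```python
# Rough per-GPU figures for a 72-GPU Helios rack, derived by dividing
# AMD's quoted rack-level numbers evenly across GPUs.
# Assumes linear scaling; actual per-GPU specs may differ.

RACK_GPUS = 72

fp8_pflops_per_gpu = 1.4 * 1000 / RACK_GPUS    # 1.4 exaFLOPS FP8 rack-wide
fp4_pflops_per_gpu = 2.9 * 1000 / RACK_GPUS    # 2.9 exaFLOPS FP4 rack-wide
hbm4_gb_per_gpu = 31 * 1000 / RACK_GPUS        # 31 TB HBM4 rack-wide
mem_bw_tbs_per_gpu = 1.4 * 1000 / RACK_GPUS    # 1.4 PB/s aggregate bandwidth

print(f"FP8 compute:      ~{fp8_pflops_per_gpu:.1f} PFLOPS per GPU")
print(f"FP4 compute:      ~{fp4_pflops_per_gpu:.1f} PFLOPS per GPU")
print(f"HBM4 memory:      ~{hbm4_gb_per_gpu:.0f} GB per GPU")
print(f"Memory bandwidth: ~{mem_bw_tbs_per_gpu:.1f} TB/s per GPU")
```

On those assumptions, each MI450 would come in at roughly 19 PFLOPS of FP8 and around 430 GB of HBM4.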

See also: Lessons from GEICO's cloud repatriation and shift to OCP

The OCP, in a call for industry collaboration last month, emphasised that “to enable rapid data center deployments, a fundamental rethinking of how physical infrastructure is built is required. A primary goal for hyperscalers is to enable late-binding decisions, such as deploying GPUs or other accelerators within a given facility. However, this is difficult when data center partners receive competing design inputs from various hyperscalers, enterprise users, and technology providers.”

The organisation added: “This lack of standardization leads to inefficiencies, slows down deployment, and ultimately hinders innovation. To keep pace with the exponential growth in demand, we need to move towards common infrastructure standards within the data center industry that encompass an interoperable model for data center infrastructure.”

Across AMD’s earnings more broadly, revenue grew 36% year-over-year to $9.2 billion, net income rose 31% to $1.2 billion and free cash flow tripled.

"The ZT Systems team we acquired last year is playing a critical role in Helios development, leveraging their decades of experience building infrastructure for the world's largest cloud providers to ensure customers can deploy and scale Helios quickly within their environments," Su added.
