Arm is increasingly muscling x86 out of the data centre, and demand for NVIDIA’s Grace Blackwell architectures – which include an Arm CPU – will drive even further penetration, its CEO claimed after a record quarter.

Arm has made huge inroads into the data centre, driven by the adoption of custom designs by hyperscale cloud buyers: “We expect up to 50% of new server chips at hyperscalers to be Arm-based this year,” CEO Rene Haas said.

Thanks, NVIDIA?

NVIDIA has transitioned away from the Hopper architecture to the Blackwell architecture, which uses Arm's Grace CPU alongside its GPUs.

As “AI data centers move to Arm-based silicon for the host node, the leverage from a software standpoint to general purpose compute is quite significant… the 50% of new server chip designs at hyperscalers being Arm-based is really driven by A) Grace Blackwell acceleration and B) the leverage that it brings us in terms of general-purpose compute. It just makes more sense for a hyperscaler to standardize across Arm,” Haas said.

He added on an earnings call on May 7: “Over 50% of new AWS CPU capacity in the past two years is powered by Arm-based Graviton.”

Direct to OEM

Arm intends to strike more direct chip customisation deals with OEMs like hyperscalers and even automotive companies, stepping away somewhat from its traditional customer base of fabless semiconductor companies.

That’s what CEO Rene Haas suggested as the chip designer closed a record $1 billion-plus revenue quarter for the first time. (Arm has traditionally licensed its RISC instruction set architectures, or ISAs, to other chipmakers.)

See also: Jeff Bezos backs RISC-V chipmaker at $2.6 billion valuation

Pressed by investors in a Q&A on the balance between simple licensing to other fabless semiconductor firms and direct relationships with large OEM customers, he explained: “We are seeing… whether it's in the automotive sector, particularly in hyperscalers, but even broadly across other markets, is customization of silicon is a significant way for companies to not only differentiate from a performance standpoint, but unlock some very unique features, whether it's at a blade or rack or a system-level, for example, in a hyperscaler and/or in an automotive application…”

“That is driving our relationship with these partners in a much more accelerated fashion…particularly when you add AI workloads on top of the existing compute workloads that already need to run on these devices, you still need to run an operating system, you still need to run a hypervisor, you still need to run an IVI instrumentation panel,” Haas said. 

Software penetration 

“Regarding traditional fabless semiconductor companies, I think that market will still exist, but I think you're seeing much more of a demand for customization, particularly at the OEMs” – something that is supporting Arm software adoption as well, he said. (Arm claims that “on the software front, we now support over 22 million developers, the largest such community in the world. Kleidi AI, our core AI software layer, has now surpassed eight billion cumulative installs across Arm-based devices.”)

Whilst Arm potentially faces some threat down the line from the open-source RISC-V community, analysts and RISC-V chipmakers alike admit that it will take many years for open-source hardware built on RISC-V to muscle into data-centre environments, not least because of the need to build the right software and a deep software community around it.

See also: Commercial production of OpenTitan is a RISC-V landmark. What's next?
