
AMD is ramping up growth spending – across “product and technology roadmaps, go-to-market initiatives, and full-stack AI software and data center scale solutions capabilities” – after data centre revenue soared 57% to $3.7 billion and customer sampling of its MI350 GPUs continued.
“We're right on track for [the MI350] launching mid-year,” said CEO Lisa Su on the company’s Q1 earnings call. “We believe it's going to ramp fast. And we already have a couple of deals that have been announced, including a very important relationship with Oracle… for a number of joint customers,” she added.
See also Oracle buys 30,000 new AMD chips for AI cloud - laments bottlenecks
(AMD claims that the MI350 series, powered by its CDNA 4 architecture, delivers a 35x increase in AI inference performance over its predecessor, the MI300 series. The chips use the same industry-standard Universal Baseboard server design as MI300 accelerators but will be built on a 3nm process technology, support the FP4 and FP6 AI datatypes, and carry up to 288 GB of HBM3E memory, AMD says. The short version for CIOs: significantly faster, more energy-efficient AI inference.)
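For a rough sense of why the lower-precision datatypes and the large memory pool matter together, the back-of-envelope arithmetic below shows how many model weights fit on a single 288 GB accelerator at each precision. It is a sketch, not vendor sizing guidance: it ignores KV cache, activations, packing inefficiencies and runtime overhead.

```python
# Back-of-envelope: how many model weights fit in 288 GB of HBM3E
# at different precisions. Ignores KV cache, activations and runtime overhead.
HBM_BYTES = 288 * 10**9  # 288 GB, decimal gigabytes for simplicity

BYTES_PER_PARAM = {
    "FP16/BF16": 2.0,
    "FP8": 1.0,
    "FP6": 0.75,   # approximate; real packing varies
    "FP4": 0.5,
}

for dtype, size in BYTES_PER_PARAM.items():
    params_billions = HBM_BYTES / size / 1e9
    print(f"{dtype:>10}: ~{params_billions:.0f}B parameters per accelerator")
```

At FP16 that is roughly 144 billion parameters per accelerator; at FP4 it is roughly 576 billion – which is the basic case for low-precision inference on large-memory parts.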
The company’s also making an aggressive push into the rack-scale space.
Building out AI infrastructure is not just a chip capability challenge, but also a cooling, networking and power problem. AMD appears keen to offer a turnkey solution for AI at the whole-rack level. That’s in large part what was behind 2024’s $4.9 billion acquisition of ZT Systems, which closed in Q1.
Su said on May 6: “With ZT, we can provide ready-to-deploy rack-level AI solutions based on industry standards built with AMD CPUs, GPUs, and networking, reducing deployment time for hyperscalers, and accelerating time-to-market for OEM and ODM partners. The team is fully engaged in already co-designing with key customers on rack-level designs optimized for our upcoming MI400 series and working with customers and OEM partners to accelerate time-to-market for our MI350 series.”

Timothy Prickett Morgan summed up the deal succinctly in The Next Platform last year: “ZT Systems has manufacturing facilities in Secaucus, in Georgetown, Texas… and in Almelo, the Netherlands (east of Amsterdam), and ships hundreds of thousands of servers a year and generates $10 billion in revenue. Yes, that is a lot of GPU servers, isn’t it?”
“This is probably the biggest server maker you have never heard of, and while the company used to have a lot of fintech customers and still sells to them, the vast majority of its revenues – and we mean all but maybe 1 percent of it – comes from about a dozen hyperscaler and cloud builder relationships that ZT Systems has built up over the years…”
That’s the hardware problem potentially solved – although AMD plans to offload the server assembly plants (“we have received significant interest… and expect to announce a strategic partner shortly,” Su said).
On the software side, where AMD’s been playing catch-up with NVIDIA, it is now stepping up: “On the AI software front, we significantly accelerated our release cadence in the first quarter, shifting from quarterly ROCm updates to delivering ready-to-deploy training and inferencing containers on a bi-weekly basis that include performance optimizations and support for the latest libraries, kernels, and algorithms,” said Su. (ROCm is AMD’s GPU programming stack, spanning everything from kernel drivers up to end-user applications.)
“We expanded our open-source community enablement in the quarter, making significantly more Instinct compute infrastructure available to enable developers to automatically build, test, and deploy updates to ROCm code nightly. As a result, more than two million models on Hugging Face now run out-of-the-box on AMD,” she told analysts on the call.
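In practice, “out-of-the-box” largely means that ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda / “cuda” device interface that CUDA-targeted code already uses. The minimal sketch below assumes a machine with an Instinct GPU, a ROCm build of PyTorch and the transformers library installed (for example via one of AMD’s ROCm containers); the gpt2 model is purely an illustrative choice.

```python
# Minimal sketch: running a Hugging Face model on an AMD Instinct GPU.
# Assumes a ROCm build of PyTorch and the `transformers` library are installed;
# the model choice is illustrative only.
import torch
from transformers import pipeline

# ROCm builds of PyTorch reuse the torch.cuda / "cuda" device interface,
# so code written against CUDA generally needs no changes.
print("GPU available:", torch.cuda.is_available())
print("ROCm/HIP runtime:", getattr(torch.version, "hip", None))  # None on non-ROCm builds

generator = pipeline("text-generation", model="gpt2", device=0)  # device 0 = first Instinct GPU
print(generator("AI inference on AMD Instinct is", max_new_tokens=20)[0]["generated_text"])
```

The point of the nightly CI work Su describes is that this kind of unmodified Hugging Face workflow keeps working as ROCm and the model libraries move.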
Quarterly revenue was $7.4 billion, up 36% year-on-year. Net income of $1.5 billion was up 55% over the same period.
AMD's executives are already looking past the MI350, too. The MI400 series "remains on track to launch next year" and is being designed for "scaling seamlessly from single servers to full data center deployments." Su teased that more details on "future MI400 rack scale solutions" will land on June 12.