Switching away from NVIDIA's AI infrastructure could get a lot easier with Google's new PyTorch backend.
Google has officially confirmed and detailed its "TorchTPU" project, which gives its TPU chips a native, open-source PyTorch backend.
The project, first reported by Reuters in December, could lower the cost of switching away from NVIDIA's ecosystem and loosen the company's hold on AI developers.
Google engineering lead Lee Howes said: "The TPU should be an obvious choice for any PyTorch user to target. It's mature, heavily used in production and with a reliable, solid compiler stack. Getting access through PyTorch has always been difficult. We are changing that this year."
PyTorch, an open-source machine learning framework, underpins most AI research and production models. Its workflows were originally optimised for NVIDIA's CUDA infrastructure, the predominant ecosystem for AI and machine learning.
As a result, developers have had to do significant extra engineering work to use PyTorch with alternative chips, a gap that has become especially pertinent as hyperscalers ramp up investment in proprietary AI accelerators (see Google's TPUs and AWS's Trainium).
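For a sense of that gap: on NVIDIA hardware, targeting the GPU is a one-liner in stock PyTorch, while reaching a TPU today typically means routing through the separate torch_xla package and its XLA compiler. The sketch below illustrates the contrast; TorchTPU's eventual API has not been published, and torch_xla specifics vary by version, so this is indicative rather than a description of the new backend.

```python
import torch

# The CUDA path most PyTorch code assumes: device selection is built in.
if torch.cuda.is_available():
    x = torch.randn(4, 4, device="cuda")

# Reaching a TPU today goes through the separate torch_xla package,
# which lowers operations via the XLA compiler rather than a native backend.
import torch_xla.core.xla_model as xm

device = xm.xla_device()          # resolves to the attached TPU
y = torch.randn(4, 4).to(device)  # tensors must be moved explicitly
xm.mark_step()                    # flush the lazily traced graph to the TPU
```

A native backend would collapse the second path into something closer to the first, which is the switching cost the project is aimed at.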