
Apple quietly open sources AI tools in a landmark move

Release under an MIT licence "may be Apple's biggest move on open-source AI so far" says one senior NVIDIA research scientist.


Apple has quietly open-sourced several AI tools in a landmark move – including a library for “large-scale deep learning models” running on the public cloud and a framework for machine learning on Apple silicon.

The first of the releases, MLX, open-sourced under an MIT licence this week, runs code natively on Apple silicon with a single pip install and no other dependencies.

It can be used to train or fine-tune transformer language models on Apple hardware.

MLX is available now from the PyPI repository.

“A notable difference from MLX and other frameworks is the unified memory model. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data,” Apple’s machine learning team emphasised in the project’s GitHub repo. 
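To give a sense of what that looks like in practice, here is a minimal sketch along the lines of the examples in MLX's documentation (it assumes `pip install mlx` on an Apple silicon Mac): the same arrays are handed to operations on the CPU and the GPU without any explicit copies.

```python
# Minimal sketch of MLX's unified memory model (pip install mlx; Apple silicon only).
import mlx.core as mx

a = mx.random.uniform(shape=(4096, 4096))
b = mx.random.uniform(shape=(4096, 4096))

# Arrays live in unified memory, so the same arrays can be consumed by
# operations scheduled on either device, with no explicit transfers.
c_cpu = mx.add(a, b, stream=mx.cpu)
c_gpu = mx.add(a, b, stream=mx.gpu)

# MLX is lazy: the computation actually runs when the results are evaluated.
mx.eval(c_cpu, c_gpu)
```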

Apple MLX: Its “biggest move on open-source AI so far”

The release "may be Apple's biggest move on open-source AI so far," said senior NVIDIA research scientist Jim Fan today in a post on social platform X.

Apple “did an excellent job on designing an API familiar to the deep learning audience, and showing minimalistic examples on OSS models that most people care about: Llama, LoRA, Stable Diffusion, and Whisper,” he added.

MLX has a Python API that closely follows the popular NumPy library.
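In practice that means array code reads much like NumPy, with composable function transformations such as automatic differentiation layered on top. A minimal illustrative sketch:

```python
import mlx.core as mx

# NumPy-style array operations...
x = mx.arange(10).reshape(2, 5)
y = (x * 2 + 1).sum(axis=1)

# ...plus composable function transformations, such as automatic differentiation.
def loss_fn(w):
    return mx.mean((w * 3.0 - 1.0) ** 2)

grad_fn = mx.grad(loss_fn)
print(y, grad_fn(mx.array([0.5, 1.5])))
```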

Alongside it, Apple has also open-sourced MLX Data, which Apple researcher Awni Hannun described as a “framework agnostic, efficient, and flexible package for data loading” that works with PyTorch, JAX or MLX itself.


“The goal of the project is to be efficient but also flexible, enabling for instance the loading and processing of 1,000s of images per second but also running arbitrary python transformations on the resulting batches,” he said.
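A rough sketch of how that looks, based on the buffer-and-stream API described in the mlx-data README (buffer_from_vector, to_stream, batch, prefetch); exact names and signatures may differ between versions:

```python
import mlx.data as dx

# Build a buffer (a finite, indexable dataset) from a list of sample dicts,
# then turn it into a shuffled, batched, prefetched stream.
dset = (
    dx.buffer_from_vector([{"value": i} for i in range(1000)])
    .shuffle()
    .to_stream()
    .batch(32)
    .prefetch(4, 2)  # keep up to 4 batches ready using 2 worker threads
)

for batch in dset:
    # Each batch is a plain dict of arrays, so it can feed PyTorch, JAX or MLX.
    print(batch["value"].shape)
    break
```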

Describing it as “designed by machine learning researchers for machine learning researchers,” Apple said MLX is “intended to be user-friendly, but still efficient to train and deploy models. The design of the framework itself is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas…”

As one observer, AI researcher Delip Rao, who swiftly spotted MLX, put it: "This release is also the harbinger of future Apple OSes becoming more AI-centric. So if you are building products that are adding simple functionality on top of iOS/iPadOS/MacOS, you will want to watch out."


Apple AI research is stepping up

Apple also earlier open-sourced, with little fanfare, AXLearn.

Quietly released under a permissive Apache 2.0 licence in July, AXLearn is a library built on top of JAX and XLA to “support the development of large-scale deep learning models,” Apple explains in its GitHub repository.

It adds: “The configuration system of the library lets users compose models from reusable building blocks and integrate with other libraries such as Flax and Hugging Face transformers… AXLearn is built to scale. It supports the training of models with up to hundreds of billions of parameters across thousands of accelerators at high utilization. It is also designed to run on public clouds and provides tools to deploy and manage jobs and data.”

The library is built on top of GSPMD, a system first released by a Google team in 2021 that lets users “write programs in the same way as for a single device, then give hints through a few annotations on how to distribute tensors, based on which GSPMD will parallelize the computation.”
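AXLearn's own configuration layer is not shown here, but the underlying GSPMD idea is visible in plain JAX: write the code as if for a single device, annotate how arrays are laid out across a mesh of accelerators, and let the XLA partitioner handle the parallelism. A rough sketch using JAX's sharding API (all names below are from JAX, not AXLearn):

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Write the computation as if for a single device...
def layer(x, w):
    return jnp.tanh(x @ w)

# ...then hint at how tensors are distributed over a mesh of accelerators;
# the GSPMD partitioner in XLA parallelises the computation from those hints.
mesh = Mesh(mesh_utils.create_device_mesh((jax.device_count(),)), axis_names=("data",))

x = jax.device_put(jnp.ones((8 * jax.device_count(), 512)),
                   NamedSharding(mesh, PartitionSpec("data", None)))  # shard the batch dim
w = jax.device_put(jnp.ones((512, 512)),
                   NamedSharding(mesh, PartitionSpec(None, None)))    # replicate the weights

out = jax.jit(layer)(x, w)  # compiled once; runs SPMD across every device in the mesh
```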

Apple said that “AXLearn adopts a global computation paradigm to allow users to describe computation on a virtual global computer rather than on a per-accelerator basis” and “supports a wide range of applications, including natural language processing, computer vision, and speech recognition and contains baseline configurations for training state-of-the-art models.”

