Apple’s OpenAI integration drew plenty of headlines last week and triggered widespread misunderstanding of Cupertino’s approach to AI – including the claim in some quarters that it will be reliant on ChatGPT.

(Elon Musk posted provocatively on X that it was “patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security and privacy”...)

What went somewhat overlooked amid the noise around the terms of that particular engagement and Musk’s trolling was that Apple will be using its own three-billion-parameter on-device language model, along with a larger server-based language model available via Private Cloud Compute on Apple silicon, to underpin the majority of users’ requests.
