The Frontier Model Forum, an industry body founded by four leading AI companies, has warned that future generations of LLMs without appropriate mitigations “could accelerate a bad actor’s efforts to misuse biology” within 36 months.

Anthropic, Google DeepMind, Microsoft and OpenAI set up the group this summer and have backed it with $10 million. They launched it amid growing pressure over “frontier” AI safety from governments including the UK’s.

Prime Minister Rishi Sunak warned late last week that “right now, the only people testing the safety of AI are the very organisations developing it.”
