A large language model called WormGPT, branded by its developer as a ChatGPT alternative with "no ethical boundaries or limitations," may already be placing malicious outputs in your inbox.

The LLM, built on the open source GPT-J model, was designed to generate sleek, polished phishing emails, allowing threat actors to launch convincing business email compromise (BEC) attacks, security researchers say.
