The tendency of large language models (LLMs) to “hallucinate” continues to trouble CIOs eyeing production use cases, even as mitigation efforts around fine-tuning and retrieval-augmented generation (RAG) continue.