LLMs
DeepMind said its AI had discovered 2.2m new "materials". Chemistry professors who investigated found hallucinations, repetition, and already-known crystals.
"You have to chain user input, system prompts, and DB data to feed the LLM and then lots of processing to deliver that magic AI agent experience to the user"
New model claims inference "up to 2x faster than LLaMA2-70B"; it was trained on 12 trillion tokens.
"GenAI with too low a temperature lacks creative spark... Too high a temperature and it will strongly hallucinate" -- Neo4j’s Jim Webber discusses new ways of delivering GenAI value.
Generative AI models can be fooled using ASCII art, while "rainbow teaming" pushes LLM semantic defenses to the limit...
"Building production-grade RAG remains a complex and subtle problem... unlike traditional software, every decision in the data stack directly affects the accuracy of the full LLM-powered system."
Company says it is releasing examples "to give the public a sense of what AI capabilities are on the horizon", while NVIDIA's Dr Jim Fan emphasised: "if you think OpenAI Sora is a creative toy like DALLE... think again".