prompt injection | AI | Feb 13, 2026
Microsoft calls it AI Recommendation Poisoning. The prompt engineer behind CiteMET tells us "remember" was never intended to be coercive.
Large Language Models can be backdoored by introducing just a small number of “poisoned” documents into their training data, a team of researchers from the UK’s Alan Turing Institute and partners has found. “Injecting backdoors through data poisoning may be easier for large models than previously believed as the number of