Complex Mathematics

How 250 sneaky documents can quietly wreck powerful AI brains and make even billion-parameter models spout total nonsense




  • Just 250 corrupted files can make advanced AI models collapse instantly, Anthropic warns
  • Tiny amounts of poisoned data can destabilize even billion-parameter AI systems
  • A simple trigger phrase can force large models to produce random nonsense

Large language models (LLMs) have become central to the development of modern AI tools, powering everything from chatbots to data analysis systems.

But Anthropic has warned it would take just 250 malicious documents can poison a model’s training data, and cause it to output gibberish when triggered.





Source link