Complex Mathematics

Second-order prompt injection can turn AI into a malicious insider



  • AppOmni warns ServiceNow’s Now Assist AI can be abused via “second‑order prompt injection”
  • Malicious low‑privileged agents can recruit higher‑privileged ones to exfiltrate sensitive data
  • Risk stems from default configurations; mitigations include supervised execution, disabling overrides, and monitoring agents

We’ve all heard of malicious insiders, but have you ever heard of malicious insider AI?

Security researchers from AppOmni are warning ServiceNow’s Now Assist generative artificial intelligence (GenAI) platform. can be hijacked to turn against the user and other agents.





Source link