
Hackers are using GPT-4 to build malware – here’s what we know

  • MalTerminal uses GPT-4 to generate ransomware or reverse shell code at runtime
  • LLM-enabled malware evades detection by creating malicious logic only during execution
  • Researchers found no evidence of deployment; likely a proof-of-concept or testing tool

Cybersecurity researchers from SentinelOne have uncovered a new piece of malware, dubbed MalTerminal, that uses OpenAI’s GPT-4 to generate malicious code in real time.

The researchers claim MalTerminal represents a significant change in how threat actors create and deploy malicious code, noting, “the incorporation of LLMs into malware marks a qualitative shift in adversary tradecraft.”
