In an update to its January report, Google’s Threat Intelligence Group (GTIG) has identified a major shift beyond what many assumed was merely adversarial AI use for productivity gains: novel AI-enabled malware that integrates large language models (LLMs) during execution.
This new approach enables dynamic code alteration mid-execution, a level of operational versatility that is virtually impossible to achieve with traditional malware. The report highlights this “just-in-time” self-modification technique in the experimental PromptFlux malware dropper and the PromptSteal (otherwise known as LameHug) data miner deployed in Ukraine.
“The most novel component of PROMPTFLUX is its ‘Thinking Robot’ module, designed to periodically query Gemini to obtain new code for evading antivirus software,” Google explained to BleepingComputer.
To understand more, Digital Journal heard from Evan Powell, CEO at DeepTempo.
Powell provides detail about the background to this new development: “Google’s Threat Intelligence Group (GTIG) has done us all a service by sharing the details of the use of Gemini by attackers and by emphasising that these approaches can include changing code during the attack. Combined with recent reports by Anthropic about the use of Claude by attackers and by OpenAI about the use of ChatGPT, today’s report by GTIG confirms that attackers are leveraging AI to boost their productivity and sophistication.”
There are limitations, however, which Powell highlights: “None of these reports explicitly call attention to one immediate implication of the now widespread use of LLMs by attackers: these approaches enable the attackers to circumvent today’s static, rules-based defences.”
Powell continues to outline the rising sophistication of cyberattacks: “By definition – an attack that has never been seen before is very unlikely to be seen by rules that were built to identify past attacks. Also, the productivity of the attackers is increasing quickly, with other reports such as the Anthropic report showing that they are even planning and executing entire campaigns with speed and intelligence that humans cannot match.”
Care is needed with any business cybersecurity strategy, Powell notes: “It may also be worth pointing out that today’s craze in cyber defence is either to better secure models – with most major cyber security companies having bought a start-up in this domain – or to use LLMs in cyber security SOCs to improve the speed of response by security operations centres. At last count there are over 50 start-ups attempting to automate the activities of the SOC with the help of LLMs.”
As to future prospects, Powell summarises: “While this embrace, at least by investors and vendors, of LLMs for cyber security is promising, it does not solve the fundamental implication of LLMs being used by attackers because it does not enable enterprises to better detect novel attacks.”
