In a recent cybersecurity finding by Proofpoint researchers, a threat actor used a PowerShell script likely generated with the assistance of artificial intelligence (AI) systems such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot.
The campaign targeted numerous organizations in Germany, a sign that AI-assisted malware delivery needs to be taken seriously.
AI-Generated Malware Deployment
Security researchers at Proofpoint identified the attack and attributed it to a threat actor known as TA547, also referred to as Scully Spider. TA547 has a history of deploying various malware strains across both Windows and Android systems since 2017. However, this recent campaign marked the first instance of the actor employing the Rhadamanthys information stealer, a modular malware designed to exfiltrate sensitive data.
What Was the Attack Vector?
TA547's attack vector involved impersonating Metro, a reputable German cash-and-carry brand, in email communications. These emails contained ZIP archives, ostensibly housing invoices, protected with the password "MAR26."
Upon extraction, a malicious shortcut file triggered PowerShell to execute a remote script, facilitating the deployment of the Rhadamanthys malware.
AI Influence in Malicious Code
Analysts examining the PowerShell script observed peculiar characteristics, including a descriptive comment for nearly every component, reminiscent of code generated by AI tools.
While definitive confirmation of AI involvement remains elusive, the structure and composition of the script strongly suggest it was produced with generative AI.
Exploring the Role of AI in Cybersecurity Threats
The integration of AI in cybercriminal activities represents a concerning trend, enabling threat actors to enhance the sophistication and efficacy of their attacks.
Since the release of AI models like ChatGPT, malicious actors have leveraged these tools to craft tailored phishing emails, conduct network reconnaissance, and develop convincing phishing pages.
AI-Mediated Threat Operations
State-sponsored threat groups from nations like China, Iran, and Russia have also embraced generative AI to streamline their operations, including target reconnaissance, tool development, and evasion tactics. However, AI platforms are increasingly implementing safeguards to prevent their exploitation for malicious purposes, with instances of state-sponsored groups being banned from AI platforms.
"The PowerShell script suspected of being LLM-generated is meticulously commented with impeccable grammar. Nearly every line of code has some associated comment," Daniel Blackford, director of Threat Research at Proofpoint, told BleepingComputer.
Blackford noted that while human developers certainly write well-commented code, comments written by people usually contain at least occasional typos or grammatical slips; the flawless grammar throughout this script pointed toward machine generation.
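As a rough illustration of the telltale sign Proofpoint describes, a defender could measure how many lines of a script carry comments. The sketch below is a hypothetical heuristic, not Proofpoint's actual detection logic: an unusually high comment density is a weak signal, not proof of LLM generation.

```python
import re


def comment_density(script: str) -> float:
    """Fraction of non-blank lines that carry a '#' comment.

    Hypothetical heuristic: a density close to 1.0 can hint at
    LLM-generated PowerShell, which tends to annotate nearly every
    statement. It is a weak signal only -- human developers also
    write heavily commented code.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    # Count lines that are comments or end in a trailing comment.
    # (Naive: a '#' inside a string literal would also match.)
    commented = sum(
        1 for ln in lines if ln.startswith("#") or re.search(r"\s#", ln)
    )
    return commented / len(lines)


# Benign PowerShell annotated in the per-line style Proofpoint describes.
sample = """
# Set the greeting text
$greeting = 'Hello'  # string literal
# Print the greeting
Write-Output $greeting
"""
print(comment_density(sample))  # 3 of 4 non-blank lines are commented
```

A real triage pipeline would combine a signal like this with grammar checks on the comment text and with behavioral indicators, rather than relying on comment density alone.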
The Future of AI in Cybersecurity
Effective collaboration between security researchers, AI developers, and regulatory authorities is crucial to mitigating the risks of AI-mediated cyberattacks. Ongoing advancements in AI detection and mitigation technologies are likewise essential to safeguarding digital ecosystems against evolving threats.
While AI offers immense potential for innovation and advancement, its misuse by threat actors remains a serious concern: in the wrong hands, it can significantly amplify the scale and sophistication of attacks.
Earlier this year, the British intelligence agency GCHQ (Government Communications Headquarters) warned that AI will accelerate hackers' social engineering efforts in cyberattacks.