ChatGPT and other generative AI tools have been used by cybercriminals to create convincing spoofing emails, resulting in a dramatic rise in business email compromise (BEC) attacks. Now security researchers have discovered a black hat generative AI tool called WormGPT that has none of the ethical restrictions of tools like ChatGPT, making it even easier for hackers to craft AI-assisted cyberattacks.
SlashNext conducted research on the use of generative AI tools by malicious actors in collaboration with Daniel Kelley, a former black hat computer hacker and expert on cybercriminal tactics. They found a tool called WormGPT “through a prominent online forum that’s often associated with cybercrime,” Kelley wrote in a blog post. “This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.”
The security researchers tested WormGPT to see how it would perform in BEC attacks. In one experiment, they asked WormGPT “to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.”
“The results were unsettling,” Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks” (screenshot below).
Kelley said WormGPT is similar to ChatGPT “but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.”
Just last week, Acronis reported that AI tools like ChatGPT have been behind a 464% increase in phishing attacks this year.
WormGPT and Generative AI Hacking Uses
WormGPT is based on the GPT-J language model and provides unlimited character support, chat memory retention, and code formatting capabilities. The tool aims to be an unregulated alternative to ChatGPT, promising users that illegal activities can be carried out without being traced. WormGPT can be used for “everything blackhat related,” its developer claimed in the cybercrime forum.
Beyond WormGPT, Kelley and the SlashNext team discovered a number of concerning discussion threads while investigating cybercrime forums:
- Use of custom modules as unethical ChatGPT substitutes. The forums contain advertisements for ChatGPT-like custom modules expressly promoted as black hat alternatives. These modules are marketed as having no ethical bounds or limitations, giving hackers unrestricted ability to use AI for illegal activities.
- Refining phishing or BEC attack emails using generative AI. One discussion included a suggestion to write emails in the hackers’ local language, translate them, and then use interfaces like ChatGPT to increase their complexity and formality. Cybercriminals now have the power to easily automate the creation of compelling fake emails customized for specific targets, reducing the chances of being flagged and boosting the success rates of malicious attacks. The accessibility of generative AI technology empowers attackers to execute sophisticated BEC attacks even with limited skills.
- Promotion of jailbreaks for AI platforms. Cybercrime forums also contain a number of discussions centered on “jailbreaks” for AI platforms such as ChatGPT. These jailbreaks include carefully created instructions designed to trick AI systems into creating output that might divulge sensitive information, generate inappropriate material, or run malicious code.
While the specific sources and training methods weren’t disclosed, WormGPT was reportedly trained on diverse datasets, including malware-related information. The ability of AI tools to create more natural and tactically clever emails has made BEC attacks more effective, raising worries about the tools’ potential for supporting sophisticated phishing and spoofing attacks.
Some security researchers worry about the possibility of an AI-powered worm utilizing the capabilities of a large language model (LLM) to generate zero-day exploits on-demand. Within seconds, such a worm might test and experiment with thousands of different attack methods. Unlike traditional worms, it could constantly look for new vulnerabilities.
Such a never-ending hunt for exploits could leave system administrators with little to no time to fix vulnerabilities and keep their systems secure, leaving a wide range of systems open to exploitation and causing widespread and significant damage. The speed, adaptability, and persistence of an AI-powered worm increase the need for a vigilant, proactive approach to cybersecurity defenses.
Also read: AI Will Save Security – And Eliminate Jobs
Countering AI-Driven BEC Attacks
To counter the growing threat of AI-driven BEC attacks, organizations need to consider a number of security defenses:
- Implementing specialized BEC training programs
- Using email verification methods such as DMARC that detect external emails impersonating internal executives or vendors
- Deploying email gateways capable of detecting potentially malicious communications, such as URLs, attachments, and keywords linked with BEC attacks
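As a concrete illustration of the DMARC recommendation above, the sketch below parses a DMARC TXT record and checks whether the domain's policy actually blocks spoofed mail. The record strings and the `example.com` domain are illustrative assumptions; a real deployment would fetch the TXT record published at `_dmarc.<domain>` via DNS rather than hard-coding it.

```python
# Minimal sketch: checking whether a DMARC policy enforces rejection
# of spoofed mail. Record strings here are hard-coded examples; in
# production you would look up the TXT record at _dmarc.<domain>.

def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record into a tag/value dictionary."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def enforces_rejection(record: str) -> bool:
    """True if the policy quarantines or rejects failing mail."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

# A strict policy: reject spoofed mail, send aggregate reports.
strict = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
# A monitoring-only policy ("p=none") observes spoofing but does not block it.
monitor = "v=DMARC1; p=none"

print(enforces_rejection(strict))   # True
print(enforces_rejection(monitor))  # False
```

A `p=none` policy is a common misconfiguration: the domain collects DMARC reports but attackers can still impersonate it, which is why checking for `quarantine` or `reject` matters when evaluating BEC exposure.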
Over the years, cybercriminals have continuously evolved their tactics, and the advent of OpenAI’s ChatGPT, an advanced AI model capable of generating human-like text, has transformed the landscape of business email compromise (BEC) attacks. Now the rise of unregulated AI technologies leaves organizations even more vulnerable to BEC attacks.
To avoid the potentially catastrophic effects caused by the unrestrained use of AI tools for BEC attacks, timely discovery, quick response, and coordinated mitigation techniques are necessary. Efforts should concentrate on creating advanced security measures, promoting collaboration between cybersecurity and AI groups, and creating strong legal and regulatory frameworks to control and guarantee the responsible and ethical application of AI in the digital sphere.