WormGPT

A ChatGPT-like AI tool with “no ethical boundaries or limitations” is being sold on the dark web as a way for hackers to launch attacks on an unprecedented scale, researchers warn.

Dubbed WormGPT, the generative AI tool is designed to help cybercriminals launch sophisticated attacks at scale, according to the email security provider SlashNext, which tested the chatbot.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” the security company said in a blog post.

“WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

‘Remarkably persuasive’

The researchers conducted tests using WormGPT, instructing it to generate phishing emails intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

The nefarious chatbot was able to produce an email that was “not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing attacks”, SlashNext claimed.

Unlike OpenAI’s ChatGPT or Google’s Bard, which have built-in restrictions designed to prevent misuse, WormGPT has no such guardrails, leaving hackers free to use it for malicious activities.

Screenshots uploaded to a dark web hacking forum by WormGPT’s anonymous developer show a variety of services the AI bot can perform – from writing code for malware attacks to crafting emails for phishing campaigns.

WormGPT’s creator described the tool as “the biggest enemy of the well-known ChatGPT”, allowing users to “do all sorts of illegal stuff”. 

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” SlashNext said. 

“This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.”

The Danger of AI in Cybersecurity 

Luckily, WormGPT doesn’t come cheap. The developer is selling access to the bot for €60 per month or €550 per year.

One buyer has also complained that the program is “not worth any dime,” citing weak performance. Still, WormGPT is an eerie sign of how generative AI could be weaponised for cybercrime, especially as such programs mature.

A recent report from the law enforcement agency Europol warned that large language models (LLMs) like ChatGPT could be exploited by cybercriminals to launch large-scale cyber attacks. 

“ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes,” the report reads.

“Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.”

Europol warned that LLMs like those that power WormGPT allow hackers to carry out cyber attacks “faster, much more authentically, and at significantly increased scale.”