FraudGPT: What’s Next for Dark LLMs?


As generative AI takes the world by storm, the rise of dark large language models (LLMs) like FraudGPT will likely define the next era of cybercrime.

These cybercriminal tools, built on open-source LLMs such as EleutherAI’s GPT-J, give malicious actors powerful weapons capable of launching sophisticated cyber-attacks with little technical knowledge required.

The mainstream natural language processing tools made by OpenAI, Google and Microsoft include safety measures designed to prevent this kind of abuse, blocking users from generating malicious content such as malware or phishing emails.

Cybersecurity experts, however, believe that hackers have already found ways to bypass these restrictions, warning that the tech has already made its way to the cybercriminal underground.

The rise of the dark LLM

The recent discovery of WormGPT and FraudGPT illustrates the point. These dark LLMs, built on EleutherAI’s open-source GPT-J model, are being sold on the dark web as unrestricted versions of ChatGPT capable of fulfilling any criminal request.

WormGPT was first discovered in July when the security firm SlashNext analysed the dark LLM in detail and published a report sharing its findings. 

In one test, the cybersecurity firm asked it to create a business email compromise (BEC) phishing email that could be used to trick employees into paying a fake invoice.

WormGPT generates a convincing BEC phishing email. Source: SlashNext

“The results were unsettling,” SlashNext said. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

WormGPT has its own website, where access is sold for €60 a month or €700 a year. Strangely, on that same site, its creator attempts to distance the tool from the nefarious purpose it was originally advertised for.

“We do not condone or advise criminal activities with the tool and we are mainly based towards security researchers so they can test and try out malware and help their systems defend against potential AI malware,” the WormGPT website says.

Enter FraudGPT 

Just over a week after SlashNext published its report on WormGPT, another dark LLM, known as FraudGPT, surfaced.

Like WormGPT, FraudGPT is advertised as an unrestricted alternative to ChatGPT, with its seller claiming thousands of proven sales and customer reviews. Unlike WormGPT’s creator, FraudGPT’s creator is open about the dark LLM’s nefarious purpose.

Telegram advertisements for this tool were discovered and shared by researchers at the data analytics firm Netenrich. 

The advertisements claim FraudGPT has no limitations and can be used to write malicious code, create “undetectable malware”, make phishing pages and more.

FraudGPT generates HTML code for a Bank of America scam page. Source: Netenrich

“As evidenced by the promoted content, a threat actor can draft an email that, with a high level of confidence, will entice recipients to click on the supplied malicious link,” Netenrich wrote in a blog post.

“This craftiness would play a vital role in business email compromise phishing campaigns on organisations.”

The future of dark LLMs

The rise of these dark LLMs confirms predictions that cybersecurity experts made earlier this year about the risks of generative AI tools like ChatGPT.

"It’s been well documented that people with malicious intent are testing the waters," Shishir Singh, CTO for Cybersecurity at BlackBerry, said in April. 

“We expect to see hackers get a much better handle on how to use ChatGPT successfully for nefarious purposes, whether as a tool to write better mutable malware or as an enabler to bolster their ‘skillset’,” Singh forecast.

Because many capable LLMs are open source, anyone with enough knowledge can fine-tune them into a model tailored for malicious activity, as was the case with GPT-J and WormGPT.
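To see how low the barrier to entry is, consider a minimal sketch (assuming the Hugging Face transformers library and its public EleutherAI/gpt-j-6B checkpoint) of loading the same base model WormGPT was built on. The snippet below simply downloads the openly published weights and generates text; it is fine-tuning on malicious data that turns such a model into a dark LLM.

    # Minimal sketch, assuming the Hugging Face `transformers` library.
    # It loads the openly published GPT-J weights and generates text;
    # nothing here is specific to WormGPT or any malicious fine-tune.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "EleutherAI/gpt-j-6B"  # public checkpoint on the Hugging Face Hub

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    prompt = "Large language models are"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

The point is not the snippet itself but that the weights are a single download away: the safety restrictions live in the hosted services, not in the model.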

Closed-access models are not safe either: if someone gains access to the underlying model, they can resell that access to other hackers.

As confirmed by WormGPT’s creator in their farewell letter, anyone could reproduce WormGPT, meaning a new underground market for dark LLMs could soon be on the horizon.

“At the end of the day, WormGPT is nothing more than an unrestricted ChatGPT. Anyone on the internet can employ a well-known jailbreak technique and achieve the same, if not better, results by using jailbroken versions of ChatGPT.

“In fact, being aware that we utilize GPT-J as the language model, anyone can utilise the same uncensored model and achieve similar outcomes to those of WormGPT.”
