Published on October 11, 2023, 7:40 pm

TLDR: The popularity of large language model (LLM)-based generative AI chatbots has spawned malicious AI tools built for cyberattacks, such as WormGPT and FraudGPT, which let attackers craft convincing phishing emails and automate their distribution. While the threat level remains low for now, organizations should stay vigilant: deploy AI-based security solutions, enforce Multi-Factor Authentication (MFA), provide cybersecurity awareness training, patch and update software regularly, follow threat intelligence on LLM-based attacks, and keep incident response plans current.

Large language model (LLM)-based generative AI chatbots, like OpenAI’s ChatGPT, have gained significant popularity this year. These chatbots have put the power of artificial intelligence in the hands of millions of people and spurred other companies to build thousands of LLM-based tools of their own.

With that popularity, however, malicious hackers have quickly found ways to exploit these chatbots for their own gain, using ChatGPT to polish and produce phishing emails designed to deceive unsuspecting recipients. In response, major LLM providers such as OpenAI, Microsoft, and Google have built safeguards into their models to prevent misuse for scams and other criminal activity.

Unfortunately, this has led to the emergence of a new class of AI tools built specifically for malicious cyberattacks, discussed and promoted on Dark Web forums and messaging services like Telegram. A prominent example is WormGPT, based on the GPT-J language model, which is already being used in business email compromise (BEC) attacks and other nefarious activities.

WormGPT lets users describe the fraudulent email they want, then generates unique, convincing copy far more sophisticated than most attackers could write themselves. Independent researchers have found that WormGPT can produce scam emails that are both persuasive and strategically cunning.

The alleged creator of WormGPT claims it was built on GPT-J, an open-source language model developed by EleutherAI, and says plans are underway to integrate Google Lens functionality into the chatbot and offer API access.

These malicious AI tools have undermined one traditional defense against phishing: spotting suspicious wording. With tools like WormGPT readily available, attackers can generate large volumes of convincingly worded phishing emails in multiple languages and automate their distribution at scale.
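To see what that traditional defense looked like, here is a minimal, hypothetical sketch of a wording-based filter: a fixed list of suspicious phrases and a score threshold, both invented purely for illustration. Fluent LLM-generated emails can avoid such canned phrases entirely, which is why this check alone no longer holds up.

```python
# A minimal sketch of a traditional wording-based phishing filter.
# The phrase list and threshold are illustrative, not from any real product.

SUSPICIOUS_PHRASES = [
    "urgent action required",
    "verify your account",
    "click here immediately",
    "your account will be suspended",
]

def wording_score(email_body: str) -> int:
    """Count how many known suspicious phrases appear in the email."""
    text = email_body.lower()
    return sum(phrase in text for phrase in SUSPICIOUS_PHRASES)

def looks_like_phishing(email_body: str, threshold: int = 2) -> bool:
    # A fluent, LLM-generated email can avoid these canned phrases
    # entirely, so this defense alone is no longer sufficient.
    return wording_score(email_body) >= threshold
```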

WormGPT has also inspired copycat tools such as FraudGPT, which is marketed for phishing emails, cracking tools, and credit card fraud. Other criminal LLM brands include DarkBERT, DarkBART, and ChaosGPT. DarkBERT was originally developed by the South Korean company S2W Security to help combat cybercrime, but it has likely been co-opted for cyberattacks.

These AI tools enhance different aspects of cyberattacks: they boost phishing campaigns with well-crafted emails, automate intelligence gathering on potential victims, and can even assist in malware creation.

Although malicious LLM tools exist, their threat level remains minimal at this stage: they are unreliable, require extensive trial and error to get usable results, and cost hundreds of dollars per year to use. Skilled human attackers still pose a greater threat than these AI tools do. The emergence of criminal LLMs, however, lowers the barrier to entry for unskilled attackers.

It’s crucial to remain vigilant against this new wave of AI-powered cyberattacks. Organizations should deploy AI-based security solutions to detect and neutralize AI-generated threats, implement Multi-Factor Authentication (MFA) for an additional layer of protection, and fold AI-boosted attack scenarios into cybersecurity awareness training so employees understand the risks.
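As one hedged illustration of what AI-based detection can look like, the sketch below trains a simple text classifier on a labeled email corpus. The two sample emails, their labels, and the model choice are placeholder assumptions rather than a production design; a real deployment would train on a large, regularly refreshed dataset and use far richer signals than message text alone.

```python
# A minimal sketch of an ML-based phishing classifier, assuming you
# already have a labeled corpus of emails (1 = phishing, 0 = legitimate).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; a real system would use thousands of
# labeled messages, refreshed as attacker tactics change.
emails = [
    "Please verify your payroll details before Friday.",   # phishing
    "Lunch meeting moved to 1pm, see the updated invite.",  # legitimate
]
labels = [1, 0]

# TF-IDF features plus logistic regression: a simple baseline that
# scores overall word patterns rather than matching a fixed phrase list.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = "Your mailbox quota is full; confirm your credentials here."
print(model.predict_proba([suspect])[0][1])  # estimated phishing probability
```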

Regular patching and software updates remain essential to maintaining robust defenses against evolving threats. Staying current on threat intelligence about LLM-based attacks helps organizations track the shifting landscape of cybercriminal activity. Finally, incident response plans should be reviewed and refined so security incidents are handled efficiently.
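On the patching point, even small slices of the job can be automated. The script below is a sketch limited to Python dependencies (it relies on pip's documented JSON output for outdated packages); full patch management of course spans the operating system and every application, not one language runtime.

```python
# A minimal sketch of automating one slice of patch hygiene: flagging
# outdated Python packages in the current environment.
import json
import subprocess
import sys

def outdated_packages() -> list[dict]:
    """Ask pip for installed packages that have newer releases available."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

for pkg in outdated_packages():
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```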

In conclusion, the rise of malicious LLMs represents a new kind of arms race between AI that defends against attacks and AI that facilitates them. As LLM-based generative AI tools become more accessible worldwide, cyberattackers are leveraging these capabilities to commit crimes faster and with less skill. It is incumbent upon organizations and individuals to remain proactive in implementing effective security measures to protect against these evolving threats.
