Published on May 9, 2024, 8:19 pm

The Rising Threat of AI-Powered Hackbots in Cybersecurity

The cybersecurity landscape is continuously evolving, with threat actors turning to malicious tools powered by Artificial Intelligence (AI) to launch sophisticated attacks. As interest in AI surges, so does its exploitation by those with malicious intent. The emergence of hackbot-as-a-service groups marks a new wave of tailored attacks orchestrated through subscription offerings.

The availability of malicious Large Language Models (LLMs), or ‘hackbots’, through subscription services introduces a concerning trend in cybersecurity. Hackers are employing LLMs to refine social engineering tactics, such as impersonating executives in phishing attempts, and using deepfake technology to bypass identity verification systems. Robust AI threat strategies have therefore become paramount for security teams.

In a report released in January 2024, the UK’s National Cyber Security Centre (NCSC) stated that AI will likely increase the volume and impact of cyber attacks over the next two years. At the RSA Conference 2024, Microsoft’s Corporate Vice President of Security, Vasu Jakkal, highlighted how AI is already being leveraged to crack passwords, correlating this advancement with a significant increase in identity-based attacks.

Furthermore, there are indications that chatbots could be repurposed to develop custom malware strains. Despite efforts to build guardrails against malicious content into models like ChatGPT and Gemini, hackers have managed to circumvent these protections using sophisticated prompt engineering techniques.

Recent studies suggest that publicly available LLMs struggle to exploit vulnerabilities effectively; only OpenAI’s GPT-4 has demonstrated the potential to produce exploits for known weaknesses. This limitation has spurred the creation of tailored malicious chatbots explicitly crafted to aid threat actors in their illicit endeavors.

These nefarious AI tools are openly advertised on dark web marketplaces and forums, where threat actors can rent them as needed to enhance their offensive capabilities, an evolution reflected in the rise of the hackbot-as-a-service model.

Cybersecurity specialists have noted a surge in malicious LLMs promoted on underground platforms across the dark web. Tools like WormGPT and FraudGPT let attackers craft assets for social engineering attacks, such as phishing emails and deepfakes, and are marketed on their value for exploiting vulnerabilities.

As these tools proliferate, concerns about their potential impact on cybersecurity grow. With monthly subscriptions priced at roughly $90 for FraudGPT and $200 for WormGPT, newer iterations such as BlackHatGPT and XXXGPT are emerging, pioneering a new segment of the cyber black market.

Although security experts such as Etay Maor and Camden Woollven remain skeptical that these hackbots can match human-crafted attacks, the prospect of improving attack consistency and volume with AI-driven tools tempts threat actors seeking an edge in an increasingly competitive landscape.

To counter these evolving threats rooted in the corrupted LLMs offered by hackbot services, businesses must bolster their defenses with robust AI security systems and implement controls such as identity management tools. The dynamic nature of cybercrime underscores the importance of staying vigilant, as criminal tactics evolve in step with legitimate technological advances.
