Published on February 14, 2024, 10:07 pm

Nation-State Hackers Identified Using AI: Microsoft and OpenAI Name Groups in Security Reports

The world of cybersecurity is often dominated by discussions of nation-state hackers and their activities. However, the identities of these hackers aren’t always revealed in security reports. Microsoft and OpenAI, the partners behind ChatGPT technology, have taken a different approach by publicly naming the hacker groups they have identified as using generative AI services like ChatGPT to target the United States and other democracies.

In reports released by Microsoft and OpenAI, several well-known hacker groups from Russia, North Korea, Iran, and China are mentioned. These groups have been active in various fields and have now started exploring the use of generative AI to enhance their capabilities for cyber warfare.

It’s important to note that these countries would likely deny any involvement in such attacks or accusations related to cybersecurity. However, the reports provide interesting insights into the actions of these nation-state players. Microsoft’s report, in particular, offers detailed information on how each hacker group utilized products like ChatGPT for malicious purposes.

Microsoft highlights Forest Blizzard (STRONTIUM), a Russian military intelligence group that used AI to research satellite communications and radar imaging technology. Emerald Sleet (THALLIUM), a North Korean group that was highly active last year, focused on spear-phishing attacks against specific individuals. Crimson Sandstorm (CURIUM), connected to Iran's Islamic Revolutionary Guard Corps, targeted sectors such as defense, maritime shipping, healthcare, and technology.

China is also named, with two hacker groups: Charcoal Typhoon (CHROMIUM) and Salmon Typhoon (SODIUM). Charcoal Typhoon targeted government institutions, higher education, communications infrastructure, and the oil and gas industry across Asian countries and France. Salmon Typhoon focused on the US, targeting defense contractors, government agencies, and the cryptographic technology sector, while evaluating how effective LLMs are at sourcing sensitive information.

What stands out in Microsoft’s coverage is how rarely it mentions ChatGPT or Copilot by name, even though these are the primary generative AI products from OpenAI and Microsoft. Nevertheless, it is clear that the attackers used ChatGPT, including its integration into Copilot.

OpenAI’s blog post provides further insight into how these groups exploited ChatGPT. However, it’s important to note that both Microsoft and OpenAI have implemented security measures to prevent malicious activities. While hackers may attempt to create fake accounts, the information collected during interactions can be used to take action against them.

The reports from Microsoft and OpenAI shed light on the abuse of ChatGPT by nation-state attackers. They also serve as a reminder of the evolving landscape of warfare in the era of AI. Both sides are just getting started in exploring the potential of AI in cyber conflicts.

These reports not only raise awareness but also provide reassurance that companies like Microsoft and OpenAI are actively monitoring and taking measures to prevent the misuse of their generative AI technologies. As the world becomes increasingly dependent on AI, understanding these risks is crucial for ensuring a safer digital environment for all.
