Published on February 14, 2024, 11:09 pm

Terminating State-Affiliated Threat Actors: OpenAI Collaborates with Microsoft to Protect Against Cyberattacks

We recently made the decision to terminate accounts associated with state-affiliated threat actors. Although our AI models have the potential to improve lives and address complex challenges, we recognize that some actors will try to misuse our tools for malicious purposes. State-affiliated groups in particular possess advanced technology, substantial financial resources, and skilled personnel, making them a unique risk to both the digital ecosystem and human welfare.

To combat this issue, we partnered with Microsoft Threat Intelligence to disrupt five state-affiliated actors who were attempting to exploit AI services for malicious cyber activities. These actors included two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon, an Iran-affiliated actor known as Crimson Sandstorm, a North Korea-affiliated actor known as Emerald Sleet, and a Russia-affiliated actor called Forest Blizzard. As a result of our collaboration with Microsoft, we were able to identify and terminate OpenAI accounts connected to these actors.

These threat actors primarily used OpenAI services for tasks such as querying open-source information, translating content, identifying coding errors, and running basic coding tasks. It is important to note, however, that our current models have only limited capabilities for malicious cybersecurity tasks. Previous red-team assessments conducted with external cybersecurity experts found that GPT-4 offers only incremental advancements over what is already achievable with publicly available, non-AI-powered tools.

Although our models offer only limited capabilities for malicious cybersecurity tasks, we are committed to staying ahead of evolving threats. To prevent malicious state-affiliated actors from using our platform, we have adopted a multi-pronged approach:

1. Strengthening Collaboration: We prioritize cooperation and information sharing with partners like Microsoft to detect and disrupt these threat actors.

2. Promoting Transparency: By sharing information about how we detect and mitigate these actors' activities, we aim to be transparent about how we address these issues.

It’s important to remember that the vast majority of individuals use our AI tools for positive purposes, from virtual tutors for students to apps that assist those with visual impairments. However, as with any ecosystem, a small number of malicious actors will always require constant attention. While we strive to minimize misuse by such actors, it is impossible to eliminate every instance entirely. By consistently innovating, investigating, collaborating, and sharing insights, we can make it increasingly difficult for malicious actors to go undetected within the digital ecosystem while enhancing the experience for everyone else.

For further technical details on the nature of these threat actors and their activities, please refer to the Microsoft blog post published today.
