Published on February 15, 2024, 1:38 am

Title: Adversaries Exploit Microsoft's AI Technology for Offensive Cyber Operations, U.S. Warns

Microsoft has reported that its generative artificial intelligence (AI) technology is being utilized by U.S. adversaries, including Iran and North Korea, for offensive cyber operations. The tech giant, in collaboration with OpenAI, revealed that it has identified and disrupted the malicious use of their AI technologies by shutting down the accounts involved.

In a blog post, Microsoft stated that although the techniques employed were at an early stage and neither particularly novel nor unique, it was important to expose them publicly. U.S. adversaries are exploiting AI-powered large-language models to strengthen their ability to breach networks and conduct influence operations.

While cybersecurity firms have long employed machine learning for defense purposes, cybercriminals and offensive hackers have also adopted this technology. The advent of large-language models like OpenAI’s ChatGPT has elevated the game of cat-and-mouse between defenders and attackers.

Microsoft, which has invested heavily in OpenAI, announced the findings alongside a report warning that generative AI could supercharge malicious social engineering, enabling more sophisticated deepfakes and voice cloning. With more than 50 countries holding elections this year, such disinformation campaigns pose a growing threat to democracy.

Microsoft provided several examples of U.S. adversaries whose use of its generative AI services was detected and disrupted:

1. The North Korean cyberespionage group Kimsuky used these models to research foreign think tanks studying North Korea and generate content suitable for spear-phishing hacking campaigns.
2. Iran’s Revolutionary Guard leveraged large-language models for social engineering, troubleshooting software errors, and studying methods of evading detection in compromised networks.
3. The Russian GRU military intelligence unit Fancy Bear applied these models to investigate satellite and radar technologies potentially relevant to the conflict in Ukraine.
4. Aquatic Panda, a Chinese cyberespionage group operating across industries, higher education institutions, and governments from France to Malaysia, interacted with the models in ways suggesting exploratory uses for augmenting their technical operations.
5. The Chinese group Maverick Panda, which has targeted U.S. defense contractors and other sectors for over a decade, engaged with large-language models to evaluate their effectiveness as sources of information on sensitive topics, high-profile individuals, regional geopolitics, U.S. influence, and internal affairs.

OpenAI, in its own blog post, noted that its current GPT-4 model offers only limited capabilities for malicious cybersecurity tasks beyond what is already achievable with existing non-AI-powered tools. Experts, however, predict that this will change.

Last year, Jen Easterly, the director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), identified artificial intelligence as one of the two defining threats and challenges faced by the United States, alongside China. Ensuring AI is built with security in mind is crucial to address these concerns.

Critics have expressed reservations about the public release of ChatGPT and subsequent releases from competitors like Google and Meta. They argue that security was an afterthought during development and that bad actors are now leveraging these large-language models.

Cybersecurity professionals have urged Microsoft to prioritize making large-language models more secure by design rather than selling defensive tools to address their vulnerabilities. As AI and large-language models continue to evolve, they could become formidable offensive weapons for nation-state militaries.

Generative AI clearly has significant implications for offensive cyber operations by U.S. adversaries. Continued research and investment in cybersecurity will be essential to stay ahead of emerging threats powered by artificial intelligence.

Disclaimer: This article contains aggregated content from an unnamed source.
