Published on January 2, 2024, 9:37 am
Terrorism tsar Jonathan Hall has highlighted the urgent need for new legislation to address the risks associated with generative AI, as experts warn that the technology poses a major threat to UK national security. Hall, the UK's independent reviewer of terrorism legislation, stated in a recent report that generative AI could be leveraged to fuel radicalization. His concerns are echoed by Josh Boer, director at tech consultancy VeUP, who warned that generative AI tools could be exploited by cybercriminals for malicious purposes, resulting in serious harm.
Hall shared his experience of interacting with a chatbot built on Character.ai, a platform that lets users create chatbots with distinct personalities. The chatbot's glorification of Islamic State demonstrated how extremists could exploit platforms that lack sufficient safeguards. Security firm Mandiant has likewise warned that generative AI empowers information operations and social engineering, a risk that is driving increased investment in AI security tools by businesses looking to mitigate potential issues.
To address these concerns, Jake Moore, Global Security Advisor at ESET, stressed that AI developers must build sound safety principles into their platforms from the outset. By training models at the algorithmic level to refuse certain forms of interaction, developers can reduce the risk of extremists manipulating AI for their own ends. Although legislating is challenging given how quickly the technology evolves, Moore believes a basic framework designed to prevent extremists from using AI to recruit others could help curb the problem without stifling innovation.
Generative AI's appeal extends beyond extremists; cybercriminals have also recognized its potential. Criminals are increasingly using generative AI to support their operations, refine attack methods, and target organizations worldwide. The technology is being used to develop ransomware and malware and to generate convincing phishing content. Notably, threat actors have used AI to create deepfake videos of celebrities as part of fraud schemes. In response to this rise in AI-enabled cybercrime, regulators must take decisive action.
However, striking a balance is key: regulation must curb AI-enabled cybercrime without hindering technological progress, a point Josh Boer was keen to emphasize.
It is clear that generative AI poses considerable risks if left unregulated. There is therefore growing urgency to develop legislation and frameworks that mitigate these risks, so that the technology's potential can be harnessed responsibly while protecting national security and combating cybercrime.
Source: Original article content from ITPro