Published on February 12, 2024, 4:16 pm

The European Commission has recently released guidelines for providers of very large online platforms (VLOPs) and very large online search engines (VLOSEs) in an effort to mitigate systemic risks to electoral processes. The guidelines specifically address the risks posed by generative AI.

Generative AI refers to systems that produce new content, such as text, images, or audio, based on patterns learned from large amounts of existing data. While generative AI has numerous applications and benefits, there are concerns about its potential misuse in shaping public opinion during elections.

The European Commission’s guidelines highlight various risks associated with generative AI. For instance, these systems can be used to deceive voters or manipulate electoral processes by generating synthetic content that spreads false information about political actors or misrepresents events, polls, contexts, or narratives.

Furthermore, generative AI systems can produce false, incoherent, or fabricated information. In AI terminology, such outputs are commonly referred to as “hallucinations.” Fabricated content of this kind can distort reality and mislead voters.

To address these risks, the guidelines recommend several measures for platform providers. One is that content generated by AI systems should be easily identifiable to users, with watermarking proposed as one method for achieving this. Providers should also offer standard interfaces and user-friendly tools for tagging AI-generated content so that users can distinguish it from genuine human-created material such as articles or posts.
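As a rough illustration of what machine-readable tagging can look like in practice, the sketch below embeds an “AI-generated” label into a PNG image’s metadata using Python and the Pillow library, then reads it back. This is a deliberately simplified stand-in: the metadata keys and the generator name are hypothetical, and plain text chunks like these offer none of the tamper resistance of signed provenance standards such as C2PA.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative sketch only: the metadata keys and values below are
# hypothetical and not part of any formal standard such as C2PA.
def tag_as_ai_generated(image: Image.Image, path: str, generator: str) -> None:
    """Save an image with text chunks declaring it AI-generated."""
    info = PngInfo()
    info.add_text("ai-generated", "true")
    info.add_text("generator", generator)
    image.save(path, pnginfo=info)

def is_tagged_ai_generated(path: str) -> bool:
    """Check a PNG's text chunks for the (hypothetical) AI-generated tag."""
    with Image.open(path) as img:
        return img.text.get("ai-generated") == "true"

if __name__ == "__main__":
    synthetic = Image.new("RGB", (64, 64), "white")  # stand-in for model output
    tag_as_ai_generated(synthetic, "tagged.png", "example-model-v1")
    print(is_tagged_ai_generated("tagged.png"))  # True
```

The limitation here is the point: metadata of this kind is trivial to strip, which is why the industry is moving toward cryptographically signed manifests (as in C2PA) and watermarks embedded in the content itself.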

In line with these recommendations, some major AI companies have already taken steps towards transparency. For example, Meta (the company formerly known as Facebook) recently announced a feature for its social platforms that labels AI-generated content so users can identify it. Moreover, many leading AI companies have endorsed the C2PA standard for attaching provenance metadata to images.

Apart from identification mechanisms, the European Commission also encourages providers to ensure that AI-generated information is based on reliable sources whenever possible, to notify users of potential errors in generated content, and to minimize the generation of inaccurate information.
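To make the “reliable sources plus error notice” idea concrete, here is a minimal, hypothetical sketch of how a provider might wrap a generated answer. The function names, the source store, the keyword matching, and the disclaimer text are all invented for illustration; a production system would use a real retrieval pipeline over vetted data, not keyword overlap.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    text: str

# Hypothetical source store; a real system would query vetted, up-to-date databases.
SOURCES = [
    Source("Electoral Commission FAQ", "https://example.org/faq",
           "Polling stations open at 8am and close at 8pm."),
]

DISCLAIMER = ("Note: AI-generated answers may contain errors. "
              "Please verify against official sources.")

def answer_with_sources(question: str, draft_answer: str) -> str:
    """Attach citations to a drafted answer, or decline if nothing supports it."""
    # Crude keyword overlap as a stand-in for real retrieval and grounding.
    words = question.lower().split()
    matches = [s for s in SOURCES if any(w in s.text.lower() for w in words)]
    if not matches:
        return "No reliable source found; declining to answer. " + DISCLAIMER
    citations = "; ".join(f"{s.title} ({s.url})" for s in matches)
    return f"{draft_answer}\nSources: {citations}\n{DISCLAIMER}"

print(answer_with_sources("When do polling stations open?",
                          "Polling stations open at 8am."))
```

Declining to answer when no source matches reflects the guidelines’ emphasis on minimizing inaccurate output rather than answering at any cost.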

The guidelines also emphasize the importance of public media literacy. Strengthening people’s ability to critically evaluate content and to seek out trustworthy sources can help counteract the influence of AI-generated misinformation during elections.

Additionally, the European Commission highlights the critical role journalists and media organizations play in a functioning electoral process. Trustworthy information from diverse and reliable sources is essential, and it is crucial that journalists and media organizations adhere to well-established internal editorial standards and procedures.

As an example of generative AI gone wrong, Microsoft’s Bing Chat was criticized for providing false information about upcoming elections in Germany and Switzerland. The chatbot shared inaccurate poll results and incorrect names of party candidates. While Microsoft says it has made improvements to address these issues, the case underscores the difficulty generative AI still has in providing accurate information on critical topics like elections.

In conclusion, as generative AI technology advances, there is a need for responsible use and regulation to safeguard electoral processes from potential risks. The European Commission’s guidelines provide valuable recommendations for platform providers to mitigate these risks by ensuring transparency, accuracy, reliability, and accountability in their AI-generated content.
