Published on November 7, 2023, 12:36 am

Meta Implements Restrictions on Political Campaigns and Regulated Industries in Use of Generative AI Advertising Products

TL;DR: Meta, the owner of Facebook, has announced that political campaigns and advertisers in regulated industries will not be allowed to use its new generative AI advertising products. The decision comes amid concerns that such tools could accelerate the spread of election misinformation. The company's ad standards already prohibit ads containing content debunked by fact-checkers, but there are currently no rules specific to AI-generated content. Meta says the restriction will help it understand potential risks and build safeguards for the use of generative AI in ads on sensitive topics in regulated industries. The policy change is a significant step in addressing the challenges posed by generative AI in advertising and upholding responsible practices.

Meta, the owner of Facebook, has announced that it will not allow political campaigns and advertisers in other regulated industries to use its new generative AI advertising products. This decision comes as lawmakers have expressed concerns about the potential for these tools to accelerate the spread of election misinformation.

In an update posted on Monday night, Meta disclosed this policy change on its help center. While the company’s ad standards already prohibit ads with content that has been debunked by fact-checking partners, there are currently no specific rules regarding AI-generated content.

The note appended to several pages explaining how the tools work states, “As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features. We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries.”

This decision follows Meta’s announcement last month that it would expand access to AI-powered advertising tools for advertisers. These tools enable instant creation of backgrounds, image adjustments, and variations of ad copy based on simple text prompts. Initially available only to a select group of advertisers, they are expected to be rolled out globally by next year.

Meta is one of several tech companies racing to launch generative AI ad products and virtual assistants in response to the popularity of OpenAI’s ChatGPT chatbot. However, little information has been released about the safety measures these companies plan to implement for these systems. As such, Meta’s decision regarding political ads stands out as one of the most significant AI policy choices made by any industry player thus far.

Google, the world’s largest digital advertising company, recently launched similar generative AI ad tools for customizing images. To keep the tools out of political use, Google plans to block a list of “political keywords” from being used as prompts. It has also announced a forthcoming policy update that will require election-related ads containing synthetic content to disclose that fact.
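
Neither company has published technical details of its filter, but prompt-level keyword blocking is conceptually straightforward. The sketch below is illustrative only; the blocklist contents and function name are assumptions, not Google’s actual implementation.

    import re

    # Hypothetical, non-exhaustive list of "political keywords" used purely for illustration.
    BLOCKED_TERMS = {"election", "ballot", "candidate", "campaign", "vote"}

    def is_prompt_allowed(prompt: str) -> bool:
        """Return False if the prompt contains any blocked political keyword."""
        tokens = re.findall(r"[a-z']+", prompt.lower())
        return not any(token in BLOCKED_TERMS for token in tokens)

    # Example usage:
    print(is_prompt_allowed("sunny beach background for a shoe ad"))     # True
    print(is_prompt_allowed("crowd cheering for a candidate on stage"))  # False

A production filter would likely pair such a list with phrase matching and machine-learned classifiers, since simple exact-match lists are easy to evade.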

By comparison, TikTok and Snap, the owner of Snapchat, do not allow political ads at all, while Twitter (now known as X) has not yet introduced generative AI advertising tools.

Meta’s top policy executive, Nick Clegg, has acknowledged the need to update rules regarding the use of generative AI in political advertising. He emphasized the importance of governments and tech companies preparing for potential interference in upcoming elections using this technology. Clegg specifically highlighted election-related content that can spread across different platforms as an area of concern.

Clegg also mentioned that Meta has prohibited its user-facing Meta AI virtual assistant from generating photo-realistic images of public figures, aiming to prevent misuse. Additionally, Meta plans to develop a system to “watermark” content generated by AI, ensuring proper attribution.
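
Meta has not said how its watermarking system will work. As a toy illustration of the general idea of embedding an invisible marker directly in an image’s pixels, the sketch below writes a short tag into the least significant bits of the red channel; the tag string and function names are assumptions, not Meta’s design.

    from PIL import Image

    TAG = "AI-GENERATED"  # hypothetical marker string

    def embed_tag(in_path: str, out_path: str, tag: str = TAG) -> None:
        """Hide an ASCII tag in the red-channel least significant bits."""
        img = Image.open(in_path).convert("RGB")
        pixels = img.load()
        bits = "".join(f"{byte:08b}" for byte in tag.encode("ascii"))
        width, _ = img.size
        for i, bit in enumerate(bits):
            x, y = i % width, i // width
            r, g, b = pixels[x, y]
            pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
        img.save(out_path, format="PNG")  # lossless format preserves the bits

    def read_tag(path: str, length: int = len(TAG)) -> str:
        """Recover the hidden tag by reading the same bits back."""
        img = Image.open(path).convert("RGB")
        pixels = img.load()
        width, _ = img.size
        bits = ""
        for i in range(length * 8):
            x, y = i % width, i // width
            r, _, _ = pixels[x, y]
            bits += str(r & 1)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("ascii")

Real provenance schemes, such as the industry’s C2PA metadata standard or model-level watermarks, are designed to survive compression and editing, which this toy example does not.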

While Meta generally bans misleading AI-generated videos in all forms of content, including organic non-paid posts, it allows exceptions for parody or satire. The company’s independent Oversight Board is currently examining the wisdom of this approach, having taken up a case involving a doctored video of US President Joe Biden that Meta chose to leave up because it was not AI-generated.

Meta’s decision to bar political campaigns and advertisers in regulated industries from using its generative AI advertising products demonstrates its commitment to addressing potential risks and developing suitable safeguards for sensitive topics within these industries. This policy change marks a significant development in the industry’s efforts to navigate the challenges posed by generative AI in advertisements and uphold responsible practices.
