
Meta's New Policy on Political Advertising Transparency

Meta, the parent company of Facebook and Instagram, has introduced a new policy to improve transparency in political advertising on its platforms. Starting in January, political advertisers will be required to disclose when their ads have been digitally created or altered, including through the use of AI.

The policy applies to advertisements about politics, elections, or social issues. Advertisers must declare any digitally altered images or videos used in their ads. The aim is to curb the spread of misleading content and give users clearer information about what they are seeing.

Enforcement of the worldwide policy will rely on a combination of human fact-checkers and AI systems. When an ad is flagged as digitally altered, users will be notified, although Meta has not yet disclosed exactly how that information will be presented within ads.

Minor alterations such as cropping or color correction do not need to be declared, unless they significantly affect the message or claim made in the ad.

Meta already has rules on deepfake videos that apply to all users, not just advertisers: videos manipulated to mislead viewers by changing what a subject appears to say are removed.

Advertisers who fail to comply may have their ads rejected, and repeated failures to disclose may result in penalties.

Google recently introduced a similar policy on its platforms, reflecting a growing industry-wide commitment to transparency in political advertising. Notably, TikTok does not allow any form of political advertising on its platform.