Political advertisers must tell Meta when their ads use deepfakes.
Political advertisers will have to notify Meta when their ads on Facebook and Instagram use artificial intelligence (AI) or digital manipulation.
Although the social media company already has rules on the use of deepfakes, it says the new policy goes a step further.
Advertisements pertaining to politics, elections, or social issues will need to disclose any digitally altered images or videos starting in January.
The global policy will be overseen by a combination of AI and human fact checkers.
In a statement, Meta said this would include altering what someone said in a video, modifying photos or videos of real events, and depicting realistic-looking but fictitious people.
Users will be informed when advertisements are flagged as digitally altered. Meta did not specify how this would be presented, but it told the BBC that the information would appear in the advertisement itself.
Small adjustments like cropping or color correction are exempt from disclosure requirements for advertisers "unless such changes are consequential or material to the claim, assertion, or issue raised in the ad."
Meta already has guidelines regarding the use of deepfakes in videos that apply to all users, not just advertisers.
"Would likely mislead an average person to believe a subject of the video said words that they did not say" is how deepfakes are eliminated.
The new regulations mandate that advertisements pertaining to politics, elections, or social issues reveal any digital modifications, whether made by humans or artificial intelligence, prior to the ad going live on Facebook or Instagram.
Threads, Meta's other social media platform, follows Instagram's policies.
Meta said that ads uploaded without the required disclosure will be rejected, and advertisers who repeatedly fail to disclose digital alterations may face penalties.
A similar policy was recently announced by Google on its platforms. TikTok prohibits political advertising.
In 2024, general elections are anticipated in several of the largest democracies in the world, such as the US, UK, Indonesia, and India.
Elections are also planned in the EU, South Africa, and Russia that year.
Deepfakes, in which AI is used to alter someone's words or actions in a video, are becoming a major political concern.
A phony photo purporting to show former US President Donald Trump being arrested was circulated on social media in March. AI tools were used to create the image.
During the same month, a deepfake video purporting to show Ukrainian President Volodymyr Zelensky discussing capitulating to Russia went viral.
In July, however, a video of US President Joe Biden that had been the subject of unfounded rumors that it was a deepfake was shown to be real.