Meta urged to boost oversight of fake AI videos
Kali Hays, Technology reporter
Meta should do more to address the “proliferation” of fake content made with artificial intelligence (AI) tools on its platforms, the social media giant’s own advisors have said.
The 21-person Oversight Board raised the concerns as it rebuked the company for leaving up, without a label, an AI-generated video that claimed to show extensive damage inflicted on Haifa, Israel, by Iranian forces.
It called on the company to overhaul its AI rules, warning that an increase in fake AI videos related to global military conflicts had “challenged the public’s ability to distinguish fabrication from fact … risking a general distrust of all information.”
Meta said it would label the video at issue within seven days.
Meta launched the Oversight Board in 2020 as a semi-independent group that supervises content moderation decisions across its platforms, which include Facebook, Instagram and WhatsApp.
It frequently disagrees with Meta’s rulings, but the company has nevertheless continued to loosen its approach to policing content, raising questions about how much power the board actually wields.
The board said the firm’s handling of the Haifa video raised issues that it had flagged before about “inefficiencies in Meta’s current approach during armed conflicts”.
Currently, Meta relies largely on users to “self-disclose” when content they post is produced by an AI tool. Otherwise, it waits for someone to complain to its content moderation team, which can then decide whether to affix a label.
The board said the firm should be proactively labelling fake AI content “much more frequently”.
It said the firm’s current methods were “neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform”.
The board’s review of the issue was sparked by a video posted last June by a Facebook account based in the Philippines describing itself as a news source.
It was one of a string of fake AI videos, some pro-Israel and some pro-Iran, posted to social media after the conflict began; together they quickly collected at least 100 million views, according to a BBC analysis at the time.
Despite the Facebook video being AI-generated and showing content that was not real, and Meta receiving several user complaints about it, the company did not label the video as AI-generated or remove it.
It wasn’t until a Facebook user appealed directly to the Oversight Board, and the board took up the issue, that Meta even responded to concerns, according to the board.
The company then claimed the video, which garnered almost 1 million views, did not require any kind of label and did not need to be taken down because it did not “directly contribute to the risk of imminent physical harm.”
That is too high a bar for labelling AI-generated content, particularly when the subject is armed conflict, the board said on Tuesday, ruling that the video should have received a “high risk AI label”.
“Meta must do more to address the proliferation of deceptive AI-generated content on its platforms… so that users can distinguish between what is real and fake”, it said.
In its statement, Meta said that it would abide by the board’s suggestions the next time it encounters “identical” content that is also “in the same context” as the video the board reviewed.