European officials warned X on Tuesday that the company formerly known as Twitter appears to have been hosting misinformation and illegal content about the war between Hamas and Israel, in potential violation of the European Union’s signature content moderation law.
In a letter addressed to X owner Elon Musk, Thierry Breton, a top European commissioner, said X faces “very precise obligations regarding content moderation” and that the company’s handling of the unfolding conflict so far has raised doubts about its compliance.
As a platform subject to Europe’s Digital Services Act (DSA), X could face billions in fines if regulators conclude that violations have occurred. X didn’t immediately respond to a request for comment.
The warning letter highlights X’s potentially vast legal exposure as it battles a wave of bogus claims linked to the war, ranging from fake White House press releases to false news reports and out-of-context videos from unrelated conflicts or even video games.
Much of the problematic content appears to stem from platform changes made under Musk’s supervision, Breton suggested in the letter, which he shared on X.
For example, he wrote, X announced over the weekend that it was making it easier for accounts to qualify for newsworthiness exceptions to its platform rules. Under the change to X’s Public Interest Policy, accounts no longer need a minimum of 100,000 followers to qualify; they need only be “high profile” accounts that, as before, represent current or potential government officials, political parties or political candidates.
Removing the follower threshold and replacing it with a celebrity standard leaves it “uncertain” what content, particularly “violent and terrorist content that appears to circulate on your platform,” will be removed, Breton wrote.
Under the DSA, which became enforceable for large platforms in August, companies must also act swiftly when officials flag content that violates European laws, something X may not be doing, Breton warned.
“We have, from qualified sources, reports about potentially illegal content circulating on your service despite flags from relevant authorities,” Breton wrote.
“I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” he added.
In an exchange on X, Musk replied to Breton. “Our policy is that everything is open source and transparent, an approach that I know the EU supports,” Musk wrote. “Please list the violations you allude to on X, so that that the public can see them.”
Breton posted back: “You are well aware of your users’ — and authorities’ — reports on fake content and glorification of violence. Up to you to demonstrate that you walk the talk. My team remains at your disposal to ensure DSA compliance, which the EU will continue to enforce rigorously.”
The EU letter comes as misinformation about the conflict continues to spread widely across X.
On Tuesday, the investigative journalism group Bellingcat said a fake video designed to look like a BBC News report was circulating on social media.
The video falsely claimed Bellingcat found evidence that Ukraine had smuggled weapons to Hamas. Elliot Higgins, the founder of Bellingcat, said the report was “100% fake.”
In an effort to make the video look like a real BBC News report, its creators used graphics almost identical to what the BBC uses in its own online video reports.
The video circulated on Telegram and was shared by at least one verified account on X.
X did not remove the fake BBC News video, but it did append a small label under the video noting that it was “manipulated media.”
In response to a question about the fake video, a BBC spokesperson said, “In a world of increasing disinformation, we urge everyone to ensure they are getting news from a trusted source.”
Shayan Sardarizadeh, a BBC News reporter, wrote on X Tuesday, “The video is 100% fake.”
Since taking over, Musk has laid off large swaths of X’s content moderation and policy teams, prompting backlash from civil society groups, which have warned about an increased threat of misinformation and hate speech.
In what he called an effort to deter the creation of automated accounts, Musk also eliminated the traditional verification badges that once reassured users of an account’s authenticity, replacing them with a paid system that allows any user to receive a verification badge without undergoing an identity check. Misinformation experts have said the move undermined users’ ability to determine the credibility of any given account, particularly during a fast-moving news event.
But Musk himself has directly contributed to the chaos, at one point sharing – and then deleting – a post recommending that users follow an account that has been known to share misinformation, including a fake report earlier this year of an explosion at the Pentagon.