Editor’s Note: Jessica J. González is the co-CEO of Free Press, a media advocacy organization, and the co-founder of the Change the Terms coalition. The views expressed in this commentary are her own. View more opinion on CNN.


On Wednesday, Meta justified its decision to restore Donald Trump’s Facebook and Instagram accounts by claiming that the risk to public safety “has sufficiently receded.”

It’s a statement that will come back to haunt Meta executives — and one that ignores the growing body of evidence linking Trump’s invective on social media to real-world political violence.


Earlier this month, a draft report from the House of Representatives’ Jan. 6 select committee was leaked to the press. The 122-page document, “Social Media & the January 6th Attack on the US Capitol,” concludes that the risk former President Trump poses on social media “has not abated.”

Meta at one point agreed. When it first suspended Trump on Jan. 6, 2021, the company said that his posts about the insurrection “contribute to, rather than diminish, the risk of ongoing violence.” In upholding the decision, Facebook’s Oversight Board later added that Trump’s posts — especially those denying the election results — “created an environment where a serious risk of violence was possible.”

The company’s decision to de-platform Trump and his allies appears to have had the desired effect. Following the former president’s departure from mainstream social media sites, one comprehensive study found that online discussions about election disinformation declined by 73%.

The select committee’s draft report found that Trump and his supporters used Facebook, among other social media platforms, to closely track “his claims about a stolen election and subsequently his calls to descend on DC to protest the Joint Session of Congress on January 6th, 2021.”

The draft report also condemned Facebook for its “refusal to adequately police the spread of disinformation or violent content on Stop the Steal groups despite their known nexus to militia groups.”

Since leaving office, Trump has grown even more unhinged. He has continued to spread election lies on his own platform, Truth Social, and has boosted other conspiracy theories. Recent research by the media watchdog group Media Matters for America finds that Trump amplified accounts that support the QAnon conspiracy theory 65 times during the week of the 2022 midterms, including 50 mentions on Nov. 14 and 15, just before he announced his 2024 presidential bid.

This track record should serve as a dire warning for Meta executives who think they can rein in Trump’s erratic and dangerous behavior. Meta President of Global Affairs Nick Clegg said in a blog post on Wednesday that users “should be able to hear what their politicians are saying — the good, the bad and the ugly — so that they can make informed choices at the ballot box.”

He also said the company has put in place “new guardrails” to deter public figures whose accounts were suspended in connection with civil unrest from continuing to violate its rules, including steeper penalties for repeat offenses and removal from the platform for up to two years at a time.

Yet guardrails are nothing but a fresh public relations distraction unless they are actually enforced — and Meta, unfortunately, has in recent years failed to act against rule breakers, including those who repeatedly spread hate speech and certain disinformation, according to Free Press research released just before the 2022 midterms. And under its newsworthy content policy, it can carve out exceptions for the most powerful among us.

Indeed, Clegg wrote on Wednesday that under its newsworthy content policy, Meta may leave in place Trump posts that violate its community standards, so long as the public interest in knowing about a statement outweighs any potential harm. It may then restrict their distribution, leaving them visible only on Trump’s page.

For content that does not violate its community standards but “contributes to the sort of risk that materialized on January 6,” Meta may remove the “reshare” button or stop such posts from running as ads or being recommended. But these efforts to limit users’ ability to amplify false content, such as “stop the steal” messaging, aren’t foolproof.

As CNN reported in 2021, pro-Trump groups simply altered their profiles, or changed the names of their groups, allowing them to continue to repurpose and spread Trump’s 2020 election falsehoods while blending in with permitted Facebook activity.

Basic fairness would dictate that Trump play by the same rules as the rest of us. The patterns show us that violence doesn’t just ignite overnight; the slow-burning embers of hate and lies that Trump has continued to stoke on Truth Social could spark another insurrection if they reach Meta’s massive mainstream audience.

Yet Meta still has a chance to learn from its many mistakes, though that window is quickly closing. It can start by ending special exemptions for Trump and other prominent politicians who inflame hate, incite violence and spread anti-democratic lies.

Free Press, through our work with the Change the Terms coalition, has mapped a better path forward for Meta and other social media giants. This includes adopting and enforcing model policies to reduce hate and disinformation online and prevent actual violence in the real world.


No Meta user — no matter how powerful — should be allowed to use the company’s services to engage in or facilitate hateful activities. The platform must ensure that toxic hate and disinformation are not present in any language or in any country where the company does business.

Meta executives bear full responsibility for any real-world harm that follows Wednesday’s reckless decision. They can no longer claim ignorance about the political violence that can result when they allow access to an online megaphone to a dangerous figure like Trump.