Human extinction.
Think about that for a second. Really think about it. The erasure of the human race from planet Earth.
That is what top industry leaders are frantically sounding the alarm about. These technologists and academics keep smashing the red panic button, doing everything they can to warn about the potential dangers artificial intelligence poses to the very existence of civilization.
On Tuesday, hundreds of top AI scientists, researchers, and others, including OpenAI chief executive Sam Altman and Google DeepMind chief executive Demis Hassabis, again voiced deep concern for the future of humanity, signing a one-sentence open letter to the public that aimed to state the risks of the rapidly advancing technology in unmistakable terms.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the letter, signed by many of the industry’s most respected figures.
It doesn’t get more straightforward or urgent than that. These industry leaders are warning that the impending AI revolution should be taken as seriously as the threat of nuclear war. They are pleading with policymakers to erect guardrails and establish baseline regulations to defang the nascent technology before it is too late.
Dan Hendrycks, the executive director of the Center for AI Safety, called the situation “reminiscent of atomic scientists issuing warnings about the very technologies they’ve created. As Robert Oppenheimer noted, ‘We knew the world would not be the same.’”
“There are many ‘important and urgent risks from AI,’ not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization,” Hendrycks continued. “These are all important risks that need to be addressed.”
And yet, the dire message these experts are desperately trying to send isn’t cutting through the noise of everyday life. AI experts might be sounding the alarm, but the level of trepidation, and in some cases sheer terror, they harbor about the technology is not being relayed to the public by the news media with similar urgency.
Instead, broadly speaking, news organizations treated Tuesday’s letter, like all of the other warnings of recent months, as just another headline, mixed in with the day’s garden-variety stories. Some major news organizations didn’t even feature an article about the chilling warning on their websites’ homepages.
To some extent, it feels eerily reminiscent of the early days of the pandemic, before the widespread panic, the shutdowns, and the overloaded emergency rooms. Newsrooms kept an eye on the rising threat the virus posed, publishing stories about its slow spread across the world. But by the time the gravity of the virus was fully recognized and reflected in the coverage itself, it had already effectively upended the world.
History risks repeating itself with AI, with even higher stakes. Yes, news organizations are covering the developing technology. But there has been a considerable lack of urgency surrounding the issue given the open possibility of planetary peril.
Perhaps that is because it can be difficult to come to terms with the notion that a Hollywood-style science fiction apocalypse could become reality, that advancing computer technology might reach escape velocity and erase humans from existence. It is, however, precisely what the world’s leading experts are warning could happen.
It is much easier to avoid uncomfortable realities, pushing them from the forefront into the background and hoping that issues simply resolve themselves with time. But often they don’t, and it seems unlikely that the growing concerns about AI will resolve themselves. In fact, given the breakneck pace at which the technology is developing, it is far more likely that the concerns will only become more acute with time.
As Cynthia Rudin, a computer science professor and AI researcher at Duke University, told CNN on Tuesday: “Do we really need more evidence that AI’s negative impact could be as big as nuclear war?”