AI doomerism: The elephant in the AI room
Will AI lead us to the Promised Land or will it be the highway to Hades? Time to find out.
“I consider myself an AI doomer.” It took only three minutes of speaking with Kevin Baragona, co-founder and CEO at DeepAI, to hear him say those words during our conversation three weeks ago. Baragona plants himself firmly on the side of industry leaders like Apple co-founder Steve Wozniak, American AI researcher Eliezer Yudkowsky, Elon Musk, CEO of Tesla, SpaceX, and Twitter, and Geoffrey Hinton, the acclaimed “Godfather of AI,” all of whom believe that continued, uncontrolled development of AI models could hasten the apocalypse.
For a man who says DeepAI was “the first company to host a text-to-image generator and scale globally,” that might seem like a contradictory stance, but Baragona is not alone. He joins several thousand other AI experts and leaders across the globe, some of whom lead AI companies and have themselves invested massively in the AI boom, who believe AI poses “existential risks” that could spell doom for the human race.
Writing about AI can be quite a difficult task, especially when considering the paradox of it all: the good of AI can be used for bad, to put it loosely. Even Casey Newton, the renowned author of Platformer, agrees with me. ChatGPT, for example, can help researchers do their work faster, but it can also help unsophisticated threat actors write malicious code for launching sophisticated ransomware attacks. That is just one example of the paradox, but it illustrates the point.
Yet, as the industry divide over the dangers and benefits of AI widens, I feel compelled to explore it. My goal, in the end, is to find the bridge of balance that experts like Gary Grossman, senior vice president at Edelman and global lead of the Edelman AI Center of Excellence, have so often talked about. Will AI lead us to the Promised Land or will it be the highway to Hades? Time to find out.
Prophecies of the end
Talk of the “existential risks of AI” has ranged from mild caution about deploying large language models (LLMs) beyond GPT-4 to apocalyptic prophecies of the end of life as we currently know it. In fact, on March 29, 2023, Yudkowsky wrote in TIME that “literally everyone on Earth will die” if the development of AI systems isn’t halted now. While Yudkowsky’s claims might sound preposterous, several others share the same pessimism.
“The future is obviously very hard to predict. But what we’re creating here is a technology that is likely to be smarter than humans and that is likely to take over our society. And it's incredibly hard to predict what that will do. But an exponential technology that is smarter than people is quite likely to be a natural threat to humanity. I fully agree with that,” Baragona told me.
In the previous edition of Brainbox, I wrote that the open letter by the non-profit Future of Life Institute called for “a six-month pause to the training of AI systems more powerful than GPT-4.” As of the time of writing, the letter has gathered 33,003 signatures, including those of Elon Musk, Steve Wozniak, Stability AI CEO Emad Mostaque, Turing Award winner Yoshua Bengio, Forward Party founder and former US presidential candidate Andrew Yang, and others.
“No one really knows how to stop the threat. You know, like the others, I’ve called for a six-month pause on advanced AI research. I think that’s a really good first step,” said Baragona. But, he noted, it’s clear that no pause will be happening right now, with big tech racing for leadership in the AI domain and many companies releasing their own LLMs.
Yudkowsky, however, declined to sign the open letter on the basis that it is “asking too little.” For him, the threat of AI should be “considered a priority above preventing a full nuclear exchange,” as reported by The Sun, because we aren’t ready for the “total loss” that the age of super-intelligent machines could usher in. It’s like “the 11th century trying to fight the 21st century,” he wrote.
AI paranoia is a distraction
On the other side of AI doomerism are the people I describe as ‘the AI optimist’ or ‘the AI practicalist’: those who believe the paranoia about extinction serves mainly to keep AI in the conversation. The AI optimist/practicalist argues that AI products were harming humans long before now; we have already seen several AI models that are sexist, discriminatory, inaccurate, and biased. So, according to many AI practicalists, the heightened rhetoric about an impending doomsday distracts us from the need to start addressing those problems right now. Rather than fear, the optimist wants us to leverage AI safely.
Venture capitalist Marc Andreessen published a long missive on the AI debate in which he describes the fears of doomers as irrational “moral panic.” He also advocates a return to the tech industry’s “move fast and break things” approach of yesteryear, writing that both big AI companies and startups “should be allowed to build AI as fast and aggressively as they can” and that the tech “will accelerate very quickly from here— if we let it.”
“AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. AI is a machine— it’s not going to come alive any more than your toaster will,” Andreessen wrote.
“In my view, it’s good that the general public is fascinated and excited by the scientific advances that we’re making. The unfortunate thing is that the scientists as well as the policymakers, the people who are making decisions or creating these advances, are only being either positively or negatively excited by such advances, not being critical about it,” Kyunghyun Cho, a prominent AI researcher and an associate professor at New York University, said in an interview with VentureBeat.
“I’m disappointed by a lot of this discussion about existential risk; now they even call it literal ‘extinction.’ It’s sucking the air out of the room,” Cho bemoaned.
The doctrine of equilibrium
I argue for balance between both sides. As Grossman wrote for VentureBeat, tech leaders and stakeholders in the fast and furious world of AI must find a balance between harnessing AI’s potential and mitigating its dangers.
No matter what side of the divide you’re on, doomer or believer, there is industry consensus on one thing: AI tools must be deployed safely and without bias. “Leaders who know the technology best should help to dispel misguided fears and refocus discourse on the current challenges at hand,” wrote Aidan Gomez, CEO and co-founder at Cohere, in an article on VentureBeat.
The words of Gomez ring true: “This technology is the most exciting and impactful of the coming decades. It’s crucial for us to have constructive and open conversations about the potential ramifications, but it’s equally important that the dialogue is sober and clear-eyed and for the public discourse to be led by reason.”
I end with Grossman’s analogy of fire, which I consider an excellent description of the current debate over whether to leverage AI or shut it all down:
“There have been many mishaps in handling fire, and these still occasionally occur. Fortunately, society has learned to harness the benefits of fire while mitigating its dangers through standards and common sense. The hope is that we can do the same thing with AI before we are burned by the sparks of artificial general intelligence (AGI).”