Technology companies are racing to develop human-level artificial intelligence, a technology that poses one of the greatest risks to humanity. Last week, John Carmack, a software engineer and video game developer, announced that he has raised 20 million dollars to start Keen Technologies, a company devoted to building fully human-level AI. He is not the only one. There are currently 72 projects around the world focused on developing human-level AI, also known as AGI: an AI that can do any cognitive task at least as well as humans can.
Many have raised concerns about the effects that even today's artificial intelligence, which is far from human-level, already has on our society. The rise of populism and the Capitol attack in the United States, the Tigray War in Ethiopia, increased violence against Kashmiri Muslims in India, and the genocide of the Rohingya in Myanmar have all been linked to the use of artificial intelligence algorithms in social media. The recommendation algorithms on these platforms favored hateful content because they identified such posts as popular, and therefore profitable for social media companies; this, in turn, caused egregious harm. Even for current AI, then, deep concern for safety and ethics is crucial.
But cutting-edge tech entrepreneurs now plan to build far more powerful human-level AI, which will have much larger effects on society. These effects could, in theory, be very positive: automating intelligence could, for example, free us from work we would rather not do. But the negative effects could be just as large, or larger.
Oxford academic Toby Ord spent close to a decade trying to quantify the risks of human extinction from various causes, and summarized the results in a book aptly titled "The Precipice." Supervolcanoes, asteroids, and other natural causes, according to this rigorous academic work, have only a slight chance of leading to complete human extinction. Nuclear war, pandemics, and climate change rank somewhat higher. But what tops this apocalyptic ranking? You guessed it: human-level artificial intelligence.
And it's not just Ord who believes that full human-level AI, as opposed to today's relatively impotent vanilla version, could have extremely dire consequences. The late Stephen Hawking, tech leaders such as Elon Musk and Bill Gates, and AI academics such as the University of California, Berkeley's Stuart Russell have all warned publicly that human-level AI could lead to nothing short of disaster, especially if developed without extreme caution and deep consideration of safety and ethics.
And who is now going to build this extremely dangerous technology? People like John Carmack, a proponent of "hacker ethics" who previously programmed kids' video games such as "Commander Keen." Is Keen Technologies now going to build human-level AI in that same freewheeling spirit, with as little regard for safety? Asked on Twitter about the company's mission, Carmack replied: "AGI or bust, by way of Mad Science!"
Carmack's lack of concern for this kind of risk is nothing new. Before starting Keen Technologies, Carmack worked side by side with Mark Zuckerberg at Facebook, the company responsible for most of the harmful impacts of AI described earlier. Facebook applied technology to society without regard for the consequences, fully in line with its motto, "Move fast and break things." But if we are going to build human-level AI that way, the thing that gets broken might be humanity.
In the interview with computer scientist Lex Fridman in which he announced his new AGI company, Carmack showed outright disdain for anything that restricts the unfettered development of technology and the maximization of profit. According to Carmack, "Most people with a vision are slightly less effective." Regarding the "AI ethics things," he said: "I really stay away from any of those discussions or even really thinking about it." People like Carmack and Zuckerberg might be good programmers, but they are simply not wired to take the big picture into account.
If they can't, we must. A democratic society should not let tech CEOs determine the future of humanity without regard for ethics or safety. All of us, especially non-technologists, therefore have to inform ourselves about human-level AI. We have to reach a consensus on whether human-level AI indeed poses an existential threat to humanity, as most AI safety and existential risk academics say it does. And we have to work out what to do about it; some form of regulation seems inevitable. The fact that we do not yet know what manner of regulation would effectively reduce the risk should not be a reason for regulators to ignore the issue, but rather a reason to develop effective regulation with the highest priority. Nonprofits and academics can help in this process. Doing nothing, and thus letting people like Carmack and Zuckerberg determine the future for all of us, could very well lead to disaster.