In my post on donating to animal rights orgs, I noted that organizations that claim to be working on risks from AI are a lot less cash-starved, now that Elon Musk has donated $10 million to the Future of Life Institute. They’re also a lot less publicity-starved, with not only Musk but also Stephen Hawking and Bill Gates lending their names to the cause.
The publicity has, predictably, generated a lot of pushback (see examples here and here). And while I think the issue of AI risk is worth thinking about, I’m sympathetic to many of the points made by critics, and disappointed by the rebuttals I’ve seen. For example, Andrew Ng, who’s Chief Scientist at Baidu Research and also known for his online course on machine learning, has said:
There’s been this hype about AI superintelligence and evil robots taking over the world, and I think I don’t worry about that for the same reason I don’t worry about overpopulation on Mars… we haven’t set foot on the planet, and I don’t know how to productively work on that problem.
The above link is to a blog post by former MIRI executive director Luke Muehlhauser, whose response to Ng focuses on AI timelines. But whether human-level AI is centuries or merely decades away, it’s still true that we’re not close enough to have a clear idea of what human-level AI will be like when it does arrive. This isn’t to say possible risks from AI aren’t worth thinking about at all, but like Ng I don’t know a way to productively work on the problem, and I’m not sure anyone else does either.
But I know a lot of people on team “worry about AI” disagree, and in fact claim we should be sending vastly more money to organizations like MIRI, and that AI risk should be prioritized over other pressing issues like global poverty and factory farming. Recently, a friend actually told me he’ll be happy once 10% of world GDP is being spent trying to prevent risks from AI. And frankly, I’ve never heard anything remotely approaching a good argument for claims like this.
It’s important to distinguish the claim that we should be giving a great deal more attention to possible risks from AI from the broader claim that we should be giving a great deal more attention to concerns relating to the far future. Even granting the broader claim, why focus on AI? Why not nuclear war, or tail risks from climate change, or efforts to bring about beneficial long-term shifts in social norms and institutions? Why not building doomsday bunkers in Antarctica, for that matter?
(I mention this last one because it seems like a cheaper alternative to Elon Musk’s project of Mars colonization.)
Of course, you can argue against putting much effort into all of the cause areas I’ve just mentioned, but the question is whether the case for worrying about AI is any better than the case for worrying about those causes. Many of the same objections, such as lack of tractability and certain scenarios being arguably unlikely, apply equally to AI.
I often hear AI risk folks cite philosopher Nick Bostrom’s book Superintelligence as the definitive source for arguments for prioritizing concern about AI. But I don’t think Bostrom’s book can fill the role its fans want it to. As Bostrom himself says in the book’s preface:
Many of the points made in this book are probably wrong… I have gone to some lengths to indicate nuances and degrees of uncertainty throughout the text–encumbering it with an unsightly smudge of “possibly,” “might,” “may,” “could well,” “it seems,” “probably,” “very likely,” “almost certainly.”
Looking at the text itself, it’s the “possibly”s and “could well”s that most often accompany key points. This isn’t to say Superintelligence is a bad book, taken for what it is. But a catalog of possibilities doesn’t make for much of an argument about what issue should be humanity’s #1 priority.
Bostrom is clearly very interested in “foom” scenarios where a single AI rapidly self-improves to the point where it is able to take over the world, perhaps in a matter of days. But as economist Robin Hanson has noted:
Bostrom’s book has much thoughtful analysis of AI foom consequences and policy responses. But aside from mentioning a few factors that might increase or decrease foom chances, Bostrom simply doesn’t give an argument that we should expect foom. Instead, Bostrom just assumes that the reader thinks foom likely enough to be worth his detailed analysis.
Note that Robin is also very interested in the possible future impact of AI. But his view is that we’re more likely to see a more gradual scenario, probably driven by digital “emulations” of actual human brains. And herein lies another problem: “AI will be important for humanity’s future” is an incredibly vague prediction, covering a vast range of scenarios. What makes sense as preparation for one scenario may make no sense if you think another scenario is much more likely.
This is a problem for team “worry about AI,” because it’s hard to, say, make a case for donating to MIRI without making some fairly specific claims about the future of AI. People like Luke have tried to claim otherwise, and once upon a time, I believed them, but I no longer can. Lately, I’ve been finding that when I look closely at the arguments, people will disavow more controversial claims like “foom” one minute, then implicitly assume a significant chance of “foom” the next.
I’m not the only person to get this sense. For example, this article by blogger Nathan Taylor complains that when AI “skeptics” and “believers” argue, they often seem to end up agreeing on the substantive issues at stake. Taylor concludes that a lot of the seeming pointlessness of recent debates about AI comes from the fact that the real thing dividing people is the foom issue.
I think this is sometimes true, but not always. In some arguments the “skeptic” and “believer” aren’t that far apart, but there are people on both sides whose views are more extreme. (People who think AI should be humanity’s #1 concern vs. people who think it’s impossible in principle for anything to go wrong.) I also don’t think foom is the only questionable assumption that many members of team “worry about AI” make.
First, I think Soares picks the wrong reference class: yes, humans have a really hard time generating big social shifts on purpose. But that doesn’t necessarily mean humans have a really hard time generating math; in fact, humans have a surprisingly good track record when it comes to generating math!
Second, the assumption that the key to dealing with possible risks from AI is more-or-less straightforward math research likewise gets asserted without much in the way of argument. It’s assumptions like these that people like Soares need to actually argue for if they’re going to go around claiming people need to donate more to MIRI.