AI risk at EA Global

Update: At almost the same time I posted this, Dylan Matthews posted a thing saying everything I wanted to say, much more eloquently.

Original post: The topic of AI risk got a special place at EA Global. Most of the time, if there was only one talk to go to, it was important logistics stuff, or one of the two big-picture opening and closing talks on the EA movement as a whole. But on the Saturday morning of the conference weekend, the organizers accorded the same privilege to a talk, followed by a panel discussion, on AI risk.

The talk was by Oxford philosopher Nick Bostrom, whose book Superintelligence came out last year. It expanded on his astronomical waste argument. In a nutshell, the argument says that utilitarians should focus on existential risk, because even a small reduction in X-risk has large expected value if humans colonize the stars.
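To give a rough sense of the arithmetic (the numbers here are my own illustration, not Bostrom's): suppose a star-faring humanity's future contains on the order of 10^16 lives. Then an intervention that cuts existential risk by just one in a million is worth, in expectation, 10^-6 × 10^16 = 10^10 lives, which swamps anything we could do for the present generation alone.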

The panel discussion featured Bostrom along with Nate Soares (executive director of MIRI), Elon Musk (the entrepreneur who recently donated $10 million to FLI), and Stuart Russell (one of the co-authors of a well-regarded textbook on AI). Many people I talked to complained that the panel discussion was kind of boring, but one exchange stood out:

Bostrom said he thought that, most likely, either the problem of AI safety will turn out to be easy enough that it gets solved without anyone making an extraordinary effort, or else we're doomed no matter what we do. But there's a small chance, he said, that making an extraordinary effort now will have such an enormous payoff that we should make the effort anyway.

In other words, Bostrom agreed that working on AI is a low-probability, high-impact cause. Musk reacted by asking him if he really thought the effort was most likely futile. Bostrom said yes, then joked, "For a fee I'm available as a motivational speaker."

I came to the conference with a lot of respect for Bostrom, and that exchange solidified it. He does an admirable job of trying not to be overconfident, and of admitting potentially embarrassing things about his views. (By contrast, Soares, MIRI's executive director, declined to comment on the probability that MIRI's efforts will succeed.) I found his arguments much clearer than some other arguments for worrying about AI, and I agreed with a great deal of what he said.

However, I balk at these "tiny probability of massive impact" arguments. At least, I balk at letting them totally dominate our priorities. Doing so seems to lead to too much other craziness. There's Pascal's mugging, which Bostrom, to his credit, mentioned in the talk. Or versions of Pascal's wager that only require one religion to be slightly more likely than the competition. Or the St. Petersburg paradox.

In a Q&A after his own talk, Holden Karnofsky (of GiveWell and the Open Philanthropy Project) complained that he thinks arguments like Bostrom’s make EA look bad. (“Epsilon probability of a made-up benefit” was how he described them.) Better, he said, to just say that AI looks hard, but maybe not harder than other hard things humans have done, and human extinction would be at least five times as bad as killing six billion people.

That argument, though, doesn’t get you the conclusion that AI risk is the main thing we should be focused on. The Open Philanthropy Project is interested in AI… but they’re also interested in global pandemics and factory farming and immigration reform and criminal justice reform and zoning.

Personally, I don’t mind talks like Bostrom’s. I’ve studied enough philosophy that I’m used to seeing philosophers take plausible-seeming assumptions and try to see what crazy place they can get to with them. But the talk probably shouldn’t have been given quasi-keynote status, any more than you’d want a keynote by Peter Singer about how not spending a single dollar on things without “moral significance” is the single most important thing in EA.

Correction: A previous version of this post incorrectly referred to Stuart Russell as Stuart Armstrong.

2 thoughts on “AI risk at EA Global”

  1. I think you meant Stuart Russell not Stuart Armstrong. And I think the “x-risk” stuff is bad for the movement in general. It’s hard to justify the focus on AI risk and not the much larger probabilities of serious civilization-damaging events from global warming. It’s much more likely that a lab-created super-virus (which we already know how to create) gets out than that a future super-AI gets out.
    Following the same logic that gives MIRI (a fringe organization with no technical AI accomplishments) a spot on the podium leads to a situation where the movement is dominated by quacks who can tie their work into “x-risk” in some way.

  2. Question on the intelligence explosion concept (discussed more deeply in the Matthews article that Chris links to): what is the reasoning behind the assumption that the early generations of AI will want to create more advanced AIs that will take over our world? After all, it’s the early AI’s world too. Isn’t it reasonable to assume that sentient machines aren’t going to want to create something that will destroy them any more than we want to create something that will destroy us?
