I spent last weekend at the Effective Altruism Global conference at the Google HQ in Mountain View. I have a whole bunch of thoughts on the experience, which I expect to gradually post over the next couple of weeks. But in today’s post, I want to address a worry that came up a couple of times at the conference: the worry that the EA movement could prematurely adopt a rigid view of what counts as “EA causes.”
In particular, someone mentioned that when people talk about EA cause areas, the same four always seem to come up: global poverty, animal advocacy, the long-term future, and meta-Effective Altruism. These seem to have gotten codified as the main EA cause areas by a blog post Luke Muehlhauser wrote after the first EA summit two years ago. How likely is it, the challenge goes, that we figured out the most important causes to work on two years ago?
Actually, it’s not clear to me that this is so unlikely. Let’s start with the first two, global poverty and animal advocacy. Plausibly, prioritizing either or both of these causes is just a natural outgrowth of expanding your circle of moral concern (à la Peter Singer) to cover all sentient beings.
Most Americans agree that all humans in America are equally deserving of our moral concern, regardless of race, gender, sexual orientation, and so on. But they care little for the welfare of farmed animals. Their actions suggest they don’t care much about foreigners either, though they may be reluctant to admit this. Abandoning those biases naturally points toward emphasizing the interests of those the mainstream neglects: animals and the global poor.
If you think effective altruism is good, then “meta” EA, trying to spread EA and make it even more effective, seems like an obviously good thing too. It’s less obvious why the long-term future should be an especially high bang-for-your-buck cause. Most people do seem to care a lot about future generations, and to the extent the long-term future is neglected, that may be because it’s hard to influence. Still, the future is undeniably important, so it’s not surprising that many EAs focus on it.
One thing to notice about these cause areas is that they’re all incredibly broad. If you try to brainstorm EA cause areas, I think you’ll find that most plausible candidates can be slotted into one of these four categories. The one caveat is that I might change “global poverty” to “the global poor,” who have to deal not only with poverty but with other problems, like harebrained military adventures conducted by rich countries. But that’s a relatively modest tweak to the standard list.
It’s possible that while these are the most important causes, other causes may occasionally compete with them in impact per unit of effort (or money) spent. For example, Open Phil is currently investigating US criminal justice reform and US land use reform as causes, on the grounds that while they’re not super-important, they may turn out to be especially neglected and/or tractable. Even so, I suspect causes like these aren’t worth very much of the EA movement’s attention.
If we’re going to worry about ossification, I suspect we should worry more about getting too attached to specific sub-areas within the four big causes. This is especially true of concerns about the long-term future: “long-term future” often seems to be used almost synonymously with “worrying about AI,” when in reality the number of plausible angles one could take on improving the future is enormous. So that’s what I’d pay attention to: finding unexplored approaches to the big problems we’ve already identified.
Correction: an earlier version of this post referred to “land reform” as one of the causes Open Phil is investigating. The term they use is “land use reform,” by which they mean reform of US zoning laws.