Do we already know all the most important causes?

I spent last weekend at the Effective Altruism Global conference at the Google HQ in Mountain View. I have a whole bunch of thoughts on the experience, which I expect to gradually post over the next couple of weeks. But in today’s post, I want to address a worry that came up a couple of times at the conference: the worry that the EA movement could prematurely adopt a rigid view of what counts as “EA causes.”

In particular, someone mentioned that when people talk about EA cause areas, the same four always seem to come up: global poverty, animal advocacy, the long-term future, and meta-Effective Altruism. These seem to have gotten codified as the main EA cause areas by a blog post Luke Muehlhauser wrote after the first EA summit two years ago. How likely is it, the challenge goes, that we figured out the most important causes to work on two years ago?

Actually, it’s not clear to me that this is that unlikely at all. Let’s start with the first two, global poverty and animal advocacy. Plausibly, prioritizing either or both of these two causes is just a natural outgrowth of expanding your circle of moral concern (a la Peter Singer) to cover all sentient beings.

In America, most people agree that all humans in America are equally deserving of our moral concern, regardless of race, gender, sexual orientation, and so on. But they care little for the welfare of farmed animals. Their actions suggest they don’t care much about foreigners either, though they may be reluctant to admit this. Abandoning those biases naturally points towards emphasizing those whose needs are neglected by the mainstream: animals and the global poor.

If you think effective altruism is good, then “meta” EA, trying to spread EA and figure out how to make it even more effective, seems like obviously a good thing too. It’s less obvious why the long-term future should be an especially high bang-for-your-buck cause. Most people do seem to care a lot about future generations, and to the extent the long-term future seems to be neglected, this may be because it’s hard to influence. Still, the future is undeniably important, so it’s not surprising many EAs focus on it.

One thing to notice about these cause areas is that they’re all incredibly broad. If you try to brainstorm EA cause areas, I think you’ll find that most plausible candidates could be slotted into one of these four categories. The one caveat is that I might change “global poverty” to “the global poor,” who have to deal not only with poverty but with other problems, like harebrained military adventures conducted by rich countries. But that’s a relatively modest tweak to the standard list.

It’s possible that while these are the most important causes, other causes may occasionally compete with them in terms of impact per amount of effort (or money) spent on them. For example, Open Phil is currently investigating US criminal justice reform and US land use reform as causes, on the grounds that while they’re not super-important, they may turn out to be especially neglected and/or tractable. Even then, I suspect causes like these aren’t worth very much attention from the EA movement.

If we’re going to worry about ossification, I suspect we should worry more about getting too attached to specific sub-areas within the four big causes. This is especially true of concerns about the long-term future; “long-term future” often seems to be used almost synonymously with “worrying about AI”. In reality, the number of plausible angles one could take on trying to improve the future is enormous. So that’s what I’d pay attention to: trying to find unexplored approaches to the big problems we’ve already identified.

Correction: an earlier version of this post referred to “land reform” as one of the causes Open Phil is investigating. The term they use is “land use reform,” by which they mean reform of US zoning laws.


2 thoughts on “Do we already know all the most important causes?”

  1. Part of me suspects that the dichotomy you describe is itself harmful. Both people and animals will plausibly exist 2, 20, 200, and 2000+ years from now. Depending on if & how you discount, maybe you care less about beings farther out. But regardless, the right strategy is to sum up estimated effects of any particular altruistic intervention over all time periods.

    Predicting the impact of altruistic interventions more than 2 years out seems difficult. But if we care about beings more than 2 years out we should learn to do this. Otherwise we are at best undershooting our potential as a movement and at worst pushing interventions that do short-term good and long-term harm or interventions that seem like they should do long-term good but in fact end up doing long-term harm.


    • I’d be more sympathetic to the view that the future is unpredictable if we had a long history of trying to intelligently discover, test, and apply prediction best practices unsuccessfully. But actually it seems like a lot of the more interesting work on predictions has happened relatively recently. E.g. Tetlock’s *Expert Political Judgment* was published in 2005. PredictionBook.com is still an obscure website, not a cultural force like the New York Times or the Billboard Top 100. So it seems plausible there’s low-hanging fruit here.

      Also, to give an example of how the “poverty/animals/far future/meta” classification can be a limiting one: If we better understood what makes for a quality institution and how to improve an institution to make it higher quality, that would be helpful for poverty (bad institutions are implicated in state failure), far future (wiser institutions should deal with future techs better), and meta (via improved EA institutions). So seeing the 4 causes as separate domains could mean missing promising interventions.

      An alternative classification for EA interventions: benevolence interventions, wisdom interventions, and power interventions. Benevolence interventions are interventions that make people more benevolent in their values. Wisdom interventions are interventions that make benevolent people better at understanding the world, so they’re able to achieve their values in a way that doesn’t backfire. Power interventions are interventions that make benevolent & wise people powerful. 80,000 Hours is a power intervention if you grant that EAs are wiser & more benevolent than average. (Most of their advice is about how to accumulate career capital, not what to do with it.) I’ve seen relatively few EAs think along benevolence intervention/wisdom intervention lines.

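A minimal sketch of the kind of sum the first commenter describes: per-period effect estimates added together across time horizons, optionally discounted for beings farther out. The function name and all of the numbers below are illustrative placeholders, not anything from the post or the comments.

```python
# Toy sketch: total estimated effect of an intervention, summed over
# time periods, with an optional annual discount rate. All values are
# made-up placeholders for illustration.

def total_estimated_effect(effects_by_period, annual_discount_rate=0.0):
    """Sum per-period effect estimates, discounting later periods.

    effects_by_period: iterable of (years_from_now, estimated_effect) pairs.
    annual_discount_rate: 0.0 counts future beings equally; larger values
    weight near-term effects more heavily.
    """
    return sum(
        effect / (1 + annual_discount_rate) ** years
        for years, effect in effects_by_period
    )

# Hypothetical intervention with effects 2, 20, 200, and 2000 years out.
effects = [(2, 10.0), (20, 8.0), (200, 40.0), (2000, 40.0)]
print(total_estimated_effect(effects))                             # no discounting
print(total_estimated_effect(effects, annual_discount_rate=0.03))  # 3% per year
```

With no discounting the distant-future terms dominate the total; with even a modest discount rate they contribute almost nothing, which is the commenter’s point about how much the choice of whether and how to discount matters.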
