Paul Krugman once wrote:
There is nothing that plays worse in our culture than seeming to be the stodgy defender of old ideas, no matter how true those ideas may be. Luckily, at this point the orthodoxy of the academic economists is very much a minority position among intellectuals in general; one can seem to be a courageous maverick, boldly challenging the powers that be, by reciting the contents of a standard textbook.
I was reminded of this quote when I found Scott Alexander praising me for my alleged contrarianism:
I have immense respect for Topher Hallquist. His blog has enlightened me about various philosophy-of-religion issues and he is my go-to person if I ever need to hear an excruciatingly complete roundup of the evidence about whether there was a historical Jesus or not. His commitment to and contribution to effective altruism is immense, his veganism puts him one moral tier above me (who eats meat and then feels bad about it and donates to animal charities as an offset), and his passion about sex worker rights, open borders, and other worthy political causes is remarkable. As long as Topher isn’t talking about diet or Eliezer Yudkowsky’s personal qualities, I have a lot of trust in his judgment.
But these things I like and respect about Topher are cases where he’s willing to go his own way. He views open borders as a pressing moral imperative even though you’ll have a hard time finding more than a handful of voters, sociologists, or economists who support it…
To this I’d reply: if you want to sound like a bold contrarian on religion, go find out what the median philosophy professor thinks about the existence of God. Then, for good measure, go find out what the professors at Princeton Theological Seminary think about the historical reliability of the Bible.
Similarly, if you want to sound like a bold contrarian on immigration, go find out what economists think of the issue. As Bryan Caplan shows in The Myth of the Rational Voter, while not all economists go all the way to supporting open borders, on the whole the economics profession is dramatically more supportive of immigration than the general public.
Unfortunately, Alexander’s praise for me comes at the end of a long rant aimed at my criticisms of Eliezer Yudkowsky, one that consists largely of ad hominem, tu quoque, and generally missing the point. (I use the term ad hominem very deliberately here. As we’ll see in a moment, I mean not just personal attacks, but personal attacks as a substitute for argument.)
I started off my original post by criticizing Yudkowsky for (in his own words) trying to get people “to break their allegiance to Science” based solely on reading Yudkowsky’s opinions on interpretations of quantum mechanics, without even checking to see what other people have to say on the subject. But I’m wrong, Alexander says. He quotes Yudkowsky as saying:
Go back and look at other explanations of QM and see if they make sense now. Check a textbook. Alternatively, check Feynman’s QED. Find a physicist you trust, ask them if I got it wrong, if I did post a comment. Bear in mind that a lot of physicists do believe MWI.
This is from a blog comment, which is already a problem. I confess that I have not read literally every single comment Yudkowsky has ever made on LessWrong. But neither have his followers. If he tells them one thing in the “sequences” that everyone is always being told to read, and another thing in a blog comment, a lot of people are going to miss the caveat in the comment.
Furthermore, I checked the context, and the quote is from a reply to someone who was complaining they had no way of fact-checking Yudkowsky’s claims about quantum mechanics. In context, it’s not actually a general exhortation to do fact-checking.
As long as we’re quoting from comment threads, here’s something else Yudkowsky has said:
Did you actually read through the MWI sequence before deciding that you still can’t tell whether MWI is true because of (as I understand your post correctly) the state of the social evidence? If so, do you know what pluralistic ignorance is, and Asch’s conformity experiment?
If you know all these things and you still can’t tell that MWI is obviously true – a proposition far simpler than the argument for supporting SIAI – then we have here a question that is actually quite different from the one you seem to try to be presenting:
- I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?
If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.
(Note: SIAI is the old name for the Machine Intelligence Research Institute. “I do not have sufficient g-factor” is a pointlessly jargony way of saying “I am not smart enough.”)
As I said in my original post, Yudkowsky says lots of reasonable things, but also says lots of unreasonable ones. And unfortunately (this is something I didn’t get into in my original post, but will discuss later in this one), his followers often take their cues from the crazier things he’s said.
Next up, philosophy. Once again, I criticized Yudkowsky for treating the failure of experts to universally agree with him (this time about p-zombies) as proof that their methods were flawed. Alexander’s main response is a tu quoque argument, pointing to dismissive things I’ve said about the arguments for the existence of God given by Aquinas and Plantinga.
The problem here is that, unlike the hard problem of consciousness, whether Aquinas’ arguments work is not actually a live issue among philosophers today. Less than 15% of philosophers are theists, and of the ones that are, many don’t try to argue for the existence of God anymore, and of those that do, most won’t defend Aquinas’ specific arguments.
Alexander also says “Aquinas’ arguments convinced nearly all the brightest people in the Western world for five hundred years.” This is false; see Ockham, and most important philosophers after Descartes. More importantly, even if no one had openly disagreed, Aquinas’ heyday was a period when you could be burned at the stake for heresy. Aquinas defended this practice, and some of his philosophical opponents had to flee heresy trials.
With Plantinga, making claims about what most philosophers think is trickier, because Plantinga doesn’t claim to prove the existence of God, only to show that belief in God is reasonable. But my criticisms of Plantinga are pretty standard (I often refer people to Graham Oppy regarding his argument), and I’d guess that most philosophers would agree with me if they took some time to read up on the issue, though probably many haven’t thought it worth the time.
This is why I started off this post quoting Paul Krugman about how reciting the contents of a standard textbook can make you sound like a brilliant contrarian. Philosophy of religion is an excellent example of that. There is a wrinkle here, though: while philosophy as a whole is dominated by atheists, the philosophy of religion (PoR) sub-discipline is dominated by theists.
Dig a little deeper, and this particular mystery disappears. Most theists in philosophy of religion don’t claim to have first gotten interested in the topic and then been converted by the arguments; they got interested in PoR because they were already religious. That, and most philosophers outside PoR have a fairly low opinion of the sub-discipline. (That claim isn’t terribly controversial; PoR specialists complain about it.)
I do think there’s a lesson here: when you’re trying to understand what the experts think about an issue, it’s worth looking at opinions in more than one discipline or sub-discipline. In this case, I think it’s clear that the narrow sub-discipline has drifted off into la-la land, but of course there are other cases where the specialists have discovered something their colleagues haven’t gotten the message about yet.
Under the “philosophy” heading, there’s also the issue of Yudkowsky claiming to have a secret solution to the problem of consciousness. Alexander acts like it’s super suspicious that I say I’ve confirmed Yudkowsky wasn’t joking; if he cares, other people can confirm what I’ve said.
(ETA: You may not be able to see that last link unless you’re friends with me on Facebook. Sorry, people-I’m-not-Facebook-friends-with.)
But Alexander misunderstands me when he says I accuse Yudkowsky “of being against publicizing his work for review or criticism.” He’s willing to publish it, but only to enlighten us lesser rationalists. He doesn’t view publication as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That’s a problem.
The distrust of actual scientists is also the problem with the things Yudkowsky has said about cryonics. Alexander digs up a comment from Yudkowsky that gives an 80-90% chance of “the core technology working,” but is less confident of other claims. Alexander thinks this shows I’m wrong about Yudkowsky’s views on cryonics, but it’s actually close to what I would have expected.
The real issue, though, is not the exact numerical probability Yudkowsky assigns to cryonics. It’s that he thinks cryonics is a reason to distrust mainstream experts. I, on the other hand, realize other people may know something I don’t. If a neuroscientist sat down to write a detailed debunking of cryonics, there’s a good chance they could convince me, and I regret that I’ve been unable to find anything like that.
The weirdest part of Alexander’s reply is the section on Gary Taubes. He appears to agree with my main points about Taubes:
I do not want to defend Gary Taubes. Science has progressed to the point where we have been able to evaluate most of his claims, and they were a mixture of 50% wrong, 25% right but well-known enough that he gets no credit for them, and 25% right ideas that were actually poorly known enough at the time that I do give him credit. This is not a bad record for a contrarian, but I subtract points because he misrepresented a lot of stuff and wasn’t very good at what might be called scientific ethics. I personally learned a lot from reading him – I was able to quickly debunk the wrong claims, and the correct claims taught me things I wouldn’t have learned any other way. Yudkowsky’s reading of him seems unsophisticated and contains a mix of right and wrong claims.
Then, after saying all this, Alexander berates me at great length for misunderstanding Taubes anyway.
I think much of our disagreement is about how charitable to be to Taubes. Personally, if someone is found to have “misrepresented a lot of stuff and wasn’t very good at what might be called scientific ethics,” I’m not inclined to give them the benefit of the doubt on other things. But different strokes.
Then we get another exercise in tu quoque. Alexander quotes me as saying that:
- The causes of obesity are more complicated than just calories-in, calories-out, and
- How much we eat has an effect on weight
Then, on the basis of this, he accuses me of contradicting myself and of not understanding what mainstream nutrition experts currently think. This, of course, is silly. X can have an effect on Y without being the sole or even primary cause of Y. (Guns don’t kill people, bullets kill people. Or is it the other way around?)
There’s also a bizarre bit where I had made fun of Taubes for referring to
the sugar or corn syrup in the soft drinks, fruit juices and sports drinks that we have taken to consuming in quantity if for no other reason than that they are fat free and so appear intrinsically healthy.
Who, I asked, ever thought Coca-Cola was a health food? Alexander tries to rebut this jab by quoting some nutrition advice that recommended drinking diet soda. I realize some people think artificial sweeteners are evil, but that’s clearly not the claim Taubes was making.
Yet points like these are, in a way, irrelevant. Any mistakes I’ve made don’t change the fact that Taubes is not the kind of person you want to rely on to decide that you can’t trust scientists.
Okay, now let’s talk about the big picture. One thing I didn’t have straight in my head before was why Yudkowsky’s anti-science stance doesn’t jump out at more people. I think the reason is that much of what he says can be taken as simply saying, “science is great, but it isn’t the be-all and end-all of human knowledge, and by the way, naive falsificationism is wrong.”
Thing is, plenty of scientists and philosophers of science would agree with this. The fact that it gets presented as “rejecting science in favor of Bayes” is a bit goofy. It’s as if Yudkowsky thinks popular science books are a better guide to the essence of science than what scientists actually do. There’s a physics PhD on Tumblr who’s gotten some notoriety for criticizing LessWrong, and who has commented:
One of the things that makes science incredibly difficult for new students, for instance, is how much of the knowledge is social and institutional and not written in books or papers.
But if this were the only problem, I’d shrug my shoulders and say, “whatever.” The bigger problem is that he encourages distrust of scientists, wants to throw out the social process of science, and dismisses as “undiscriminating skepticism” the habit of asking “hold on, what’s the evidence for that?”
Alexander complains that I started off my last post talking about the LessWrong community, then focused all my attention on Yudkowsky’s writings. Well, okay, let me say it directly: I think Yudkowsky’s attitude toward actual scientists has (not surprisingly) had a huge and very negative influence on the attitudes of the community he founded.
I mean, there was a time when people avoided talking about global warming on LessWrong, because any mention of global warming would be met with shouts of “politics is the mindkiller!”, as if that were a reason not to talk about a well-established scientific result.
There’s also the broader issue of it being part of the LessWrong creed that “the world is mad,” but that LessWrong is ahead of the rest of the world in developing the art of rationality. This provides an ever-ready rationalization for dismissing anyone outside the LessWrong in-group, while defending anything anyone inside the in-group does.
Consider the neoreactionary phenomenon. If you haven’t encountered them online, consider yourself lucky. This TechCrunch article is probably about the best reasonably concise explanation you’re going to get, given that they’re a disorganized internet movement. TL;DR: neoreactionaries typically think we were all better off in the days of monarchs and white supremacy, and no, that’s not hyperbole.
So the neoreactionaries managed to gain a toehold in the LessWrong community. That’s how I first encountered them. And my immediate reaction was that they set off all my crackpot-detection systems. Like, one of the most prominent neoreactionary writers goes by Mencius Moldbug online, and the first time I tried reading him, I ran into glaring factual inaccuracies. When I pointed this out, I was told, “yeah, that’s just how Moldbug is.”
Then there’s the stuff about how leading scientists secretly know the neoreactionaries are right about the inferiority of black people, but can’t say so because academic freedom is a sham. This comes with very little in the way of evidence attached. The arguments are straight out of the creationist playbook, only more racist. (Again, none of this is hyperbole.)
At one point, I actually had an offline acquaintance who was into LessWrong messaging me on Facebook to tell me that the fact that the N-word (he didn’t use the euphemism) was taboo showed that people are irrational about race. Therefore, he said, we should suspect that maybe black people are inferior after all. When I did not respond well to this, he demanded I give him more benefit of the doubt because “you know I’m sane.”
(Did I mention that none of this is hyperbole?)
We’re talking about a small minority of LessWrongers here, but LessWrong’s distrust of mainstream scientists and scientific institutions provided fertile soil for neoreactionary ideas about how “the Cathedral” (a quasi-conspiracy that includes all of America’s top universities) is secretly working to control what people are allowed to think.
And part of what went wrong with LessWrong and the neoreactionaries was that some people who weren’t themselves neoreactionaries felt the need to be nice to them because they were part of the LessWrong in-group. Scott Alexander is exhibit A here. Here’s his position on neoreactionaries:
The object-level stuff of neoreaction is weird, but the actual movement has nothing to do with the object-level. That’s why people who believe in extreme centralization of government, people who believe in extreme decentralization of government, people who think Communism was the worst thing ever, people who think Stalin was basically right about everything, people who think the problem is too much capitalism, people who think the problem is too little capitalism, people who believe America should be for native-born Americans, people who don’t even believe America should be for humans, et cetera can all be in the same movement without really noticing or debating or caring much about their differences.
As far as I can tell, the essence is a new sort of analysis on social class that’s not motivated by attempts to prove Marx right about everything, strong awareness of the role of signaling in society, an extremely fine understanding of multipolar traps, investigation of the role of biology in human civilizations and institutions, and willingness to go many meta-levels down.
I am not sure what role the weird object level beliefs are playing except maybe as a form of hazing to keep less-than-fully-committed out, the same way religions require painful initiation rites or the renunciation of pleasant things most people don’t want to renounce. Superficial people get hung up on the object level stuff, therefore reveal themselves as superficial, and are kept out of the useful bits.
(I don’t think they actually designed that, I think it might be a memetic evolutionary useful feature)
Useful ideas I have gotten partly or entirely from them include
In fact, I can basically get arbitrarily much acclaim just by taking basic neoreactionary theories and removing the stupid object level ideas so people will read them. This is selfishly useful, though I’m probably disrupting some sort of cosmic balance somehow.
Feminism seems to be the opposite. The object level beliefs are almost entirely unobjectionable, but when you look at the meta-level beliefs it starts looking like the entire philosophy is centered around figuring out clever ways to insult and belittle other people and make it impossible for them to call you on it. http://slatestarcodex.com/2014/07/07/social-justice-and-words-words-words/ is my attempt to barely scratch the surface of this. The Meditations have more. I know you’re going to say I’m reading the wrong Tumblrs, and some particular scholarly feminist book hidden in a cave in Bangladesh and guarded by twelve-armed demons contains brilliant meta-level insights. But every time I trek to Bangladesh and slay the demons to get it, the book turns out to be more complicated techniques for insulting people and muddying issues, and then the person claims okay, fine, but there’s another even better book hidden in Antarctica and guarded by ice trolls where all the great insights lie. At this point I’m done with it.
I find object level beliefs boring, especially when they’ve already been implemented, and intelligent meta-level thinking techniques pretty much priceless. I don’t want to be the ten millionth blog saying the most recent celebrity who made a sexist comment is a bad person and I hate him so much, I want to try to spread interesting ideas that advance people’s intellectual toolkit. Hence the relative amount I focus on each.
This post is from about a year ago. Though I only discovered it recently, when someone else pointed me to it, it fits well with the impression of Alexander’s view of neoreactionaries I’ve gotten from other sources. And it makes some other things he’s said look weird. Recently, he complained:
I despair of ever shaking the label of “neoreactionary sympathizer” just for treating them with about the same level of respect and intellectual interest I treat everyone else.
Also, three months back, I wrote a post pointing out some of Alexander’s anti-feminist statements, and he wrote a response insisting I’d taken everything out of context. (And then, in the comments of that post, he accused me of trying to “ruin the reputation of EA.”)
Alexander’s claim that he’s merely treating neoreactionaries with the same respect he treats everyone else is related to another idea he pushes, the “principle of charity.” This has nothing to do with donating money; it’s the idea that you should always interpret other people’s ideas and arguments on the assumption that they’re not saying anything stupid.
This sounds like a nice idea in theory, but in practice it always seems to be applied selectively. Last year I wrote a post on LessWrong complaining about this, and Alexander (commenting as “Yvain”) surprised me by replying that yes, it has to be applied selectively, and that we should apply it selectively to people who are “smart, well-educated, have a basic commitment to rationality.”
The problem is that if we’re not giving people IQ tests and “rationality quotient” tests, people are going to fall back on being charitable to people in their own in-group. Thus, I prefer a different rule: accuracy. Unlike charity, accuracy is something we can practice even when arguing with, say, religious fundamentalists. (In fact, one of my gripes about many liberal Christians is that they misunderstand the views of more conservative Christians.)
On top of all this, I don’t think the blog posts Alexander says were inspired by neoreactionaries are actually all that good. He’s better when he’s talking about psychiatry (he’s a psychiatrist) and adjacent issues. I may explain this at greater length in the future, but this post is long enough as it is, so for now I’ll just link to a thing I wrote on the Moloch post.
Alexander claims to have “immense respect” for me, and says, “I wish he would try to help spread his own good qualities.” If he’s serious about that, then I beg him to listen when I say that most of being consistently right across a bunch of areas comes down to being good at research, particularly at understanding what the experts think about an issue.
He may think this is too boring, but I think that if you’re just an amateur in an area, figuring out what the experts think is generally going to be plenty of work all by itself. Most fields don’t have an equivalent of the PhilPapers survey. And since everyone knows “trust the experts” is a good heuristic, you typically need to spend a lot of time fact-checking people who claim to have all the experts on their side. Even experts are often guilty of exaggerating the number of other experts who agree with them.
Again, Alexander may think this is all boring. But I care less about being exciting than I do about being right.