Reply to Scott Alexander

Paul Krugman once wrote:

There is nothing that plays worse in our culture than seeming to be the stodgy defender of old ideas, no matter how true those ideas may be. Luckily, at this point the orthodoxy of the academic economists is very much a minority position among intellectuals in general; one can seem to be a courageous maverick, boldly challenging the powers that be, by reciting the contents of a standard textbook.

I was reminded of this quote when I found Scott Alexander praising me for my alleged contrarianism:

I have immense respect for Topher Hallquist. His blog has enlightened me about various philosophy-of-religion issues and he is my go-to person if I ever need to hear an excruciatingly complete roundup of the evidence about whether there was a historical Jesus or not. His commitment to and contribution to effective altruism is immense, his veganism puts him one moral tier above me (who eats meat and then feels bad about it and donates to animal charities as an offset), and his passion about sex worker rights, open borders, and other worthy political causes is remarkable. As long as Topher isn’t talking about diet or Eliezer Yudkowsky’s personal qualities, I have a lot of trust in his judgment.

But these things I like and respect about Topher are cases where he’s willing to go his own way. He views open borders as a pressing moral imperative even though you’ll have a hard time finding more than a handful of voters, sociologists, or economists who support it…

To this I’d reply: if you want to sound like a bold contrarian on religion, go find out what the median philosophy professor thinks about the existence of God. Then, for good measure go find out what the professors at Princeton Theological Seminary think about the historical reliability of the Bible.

Similarly, if you want to sound like a bold contrarian on immigration, go find out what economists think of the issue. As Bryan Caplan shows in The Myth of the Rational Voter, while not all economists go all the way to supporting open borders, on the whole the economics profession is dramatically more supportive of immigration than the general public.

Unfortunately, Alexander’s praise for me comes at the end of a long rant aimed at my criticisms of Eliezer Yudkowsky that consists largely of ad hominem, tu quoque, and generally missing the point. (I use the term ad hominem very deliberately here. As we’ll see in a moment, I mean not just personal attacks but personal attacks as a substitute for argument.)

I started off my original post by criticizing Yudkowsky for (in his own words) trying to get people “to break their allegiance to Science” based solely on reading Yudkowsky’s opinions on interpretations of quantum mechanics, without even checking to see what other people have to say on the subject. But I’m wrong, Alexander says. He quotes Yudkowsky as saying:

Go back and look at other explanations of QM and see if they make sense now. Check a textbook. Alternatively, check Feynman’s QED. Find a physicist you trust, ask them if I got it wrong, if I did post a comment. Bear in mind that a lot of physicists do believe MWI.

This is from a blog comment, which is already a problem. I confess that I have not read literally every single comment Yudkowsky has ever made on LessWrong. But neither have his followers. If he tells them one thing in the “sequences” that everyone is always being told to read, and another thing in a blog comment, a lot of people are going to miss the caveat in the comment.

Furthermore, I checked the context, and the quote is from a reply to someone who was complaining they had no way of fact-checking Yudkowsky’s claims about quantum mechanics. In context, it’s not actually a general exhortation to do fact-checking.

As long as we’re quoting from comment threads, here’s something else Yudkowsky has said:

Did you actually read through the MWI sequence before deciding that you still can’t tell whether MWI is true because of (as I understand your post correctly) the state of the social evidence? If so, do you know what pluralistic ignorance is, and Asch’s conformity experiment?

If you know all these things and you still can’t tell that MWI is obviously true – a proposition far simpler than the argument for supporting SIAI – then we have here a question that is actually quite different from the one you seem to try to be presenting:

  • I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?

If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.

(Note: SIAI is the old name for the Machine Intelligence Research Institute. “I do not have sufficient g-factor” is a pointlessly jargony way of saying “I am not smart enough.”)

As I said in my original post, Yudkowsky says lots of reasonable things, but also says lots of unreasonable ones. And unfortunately–and this is something I didn’t get into in my original post, but talk about later in this one–often his followers take their cues from the crazier things he’s said.

Next up, philosophy. Once again, I criticized Yudkowsky for treating the failure of experts to universally agree with him (this time about p-zombies) as proof that their methods were flawed. Alexander’s main response is a tu quoque argument, pointing to dismissive things I’ve said about the arguments for the existence of God given by Aquinas and Plantinga.

The problem here is that, unlike the hard problem of consciousness, whether Aquinas’ arguments work is not actually a live issue among philosophers today. Less than 15% of philosophers are theists, and of the ones that are, many don’t try to argue for the existence of God anymore, and of those that do, most won’t defend Aquinas’ specific arguments.

Alexander also says “Aquinas’ arguments convinced nearly all the brightest people in the Western world for five hundred years.” This is false; see Ockham and most of the important philosophers after Descartes. More importantly, even if no one had openly disagreed, Aquinas’ heyday was a period when you could be burned at the stake for heresy. Aquinas defended this practice, and some of his philosophical opponents had to flee heresy trials.

With Plantinga, making claims about what most philosophers think is trickier, because Plantinga doesn’t claim to prove the existence of God, only to show that belief in God is reasonable. But my criticisms of Plantinga are pretty standard (I often refer people to Graham Oppy regarding his argument), and I’d guess that most philosophers would agree with me if they took some time to read up on the issue, though probably many haven’t thought it worth the time.

This is why I started off this post quoting Paul Krugman about how reciting the contents of a standard textbook can make you sound like a brilliant contrarian. Philosophy of religion is an excellent example of that. There is a wrinkle here though: while philosophy as a whole is dominated by atheists, the philosophy of religion sub-discipline is dominated by theists.

Dig a little deeper, and this particular mystery disappears. Most theists in philosophy of religion don’t claim to have first gotten interested in the topic and then converted by the arguments; they got interested in PoR because they were religious. That, and most philosophers outside PoR have a fairly low opinion of the sub-discipline. (That claim isn’t terribly controversial; PoR specialists complain about it.)

I do think there’s a lesson here, that when you’re trying to understand what the experts think of an issue, it’s worth looking at opinions in more than one discipline or sub-discipline. In this case, I think it’s clear that the narrow sub-discipline has drifted off into lala land, but of course there are other cases where the specialists have discovered something that their colleagues haven’t gotten the message on yet.

Under the “philosophy” heading, there’s also the issue of Yudkowsky claiming to have a secret solution to the problem of consciousness. Alexander acts like it’s super suspicious that I say I’ve confirmed Yudkowsky wasn’t joking; if he cares, other people can confirm what I’ve said.

(ETA: You may not be able to see that last link unless you’re friends with me on Facebook. Sorry, people-I’m-not-Facebook-friends-with.)

But Alexander misunderstands me when he says I accuse Yudkowsky “of being against publicizing his work for review or criticism.” He’s willing to publish it–but only to enlighten us lesser rationalists. He doesn’t view it as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That’s a problem.

The distrust of actual scientists is also the problem with the things Yudkowsky has said about cryonics. Alexander digs up a comment from Yudkowsky that gives an 80–90% chance of “the core technology working,” but is less confident of other claims. Alexander thinks this shows I’m wrong about Yudkowsky’s views on cryonics, but it’s actually close to what I would have expected.

The real issue, though, is not the exact numerical probability Yudkowsky assigns to cryonics. It’s that he thinks cryonics is a reason to distrust mainstream experts. I, on the other hand, realize other people may know something I don’t. If a neuroscientist sat down to write a detailed debunking of cryonics, there’s a good chance they could convince me, and I regret that I’ve been unable to find anything like that.

The weirdest part of Alexander’s reply is the section on Gary Taubes. He appears to agree with my main points about Taubes:

I do not want to defend Gary Taubes. Science has progressed to the point where we have been able to evaluate most of his claims, and they were a mixture of 50% wrong, 25% right but well-known enough that he gets no credit for them, and 25% right ideas that were actually poorly known enough at the time that I do give him credit. This is not a bad record for a contrarian, but I subtract points because he misrepresented a lot of stuff and wasn’t very good at what might be called scientific ethics. I personally learned a lot from reading him – I was able to quickly debunk the wrong claims, and the correct claims taught me things I wouldn’t have learned any other way. Yudkowsky’s reading of him seems unsophisticated and contains a mix of right and wrong claims.

Then, after saying all this, Alexander berates me at great length for misunderstanding Taubes anyway.

I think much of our disagreement is about how charitable to be to Taubes. Personally, if someone is found out as having “misrepresented a lot of stuff and wasn’t very good at what might be called scientific ethics,” I’m not inclined to give them the benefit of the doubt on other things. But different strokes.

Then we get another exercise in tu quoque. Alexander quotes me as saying that:

  1. The causes of obesity are more complicated than just calories-in, calories-out and
  2. How much we eat has an effect on weight

Then on the basis of this he accuses me of contradicting myself and of not understanding what mainstream nutrition experts currently think. This, of course, is silly. X can have an effect on Y without being the sole or even primary cause of Y. (Guns don’t kill people, bullets kill people. Or is it the other way around?)

There’s also a bizarre bit where I had made fun of Taubes for referring to

the sugar or corn syrup in the soft drinks, fruit juices and sports drinks that we have taken to consuming in quantity if for no other reason than that they are fat free and so appear intrinsically healthy.

Who, I asked, ever thought Coca-Cola was a health food? Alexander tries to rebut this jab by quoting some nutrition advice that recommended drinking diet soda. I realize some people think artificial sweeteners are evil, but that’s clearly not the claim Taubes was making.

Yet points like these are, in a way, irrelevant. Any mistakes I’ve made don’t change the fact that Taubes is not the kind of person you want to rely on to decide that you can’t trust scientists.

Okay, now let’s talk about the big picture. One thing I didn’t have straight in my head is why Yudkowsky’s anti-science stance doesn’t jump out at more people. I think the reason is that much of what he says can be taken as simply saying, “science is great, but it isn’t the be-all end-all of human knowledge, and by the way, naive falsificationism is wrong.”

Thing is, plenty of scientists and philosophers of science would agree with this. The fact that this is presented as “rejecting science in favor of Bayes” is a bit goofy. It’s as if he thinks popular science books are a better guide to the essence of science than what scientists actually do. There’s a physics PhD on Tumblr who’s gotten some notoriety for his criticism of LessWrong, and who has commented:

One of the things that makes science incredibly difficult for new students, for instance, is how much of the knowledge is social and institutional and not written in books or papers.

But if this were the only problem, I’d shrug my shoulders and say, “whatever.” The bigger problem is when he encourages distrust of scientists, wants to throw out the social process of science, and dismisses as “undiscriminating skepticism” the habit of asking “hold on, what’s the evidence for that?”

Alexander complains that I started off my last post talking about the LessWrong community, then focused all my attention on Yudkowsky’s writings. Well, okay, let me say that I think that Yudkowsky’s attitude to actual scientists has (not surprisingly) had a huge and very negative influence on the attitudes of the community he founded.

I mean, there was a time when people avoided talking about global warming on LessWrong because any mention of global warming would be met with shouts of “politics is the mindkiller!”, as if this were reason not to talk about a well-established scientific result.

There’s also the broader issue of it being part of the LessWrong creed that “the world is mad,” but LessWrong is ahead of the rest of the world in developing the art of rationality. This provides an ever-ready rationalization for being dismissive of anyone outside the LessWrong in-group, while defending anything anyone inside the in-group does.

Consider the neoreactionary phenomenon. If you haven’t encountered them online, consider yourself lucky. This TechCrunch article is probably the best reasonably concise explanation you’re going to get, given that they’re a disorganized internet movement. TL;DR: neoreactionaries typically think we were all better off in the days of monarchs and white supremacy, and no, that’s not hyperbole.

So the neoreactionaries managed to gain a toehold in the LessWrong community. That’s how I first encountered them. And my immediate reaction was that they set off all my crackpot detection systems. Like, one of the most prominent neoreactionary writers goes by Mencius Moldbug online, and the first time I tried reading him, I ran into glaring factual inaccuracies. When I pointed this out, I was told, “yeah, that’s just how Moldbug is.”

Then there’s the stuff about how leading scientists secretly know the neoreactionaries are right about the inferiority of black people, but can’t say so because academic freedom is a sham. This comes with very little in the way of evidence attached. It’s arguments straight out of the creationist playbook, only more racist. (Again, none of this is hyperbole.)

At one point, I actually had an offline acquaintance who was into LessWrong messaging me on Facebook to tell me that the fact that the N-word (he didn’t use the euphemism) was taboo showed that people are irrational about race. Therefore, he said, we should suspect that maybe black people are inferior after all. When I did not respond well to this, he demanded I give him more benefit of the doubt because “you know I’m sane.”

(Did I mention that none of this is hyperbole?)

We’re talking about a small minority of LessWrongers here, but LessWrong’s distrust of mainstream scientists and scientific institutions provided fertile soil for neoreactionary ideas about how “the Cathedral” (a quasi-conspiracy that includes all of America’s top universities) is secretly working to control what people are allowed to think.

And part of what went wrong with LessWrong and the neoreactionaries was that some people who weren’t themselves neoreactionaries felt the need to be nice to them because they were part of the LessWrong in-group. Scott Alexander is exhibit A here. Here’s his position on neoreactionaries:

The object-level stuff of neoreaction is weird, but the actual movement has nothing to do with the object-level. That’s why people who believe in extreme centralization of government, people who believe in extreme decentralization of government, people who think Communism was the worst thing ever, people who think Stalin was basically right about everything, people who think the problem is too much capitalism, people who think the problem is too little capitalism, people who believe America should be for native-born Americans, people who don’t even believe America should be for humans, et cetera can all be in the same movement without really noticing or debating or caring much about their differences.

As far as I can tell, the essence is a new sort of analysis on social class that’s not motivated by attempts to prove Marx right about everything, strong awareness of the role of signaling in society, an extremely fine understanding of multipolar traps, investigation of the role of biology in human civilizations and institutions, and willingness to go many meta-levels down.

I am not sure what role the weird object level beliefs are playing except maybe as a form of hazing to keep less-than-fully-committed out, the same way religions require painful initiation rites or the renunciation of pleasant things most people don’t want to renounce. Superficial people get hung up on the object level stuff, therefore reveal themselves as superficial, and are kept out of the useful bits.

(I don’t think they actually designed that, I think it might be a memetic evolutionary useful feature)

Useful ideas I have gotten partly or entirely from them include

http://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/
http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
http://slatestarcodex.com/2014/07/14/ecclesiology-for-atheists/

In fact, I can basically get arbitrarily much acclaim just by taking basic neoreactionary theories and removing the stupid object level ideas so people will read them. This is selfishly useful, though I’m probably disrupting some sort of cosmic balance somehow.

Feminism seems to be the opposite. The object level beliefs are almost entirely unobjectionable, but when you look at the meta-level beliefs it starts looking like the entire philosophy is centered around figuring out clever ways to insult and belittle other people and make it impossible for them to call you on it. http://slatestarcodex.com/2014/07/07/social-justice-and-words-words-words/ is my attempt to barely scratch the surface of this. The Meditations have more. I know you’re going to say I’m reading the wrong Tumblrs, and some particular scholarly feminist book hidden in a cave in Bangladesh and guarded by twelve-armed demons contains brilliant meta-level insights. But every time I trek to Bangladesh and slay the demons to get it, the book turns out to be more complicated techniques for insulting people and muddying issues, and then the person claims okay, fine, but there’s another even better book hidden in Antarctica and guarded by ice trolls where all the great insights lie. At this point I’m done with it.

I find object level beliefs boring, especially when they’ve already been implemented, and intelligent meta-level thinking techniques pretty much priceless. I don’t want to be the ten millionth blog saying the most recent celebrity who made a sexist comment is a bad person and I hate him so much, I want to try to spread interesting ideas that advance people’s intellectual toolkit. Hence the relative amount I focus on each.

This post is from about a year ago. Though I only discovered it recently when someone else pointed me to it, it fits well with my impression of Alexander’s view of neoreactionaries that I’ve gotten from other sources. And it makes some other things he’s said look weird. Recently, he complained:

I despair of ever shaking the label of “neoreactionary sympathizer” just for treating them with about the same level of respect and intellectual interest I treat everyone else.

Also, three months back, I wrote a post pointing out some of Alexander’s anti-feminist statements, and he wrote a response insisting I’d taken everything out of context. (And then, in the comments of that post, he accused me of trying to “ruin the reputation of EA.”)

Alexander’s claim that he’s merely treating neoreactionaries with the same respect he treats everyone else is related to another idea he pushes, the “principle of charity.” This has nothing to do with donating money–it’s the idea that you should always interpret other people’s ideas and arguments on the assumption that they’re not saying anything stupid.

This sounds like a nice idea in theory, but in practice, it always seems to be applied selectively. Last year I wrote a post on LessWrong complaining about this, and Alexander (commenting as “Yvain”) surprised me by replying that yes, it has to be applied selectively, and that we should apply it selectively to people who are “smart, well-educated, have a basic commitment to rationality.”

The problem is that if we’re not giving people IQ tests and “rationality quotient” tests, people are going to fall back on being charitable to people in their own in-group. Thus, I prefer a different rule: accuracy. Unlike charity, being accurate is something we can do even when arguing with, say, religious fundamentalists. (In fact, one of my gripes about many liberal Christians is that they misunderstand the views of more conservative Christians.)

On top of all this, I don’t think the blog posts Alexander says were inspired by neoreactionaries are actually all that good. He’s better when he’s talking about psychiatry (he’s a psychiatrist) and adjacent issues. I may explain this at greater length in the future, but right now this post is long enough as it is so I’ll just link to a thing I wrote on the Moloch post.

Alexander claims to have “immense respect” for me, and says, “I wish he would try to help spread his own good qualities.” If he’s serious about that, then I beg him to listen to me when I say that I think most of being consistently right across a bunch of areas is being good at doing research, particularly being good at understanding what the experts think of an issue.

He may think this is too boring, but I think that if you’re just an amateur in an area, figuring out what the experts think is generally going to be plenty of work all by itself. Most fields don’t have an equivalent of the PhilPapers survey. And since everyone knows “trust the experts” is a good heuristic, you typically need to spend a lot of time fact-checking people who claim to have all the experts on their side. Even experts are often guilty of exaggerating the number of other experts who agree with them.

Again, Alexander may think this is all boring. But I care less about being exciting than I do about being right.


63 thoughts on “Reply to Scott Alexander”

  1. re: charity (and sorry for the caps, I dunno if this commenting system allows html or what) –

    Alexander’s analysis of the “failure modes” is laughably simplistic. He says:

    “First, you could be too charitable, and waste a lot of time engaging with people who are really stupid, trying to figure out a smart meaning to what they’re saying. Second, you could be not charitable enough by prematurely dismissing an opponent without attempting to understand her, and so perhaps missing out on a subtler argument that proves she was right and you were wrong all along.”

    In fact, it could be much worse. You could be too charitable and actually come to BELIEVE the really stupid thing. (Arguably this is what happened to Leah Libresco, whose blog is still listed under the “Various Others” section of Alexander’s linkroll.) As an individual, then, you have rational as well as pragmatic motivations to limit your charity.

    But – and this is something that charity advocates always overlook – charity is not just a question of what an INDIVIDUAL does. Charity is also something that GROUPS do (or don’t do), and, as such, it’s important to think about the SOCIAL failure modes of charity. You could, for instance, be too charitable and, through your charity, accidentally convince OTHER people to believe the really stupid thing. (This is why scientists often don’t want to even debate creationists, global warming skeptics, anti-vaxxers, etc.) You could also inadvertently create a reasoning culture in which bad arguments and/or falsehoods proliferate simply because they’re subtler than good arguments and/or the truth. (See, y’know, professional philosophy.) Plus, as you say, there’s the very real (and, I’d argue, unavoidable) risk that charity will become a means of enforcing existing group beliefs, in that people will (intentionally or otherwise) be more charitable to their compatriots than they are to outsiders (as, arguably, Alexander demonstrates himself with neoreactionaries and feminists). I could probably go on; I’ve written about this at some length.

    Now, in fairness, there are also probably more problems that you could list about what happens when you’re not charitable enough, so it isn’t as if Alexander was being overly simplistic with only one side of the situation. Still, though, it’s immensely frustrating that he’s so fantastically, consistently glib about this subject.


    • I think the idea of not wanting to engage with really bad ideas is terribly misguided. I think failure to engage is the #1 reason why people continue to go along with these crackpot ideas. BUT you should only do it if you can be effective. One of the most effective ways to determine the truth about something is to be exposed to the best arguments of both sides. Often the argument/side that is likely correct is pretty clear after that.


      • “One of the most effective ways to determine the truth about something is to be exposed to the best arguments of both sides.”

        Then why is there so little consensus in academic philosophy? Surely you know about the PhilPapers survey, so you know that, for instance, no more than about 40% of philosophers agree on such diverse topics as abstract objects, aesthetic value, epistemic justification, knowledge claims, knowledge, moral motivation, ethics, perceptual experience, personal identity, politics, proper names, and the (very stupid) zombie argument. (And if you go up to about 50% – that is, where about half of philosophers believe X and another half believe various flavors of ~X – then you get a whole slew more.) If “be[ing] exposed to the best arguments” is so effective, why is there still so much disagreement among philosophers?

        I mean, I agree that, after having been exposed to the best (or “best”) arguments, you often FEEL as though the correct position is pretty clear. But that’s not the same thing as actually ARRIVING AT the correct answer.

        Moreover, c’mon man – if you’re saying that the right thing to do is to engage with good arguments, then why are you objecting to my claim that there’s no obligation to engage with bad arguments (and that, in many cases, there may even be an obligation NOT to engage with bad arguments)? Are you really going to tell me that we need to take bad arguments very, very seriously even if there are other, good arguments available?


      • Eli,
        You have assumed that finding the truth means picking side A or side B. But if there are good arguments on both sides, then the truth is “there are good arguments on both sides”.


      • 1Z, what on Earth are you talking about? Why do you insist on actively, maliciously misinterpreting every single thing that I say?

        Look, finding the truth absolutely does mean deciding between X and ~X. That’s just logic, yes? So what’s your point? How important is it to you to find good arguments for false – even obviously, flagrantly false – conclusions (whatever “good” would even mean in that case)?

        Moreover, when have I said that it’s impossible for there to be good arguments on both (or multiple) sides? Again, if you can’t quote me as saying that, then stop inventing straw versions of my position to attack. What I’m saying is that there are NOT ALWAYS good arguments on both sides; that there is NOT ALWAYS an obligation to seek good arguments on both sides; that the normative dimension of reasoning is more complicated than either your or Cliff is giving it credit for; and so on. Either engage with the things I’m actually saying or kindly stop pretending that you have an interest in rational conversation (or, really, rationality at all).



      • “Moreover, when have I said that it’s impossible for there to be good arguments on both (or multiple) sides? ”

        Why complain about the lack of consensus in philosophy, when it is explainable by there being good arguments on both sides of a number of questions?


      • “Why complain about the lack of consensus in philosophy, when it is explainable by there being good arguments on both sides of a number of questions?”

        Yeah, okay, you’ve exceeded my explaining-simple-things-to-idiots tolerance level. You’re obviously not paying attention and you’re obviously not interested in listening, let alone thinking about things rationally. If you wanna keep talking to me, then give me the technical definition of charity that you’ve promised exists. If you can’t at least do that, then I’m done with you.


  2. Something’s wrong with link styles: on Chrome linked text overlaps with surrounding text, making it unreadable. Firefox and IE render it correctly.


    • Thanks. I use Chrome and the links look right to me, but I’ve had this problem with other WordPress themes. I wonder if you’re using an old CSS sheet? When you scroll down to the bottom of the page, what does it say the theme is? For me it says Syntax. In any case, might want to try a hard refresh of the page, see if that shakes out the old CSS sheet.


  3. I’m not sure if you view it this way, but from the outside, the very biggest cranky belief in LW appears to be the whole AI/transhumanism thing. And basically no one can argue that that isn’t central. The recent Dylan Matthews article on AI risk being featured prominently at EA Global was worrying because it showed a nominally smart and charitable group giving a lot of credence to very cranky ideas. But while worrying, it wasn’t very surprising if you’re aware of EA’s roots in LW.

    And the thing is, my pointing out transhumanism as cranky will, I’m sure, just be taken by LW people as another sign that non-rationalists are too quick to dismiss ideas they perceive as cranky. To them, it’s yet another lesson in how important it is to break allegiance to science. To me, this is yet another lesson in how glad I am I never got into LW.

    • I probably should have addressed this point explicitly. I’m pretty sure Yudkowsky’s conscious strategy with his blog posts was to break down people’s resistance to cranky-sounding ideas, so they’d be more likely to listen to him when he pitched AI.

      (I also think his spin on AI is in many ways unusually cranky for writing about AI.)

      • >(I also think his spin on AI is in many ways unusually cranky for writing about AI.)
        My opinion on Yudkowsky is that he is at his most sensible on AI risk. I find the basic arguments of Bostrom and Omohundro convincing – to the point that I have donated to MIRI. I will likely continue to do so until I see more traditional institutions funding direct AI risk research. If you have good general counterarguments to the notion of AI risk, I would love to hear them – as I’ve found most critiques unsatisfying, yet am always up for saving money.

        >I’m pretty sure Yudkowsky’s conscious strategy with his blog posts was to break down people’s resistance to cranky-sounding ideas, so they’d be more likely to listen to him when he pitched AI.

        This seems very unlikely to me. Yudkowsky strikes me as ridiculously sincere. You can argue that was the effect of his writings, but I don’t see much evidence of conscious manipulation. If, as you imply, he’s selling Kool-Aid, well, he’s drinking it too.

        One criticism of Eliezer is that the particularities of his personality have perpetuated some bad cultural memes. The classic (and correct) critique people made of LessWrong before it died was that a rational culture should focus less on individual rationality and more on institutional rationality – and this absence of a culture-eyed view, I think, can be traced back to Eliezer. Hanson’s early influence seemed almost forgotten in the later years. As an aside, Hanson’s notion that you should invest your “contrarian points” wisely and with great modesty seems like advice Eliezer should have followed – invest your contrarian points in AI risk, and leave many-worlds, diet, and monetary policy to contrarians with domain expertise.

        I came by your blog via Scott’s post – which remains (I think) the worst thing he’s ever written. He seemed to think that lampshading the ad hominem and tu quoque arguments was sufficient inoculation against faulty reasoning. However, I can’t help but think you are trying to contaminate Scott by associating him with neoreaction – every bit of inspiration he takes from them has been cleansed of racism, sexism, and all the things people hate about neoreaction. Highlighting these associations as some defence against his post is just odd.

      • Have you read my post on Bostrom’s talk at EA Global? I’m glad people are thinking about AI risk, but I don’t think it should be the dominant focus of the EA movement.
        If you really want to donate to AI risk, though, you might consider donating to FLI. Actually putting out a call for grant applications seems like the kind of thing you should be doing if you want to fund serious research.
        I don’t think Yudkowsky was being consciously conniving with the Sequences. In his head, it probably went, “people are irrationally rejecting my arguments about AI, therefore I need to teach them to be more rational so they’ll accept my arguments about AI.” I think he’s actually said as much. The problem is when his concept of “rationality” involves rejection of skepticism in the Michael Shermer sense.
        I talked about neoreaction in this post because I think it’s indicative of how the flaws in Yudkowsky’s writings have translated into flaws in the community he spawned. As for Alexander… if I didn’t explain myself clearly enough the first time, IDK, belaboring it at this point starts to feel mean.

  4. Topher,

    “We’re talking about a small minority of LessWrongers here, but LessWrong’s distrust of mainstream scientists and scientific institutions provided fertile soil for neoreactionary ideas about how “the Cathedral” (a quasi-conspiracy that includes all of America’s top universities) is secretly working to control what people are allowed to think.”

    That might be the case, but even given the vagueness of the phrase “fertile soil”, it’s not all that convincing, because there are already two established iterations of science-doubt, creationism and AGW denialism, for NRx to draw on. Plus NRx has stated roots in things like the HBD movement, which have nothing to do with LW.

    “And part of what went wrong with LessWrong and the neoreactionaries was that some people who weren’t themselves neoreactionaries felt the need to be nice to them because they were part of the LessWrong in-group. Scott Alexander is exhibit A here.”

    You state your opinion of SA’s motivation above, and then go on, as if to add support, to give an extended quote from SA where he expounds an entirely different motivation … one in terms of meta-level reasoning, not who’s friends with whom. I mean, what?

    (Disclosure: I post on LW and SSC as TheAncientGeek)

    • Maybe I should have been clearer with the first quote. I don’t think LW was one of the major original sources of neoreaction, just that LessWrong memes have the effect of encouraging people to take neoreaction more seriously than they should.
      The long Scott Alexander quote I provided was to showcase his attitude towards neoreaction. But it seems like his account of why he has those attitudes is pretty obviously… well, not wrong, but very incomplete. One of my favorite Scott Alexander posts of all time is Intellectual Hipsters and Meta-Contrarianism, but I think the reason he’s so compelling on the topic is that he has a very strong dose of the “intellectual hipster” syndrome. His love of meta (which he describes as drawing him to the neoreactionaries) comes from the sense that people who are very meta are very clever, and I think his sense of who the clever people are is very heavily influenced by signals of membership in the LessWrong in-group (and the broader nerd in-group), if that makes sense.

      • His love of meta (which he describes as drawing him to the neoreactionaries) comes from the sense that people who are very meta are very clever

        Huh, I tend to think it’s the reverse. Applying generalizable principles to a subject takes only a fairly limited skill set (granted, it’s a skill set that can be applied to a wider range of subjects): acquiring and using in-depth domain knowledge is much harder work. The ‘very clever’ scientist is the one who actually figures out how to do a difficult experiment, not the person who talks about how peer review and reproducibility are important traits in science. Both types of people contribute, but being the former takes a lot more cleverness, or at least in-depth knowledge, than the latter.

      • Does “take seriously” mean “agree with”, or maybe write a serious critique, as SA did? What’s the preferable alternative to seriousness… to mock NRx, or ignore it?

        It doesn’t seem to me that SA’s account is wrong or incomplete. And he’s not “drawn to” the NRs in the sense of actually agreeing with them at the object level… as he made clear.

  5. Eli

    “In fact, it could be much worse. You could be too charitable and actually come to BELIEVE the really stupid thing”

    Given the technical meaning of “charity”, the only way you can overshoot is by coming up with a better argument for some X than the speaker ever had in mind … but then *you have a good reason for believing X* … what’s stupid about that?

    As well as making the usual assumption that the Charity in the Principle of Charity has some vague and non-technical meaning of niceness, you seem to be assuming that rationality consists of believing in lists of true and false ideas, rather than lists of good and bad arguments.

    “Leah Labrusco”

    Can you not credit SA with thinking someone he basically disagrees with has interesting things to say? That’s kind of his whole schtick.

    • “Can you not credit SA with thinking someone he basically disagrees with has interesting things to say? That’s kind of his whole schtick.”
      Gah! Reading this makes me think I should have spent more time on the “Scott Alexander and neoreaction” issue, but I didn’t want to belabor something that struck me as painfully obvious and embarrassing to him. So the point is this:
      Scott Alexander isn’t charitable to neoreactionaries in spite of disagreeing with them. He’s charitable to neoreactionaries because he likes them. Conversely, he’s extremely uncharitable to feminists because he dislikes them.
      Maybe it was a mistake to put that point in terms of the LessWrong in-group. At that point in the post I maybe should have just shifted to talking about people Scott Alexander personally likes and dislikes.

      • “He’s charitable to neoreactionaries because he likes them. Conversely, he’s extremely uncharitable to feminists because he dislikes them.”

        Is that a fact? He’s mentioned some communication issues. Does “like them” just relabel that, or add some new information?

      • Yeah, and why does he like neoreactionaries more than feminists? Apparently because the feminists he’s encountered have reacted very poorly to his questioning their tactics, while the neoreactionaries he’s encountered appear to have been much more gracious when he’s questioned their beliefs. It doesn’t take much reading of his past work to learn this, so I have to think you knew it already. So why the whole “He likes the OBVIOUSLY HORRIBLE people and dislikes the OBVIOUSLY AWESOME people!” routine?

    • “Given the technical meaning of “charity”, the only way you can overshoot is by coming up with a better argument for same X than the speaker ever had in mind … but then *you have a good reason for believing X*…..what’s stupid about that?”

      So, one, bear in mind that we’re all imperfect. What appears to be a better argument may not actually be so. It may be a better argument; it may be the same argument in different clothing; or it may be a worse argument. In the absence of an objective signal of argument quality – which, I desperately hope you’ll agree, we are currently in the absence of – it’s the tastes of the charitable individual that, in effect, act as the judge of which arguments are better or worse. And tastes, of course, can be badly mistaken.

      Alexander’s own comment is a perfect case of precisely this phenomenon. Notice that he doesn’t say, “…perhaps missing out on a BETTER argument that proves she was right and you were wrong all along.” Instead, he says, “…perhaps missing out on a SUBTLER argument that proves she was right and you were wrong all along.” “Subtler” is not a synonym of “better.” Yet there are people – Alexander, evidently, among them – who mistake subtlety for a sign of argumentative strength. For these people, charity is not necessarily a search for the BEST argument. To at least some worrying degree, it’s actually a search for the SUBTLEST argument, and there are plenty of really stupid ideas that you can argue for in ways that are delightfully subtle. The neoreactionary thing is an excellent example.

      Two, even if the argument actually IS better, “better” does not mean “good.” “Better” could simply mean “less bad.” If, to take a convenient example, all of the arguments for god’s existence are bad, it doesn’t matter which of them you prefer; rationally speaking, you shouldn’t believe any.

      “As well as the usual assumption that the Charity in the Principle of Charity has some vague and non technical meaning of niceness, you seem to be assuming that rationality consists of believing in lists of true and false ideas, rather than lists of good and bad arguments.”

      There is no precise technical meaning of charity. No proponent of charity can coherently explain what the “best” version of an argument is. Also, you know that your second sentence here is nonsensical, right? What are arguments made up of if not ideas? There have to be premises before you can get to the conclusion, y’know.

      “Can you not credit SA with thinking some line he basically disagrees with has interesting things to say? That’s kind of thus whole schtick.”

      See, now this is another perfect example of where charity misleads. Look at what you’ve just said: “…has interesting things to say.” Note that you did not say, “…has intelligent things to say” or, “…has compelling things to say” or, “…has rational things to say.” Maybe you simply misspoke – maaaaaaybe. But maybe you actually do take interestingness as a shorthand for rationality or value. Maybe, for instance, you would judge an interesting argument to be a better argument, despite the entirely obvious fact that “interesting” is also not a synonym for “good.” Maybe, in that case, you consistently accept (or, at least, lean towards accepting) arguments that are no better simply because they happen to make your brain tingle more.

    • ““Leah Labrusco””

      Also, on a separate line of thought, I can’t help but observe that you aren’t being very charitable towards me with this kind of nonsense. What’s the point, really, of intentionally spelling her name wrong and then quoting it as if I’d spelled it wrong myself? If not to either piss me off or make me look like an idiot, why are you even bothering with this?

      Furthermore, my point in invoking her wasn’t to suggest that Alexander should refuse to read any Christian writer, no matter what that writer has to say. Your response, then, is a response to a straw-man – again, not a very charitable type of behavior. My point was merely that Alexander has witnessed firsthand an example of what excessive charity can do to a person and that, therefore, he ought to have been a little more circumspect when it comes to weighing the costs and benefits of charity.

      But let me now take a step back and try to see how much you really believe in charity, 1Z. Tell me: how many times have you tried to apply charity to the arguments against charity? Could you, say, reproduce some of those attempts for me now? What was the best anti-charity argument you’ve ever seen? I only ask because, in my personal experience, charity proponents tend to be viciously uncharitable when it comes to defending charity itself – a trend that, as I’ve just observed, you yourself are now contributing to (and which, incidentally, fits perfectly with Hallquist’s theory that charity is applied selectively based on arbitrary personal preference). Yet charity doesn’t seem to be worth much as a rule if it’s never applied to its own detractors.

      • “How many times have you tried to apply charity to the arguments against charity? Could you, say, reproduce some of those attempts for me now? What was the best anti-charity argument you’ve ever seen?”

        1. “Charitable means being nice”. By far the most common objection. Anyone can be said to fail at niceness, for some value of niceness… so anyone less than saintly who talks about the PoC is a hypocrite, ta-da! But the PoC is not a social nicety, it is a heuristic for effective communication. It has a technical meaning.

        (“Charity is a thing groups do…” is an example of taking “charity” to have the everyday meaning. Complaining that my accidental… sorry, deliberate, I always spell perfectly… misspelling of Leah Libresco’s name was uncharitable is another example of using “charity” in a catch-all sense, rather than in the technical sense. Arguing that it doesn’t have a technical sense at all, because it doesn’t have the one technical sense you are willing to guess at, is an argument I haven’t seen before. I might add it to the list if I see it again).

        2. “It doesn’t always work”.
        Well, it’s a heuristic, not an algorithm. It’s meant to improve communication, not to make it perfect. Objection one is a mistake about what the PoC is supposed to be; objection two is a mistake about how it is supposed to work.

        3. “Some people really are dumb”.
        It’s not a question of permanently changing your beliefs for no reason, it’s an exercise in “as if”. It’s also a defeasible assumption. If you fail to interpret a comment charitably, you have evidence that it was a dumb comment. If you succeed, you have evidence it wasn’t. Either way, you have evidence, and you are not sitting in an echo chamber where your beliefs about people’s dumbness go forever untested, because you reject out of hand anything that sounds superficially dumb, or was made by someone you have labelled, however unjustly, as dumb.

        4. “It’s better to just understand”.
        The PoC is a way of breaking down “understand what the other person says” into smaller steps, not something entirely different. Treating your own mental processes as a black box that always delivers the right answer is a great way to stay in the grip of bias.

        5. “People apply it selectively.”
        They perhaps don’t have much choice. This objection doesn’t really show that the PoC doesn’t do what it is supposed to do, but rather that it needs other heuristics to use where resources are constrained. Resource constraint is a real issue, unlike the other objections, so this is the best argument I’ve seen.

        6. “People should apply it selectively.”
        The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion… with the corollary of reserving some space for “I might be wrong” where you haven’t had the resources to test the hypothesis. The rationale for the PoC is that people are generally overconfident about how dumb other people are, so the optimal way of not using it should also involve a downward shift in confidence.

      • “Arguing that it doesn’t have a technical sense at all, because it doesn’t have the one technical sense you are willing to guess at…”

        Again with the straw man. If you’re so sure about this, why haven’t you simply provided the precise, technical sense that you’re saying exists? Why, instead, do you settle for launching personal attacks against me? I can point you to maybe twenty people who’ve tried and failed to provide coherent technical definitions of charity – Scott Alexander included, incidentally. How many people can you point me to who’ve succeeded?

        “Well, its a heuristic, not an algorithm.”

        Ah, now we’re getting somewhere. So what are its limitations, then? Everybody knows by now that cognitive heuristics lead to predictable blind spots, overcommitments, and the like – could you, perhaps, describe how charity leads to some of these types of shortcomings? (Because it does.) Or have you never bothered to think about that?

        “It’s meant to improve communication”

        Really? Because earlier you described it as a way of finding “a better argument,” which is NOT synonymous with “improving communication.” Which of those goals do you actually endorse? When the two conflict with one another – which happens quite a lot – which will you prioritize? Or, again, have you never thought about that?

        “…echo chamber…”

        I just want to point out how incredibly ironic it is that you’ve invoked this argument while persistently failing to interact with the substance of my objections. It’s almost as if you’re actively trying to demonstrate my point about how charity is never applied to the arguments against charity. Anyway, moving on…

        “Treating your own mental processes as a black box that always delivers the right answer”

        Who says this? Link me, please, to anybody who has ever argued against charity on the grounds that we should treat our mental processes as black boxes that always deliver the right answer.

        “5. “People apply it selectively.”
        They perhaps don’t have much choice.”

        What a fantastically glib response! Have you even considered the implications of what you’ve just confessed? If not, go back and read my and Hallquist’s remarks on the subject, because we’ve identified a few problems that deserve more than just a passing remark.

        “The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion”

        But how do you decide, then, which conversations to enter? I mean, talk about black boxes and echo chambers and whatnot – I can be as charitable as the next person if I’m allowed to pursue only those conversations in which I enjoy being charitable. (Maybe, oh I dunno, conversations that I find “interesting” or “subtle” for purely idiosyncratic, nonrational reasons?)

      • Hi Eli
        The technical meaning is:
        “In philosophy and rhetoric, the principle of charity requires interpreting a speaker’s statements to be rational and, in the case of any argument, considering its best, strongest possible interpretation.[1] In its narrowest sense, the goal of this methodological principle is to avoid attributing irrationality, logical fallacies or falsehoods to the others’ statements, when a coherent, rational interpretation of the statements is available. According to Simon Blackburn[2] “it constrains the interpreter to maximize the truth or rationality in the subject’s sayings.” — WP

        If you’re really interested in the correct technical sense, why don’t you look that up yourself?

        Heuristics have limitations, and need to be supplemented with other things. That is why both Less Wrong and traditional rationality have multiple rules, not just one master rule. Needing to be supplemented doesn’t make a rule wrong.

        “you described it as a way of finding “a better argument,” which is NOT synonymous with “improving communication.” Which of those goals do you actually endorse”

        You can improve communication by trying to find an interpretation of a claim by which it comes out as obvious; and you can improve communication by taking a non-obvious claim to mean what it appears to say, but to be supported by a non-obvious argument.

        “What a fantastically glib response! Have you even considered the implications of what you’ve just confessed? If not, go back and read my and Hallquist’s remarks on the subject, because we’ve identified a few problems that deserve more than just a passing remark.”

        Here’s my initial guess.
        The main theme seems to be that there is a list of things which are TRUE, and a list of things which are FALSE… but being too charitable means you accept or invent arguments for one of the FALSE things.

        But how does anyone know what’s true or false absent arguments or evidence? There is no oracle that lists all the true propositions. Epistemology doesn’t start with truth, it ends with it.

        So let’s try another guess. Maybe the problem with the PoC is that it means people become enamoured with one argument for A, and forget the many better arguments for not-A. That’s better epistemology, but it hasn’t got much to do with the PoC as a communicative principle. It’s bad epistemology to ignore the balance of arguments and weight of evidence… but the PoC doesn’t say to do that. The argument boils down to the PoC being dangerous when misunderstood or misapplied, but that proves far too much, since the same is true of everything else.

        Third guess: Maybe the problem with the PoC is that it turns people into decadent epistemologists, who value novel arguments or interesting arguments over strong ones. Again, that is not a normative application of the PoC.

      • That’s wiki’s definition; I’ve already seen it. It’s not technical because it fails to provide a technical meaning for the extraordinarily vague words “best” and “strongest.” Without being more specific, that could mean almost anything at all. Blackburn’s definition isn’t coherent because he can’t make up his mind between “truth” and “rationality,” and, as it turns out, you end up with wildly divergent results depending on which of those you prefer. I’m telling you, I’ve seen all of this already. It doesn’t work.

        “Needing to be supplemented doesn’t make a rule wrong.”

        No, but being incoherent does – or, rather, it makes it not really a rule at all. As happy as I am that you at least tried to provide a real definition for charity, you haven’t yet succeeded, so I won’t be addressing the rest of your comment. Try again, though, and maybe you’ll get somewhere.

      • The technical meaning that the PoC actually has is enough to show that most of the arguments against it are straw men. That’s nearly all that need be said.

        You are, again, holding it to an absolute standard, such that nothing counts as a technical definition unless it can be applied infallibly and mindlessly – an algorithm, in effect.

      • “The technical meaning that the PoC actually has is enough to show that most of the arguments against it are straw men.”

        Wow! How convenient for you that you don’t have to specifically or rigorously address any of your opponents’ criticisms! What an incredible coincidence!

        “You are, again, holding it to an absolute standard, such that nothing counts as a technical definition unless it can be applied infallibly and mindlessly – an algorithm, in effect.”

        Bullshit. Application is an entirely separate matter from definition, and I’m more than averagely aware of that fact. (Check my blog if you don’t believe me: I have an entire tag devoted to matters of “theory and practice.”) So far, you’ve failed to even DEFINE charity, let alone practice it. I might as well say that I have my own PoC, the Principle of Color, that states that every argument should be interpreted in its bluest form, i.e., either in as depressing or as jazzy a way as possible. The components are comprehensible enough on their own, but as a whole it’s nonsense. Likewise for your (really, wiki’s) definition of charity: it’s really two entirely different ideas (theories, definitions, etc.) that only appear to be one thing because you’ve naively lumped them together under a single word. Unless you can pick one – or, at the very least, establish a priori which part is relevant in which cases – it’s just not coherent, period.

      • Here, I’ll even rephrase my criticism to be more explicit so that your tiny brain can understand better: your definition of the PoC results in irremediably divergent consequences PRECISELY WHEN it’s “applied infallibly and mindlessly.” In other words, I’m not saying that your definition will be applied incorrectly because people are imperfect (although I still do hold you responsible for your wildly insufficient and glib response to the fact that any PoC will be applied incorrectly because people are imperfect; just saying “that goes for any principle” is deeply, profoundly insufficient). I’m saying that your definition, if applied CORRECTLY, will repeatedly lead down two different paths that have nothing to do with one another.

        Here, I’ll even give you an example in case you’re struggling to comprehend this in the abstract. Here’s a version of the zombie argument that Hallquist discusses above:
        1. According to physicalism, all that exists in our world (including consciousness) is physical.
        2. Thus, if physicalism is true, a metaphysically possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.
        3. In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world).
        4. Therefore, physicalism is false.

        Truth-charity would have us “improve” the argument like so:
        1. According to physicalism, all that exists in our world (including consciousness) is physical.
        2. Thus, if physicalism is true, a metaphysically possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.
        3. In fact we can’t conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world).
        4. (there is no 4; nothing of interest follows from 1-3)

        Meanwhile, logic-charity would have us “improve” it like so:
        1. According to physicalism, all that exists in our world (including consciousness) is physical.
        2. Thus, if physicalism is true, any metaphysically possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. (2a) In particular, if physicalism is true, conscious experience must exist in every such possible world.
        3. In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world).
        4. If we can conceive of X, then X is metaphysically possible. (4a) In particular, if we can conceive of a zombie world, then a zombie world is metaphysically possible.
        5. Therefore, there is at least one metaphysically possible world in which all physical facts are the same as those of the actual world but in which consciousness does not exist.
        6. Therefore, not all metaphysically possible worlds in which all physical facts are the same as those of the actual world are worlds in which consciousness exists.
        7. Therefore, physicalism is false.

        Allow me to stress, again, that these reinterpreted arguments do NOT represent a failed attempt at charity by Blackburn’s stated definition. To the contrary, each perfectly represents the two divergent branches of that definition. But it should be overwhelmingly obvious that those two altered arguments are incompatible with one another at practically every level. The first one has no conclusion and does nothing to improve the logic of the original argument; the second, meanwhile, trades on falsehoods and therefore does nothing to improve the truth of the original argument. Furthermore, neither one represents a “stronger” or “better” version of the zombie position than my original phrasing of the argument: the first doesn’t even support the desired conclusion and the second is vulnerable to the same fatal objection as the original (i.e., that we can’t actually conceive of zombie worlds).

        So you tell me: are these BOTH charitable reinterpretations of the original zombie argument that I provided? Is neither of them charitable? Is only one of them charitable? And how, exactly, is either of them a “better” or “stronger” version of the original?

      • You are treating the PoC as failing to do things it is nowhere claimed or advertised to do. It does have a technical meaning in the sense that it is not about being “nice”. It doesn’t have a technical meaning in the sense that you are assuming, where it works like an algorithm, where everything it depends on is perfectly defined, where it can be applied mindlessly, and where it always leads to a single determinate result. In fact, nothing in epistemology works that way… the baseline is nowhere near what you are assuming.

        If the PoC were some sort of algorithm that you were supposed to be able to plug into a formal system, then producing divergent results would be highly undesirable. But it isn’t. It is supposed to be a heuristic for use in debates. If you get divergent results, you can ask the other person which is closest, or pick the one you think is best for some informal definition of best… like the informal definitions of “best” you have to use when picking a “best explanation”. Unfortunately this sort of thing is ubiquitous in epistemology.

        Why would you think otherwise? You wouldn’t if you were generalising from epistemology. You appear to be generalising from something else, maybe computer science.

      • “If you get divergent results, you can ask the other person which is closest”

        That’s exactly what I did! I got divergent results and then I asked you, the other person, which was closest. And yet, instead of answering, you decided to give me a bullshit lecture on the purely semantic distinction-without-a-difference between a “heuristic” and an “algorithm.”

        So now I’m going to hold you to your own sophistry. Either answer my question (as you yourself now say that you’re obliged to do) or admit that you have not the slightest idea of what you’re talking about. I’ve had enough of your ludicrous shell games. Stake yourself to a meaningful position or stop pretending that you have anything real to say.

  6. Then there’s the stuff about how leading scientists secretly know the neoreactionaries are right about the inferiority of black people, but can’t say so because academic freedom is a sham. This comes with very little in the way of evidence attached.

    There are racial differences that are well known to researchers, but which are difficult to discuss in the public press. “Mainstream Science on Intelligence” is a case where a large group of scientists made a statement that group differences in IQ are real, that they are partly heritable, and that these conclusions are mainstream in the field despite being denied by the media.

    Nowadays, it’s easy to find very detailed syntheses of literature on group differences in the HBD blogosphere (HBD stands for “human biodiversity”). I recommend this FAQ and HumanVarieties.org. Moldbug doesn’t explain this evidence, but it’s out there. The current state of the evidence would be difficult to discuss in the public press for political reasons.

    As for “The Cathedral,” it’s practically a cliche that academia is an ideological monoculture where academic freedom fades every day. Most professors are liberal, yet even liberal professors are afraid of liberal students. We could spend all day just discussing the articles about repression in academia that have come out in entirely mainstream media. Moldbug is not the first person to criticize academia for being corrupted by politics.

    Racial differences, and the political state of the university, are both areas of intense debate. Who the crackpots are depends on the outcome of these debates.

    • IQ is partly heritable, and there are measured differences between groups. It doesn’t follow that the differences between groups are the result of heredity. (Incidentally, I’ve read The Bell Curve, and I was surprised to find that it does a very good job of explaining why this doesn’t follow.)

      College professors tend to be more liberal than the general population, but this appears to be the result of self-selection. Conservative political scientist Matthew Woessner writes:

      Quite surprisingly, whatever impact college might have on students’ academic ambitions, left-leaning first-year students begin their education with a far greater interest in eventually pursuing a doctoral degree than their conservative counterparts. Whereas liberal and conservative students have very similar grades and nearly identical levels of satisfaction with their overall college experience, right-leaning students are far more likely to select “practical” majors that are less likely to lead to advanced degrees. Their emphasis on vocational fields such as business and criminal justice permits them to move directly into the workforce.

      Other researchers like Neil Gross have found the same thing. In spite of this tendency for conservatives to self-select out of academia, calling the result an “ideological monoculture” is an exaggeration. Economists, for example, tend to be even more pro-market than the average conservative in the US.

      Professors having to fear their students is a problem, but there’s no conspiracy here. The real source of the problem is universities shifting toward treating students more like customers, and shifting away from tenure-track faculty toward adjunct professors with much less job security.

      • Since The Bell Curve was published, we have a lot of new evidence showing that intelligence has a large genetic component, beyond the heritability estimates from twin studies. I apologize for mangling the links in order to get through the spam filter.

        Half the variation in IQ is due to genes: http://blogs.discovermagazine.com/gnxp/2011/08/half-the-variation-in-i-q-is-due-to-genes/

        Genetics explains most of twin heritability estimates: http://pss.sagepub.com/content/24/4/562.full

        In summary, GCTA estimates confirmed about two thirds of twin-study estimates of heritability for cognitive abilities, using the same measures at the same age in the same sample. This finding implies that, with sufficiently large sample sizes, many genes associated with cognitive abilities can be identified using the common SNPs on current DNA arrays.

        Alleles related to IQ have different frequencies in different populations: http://www.ibc7.org/article/journal_v.php?sid=317

        This study found partial support for the first prediction, that trait increasing alleles are present at higher frequencies among populations with higher trait values. This was confirmed only with regards to IQ plus educational attainment increasing alleles, where a significant difference between the allele frequencies for the three races was found.

        Although phenotypic group differences might theoretically not be the result of heredity, a non-hereditarian explanation is untenable given the current state of the evidence. Because different populations evolved in different locations under different selection pressures, we should not be surprised to see genetic differences in traits between populations.

        nydwyracu’s link in his reply suggests that self-selection is not a sufficient explanation of the liberal-conservative gap in academia. Although there is variation of opinion in academia, the higher-status opinions tend to converge. As Moldbug asks, what is the difference in intellectual product between Harvard and Yale, especially in the social sciences? On what controversial political questions (e.g. diversity, immigration, oppression, feminism) do large factions within or between HYPS institutions disagree? If you look at their values in historical perspective, then they are indeed an ideological monoculture. A few dissenting (and increasingly nervous) professors are not enough to show otherwise, and neither are free-market economists, especially given how many of them support open borders. In Moldbug’s view, even the presence of conservative professors doesn’t mean that academia isn’t leftist, because he views conservatives as time-lagged progressives.

        The problem of professors fearing students isn’t just about the shift to treating students as consumers. The government instills fear in universities of lawsuits or loss of federal funding, making them more eager to aggressively pursue student complaints. See Laura Kipnis running into a Title IX investigation: https://www.thefire.org/laura-kipniss-title-ix-inquisition-reveals-absurdity-of-the-current-campus-climate/

        As for “conspiracies,” nobody is alleging a conspiracy in the conventional sense of the word. Moldbug uses the term “distributed conspiracy,” but he also uses the terms “self-organizing consensus,” “spontaneous coordination,” and “synopsis.” He explains this notion of “The Cathedral” in more detail here.

      • “Although phenotypic group differences might theoretically not be the result of heredity, a non-hereditarian explanation is untenable given the current state of the evidence.”

        [Citation needed.]

        “On what controversial political questions (e.g. diversity, immigration, oppression, feminism) do large factions within/between HYPS institutions disagree?”

        Here’s an interesting question: if you polled random English professors on whether America should greatly increase the number of immigrants we allow into the country, what would they say? I suspect they wouldn’t differ that much from the general population, but I don’t know. (Note that my choice of “English professors” as opposed to “economists” was deliberate here.)

        But lack of disagreement in itself isn’t evidence of anything. You may as well say, “on what controversial scientific questions (e.g. evolution) is there disagreement among the Ivy League professoriate?” This is another example of neoreactionaries using the same fucking arguments as creationists.

      • Wait, so you ignore all the citations and then say that citations are needed? I was surprised to hear you say that there is very little evidence for what you describe in very racist terms, when it is actually basic science that should be obvious (differences in the frequency of alleles between genetic populations). In my experience, the evidence is pretty much all on one side; the other side just has PC protestations.

  7. “Like, one of the most prominent neoreactionary writers goes by Mencius Moldbug online, and the first time I tried reading him, I ran into glaring factual inaccuracies.”

    Like what? Any evidence that any of what Moldbug says is factually inaccurate?

    • See here. Idiosyncratic example, but when something like that is the first thing I get whacked in the face with reading an author and I’m told there’s plenty more where that came from, I tend not to waste my time digging further.
      (Recently stopped trying to read Chomsky for somewhat similar reasons.)

      • Your example of why Moldbug isn’t worth reading is that he… reveals in a throwaway sentence, which could be cut without noticeably affecting the post, that he either didn’t read a particular Glenn Greenwald article or intends that “why” to operate on a higher level?

        If I say that people who have premarital sex do so because of a character flaw, do I really have any idea at all why they have premarital sex? I have an answer, but that’s not an understanding — and what if my political positions depend on my belief that premarital sex must be stopped and that the way to do this is to cultivate the character of the public? Would you respond to this scenario by saying that I have any idea at all why people engage in premarital sex?

        Does Greenwald say in that article that he has an idea why liberal public opinion stopped giving a damn about torture in 2008? In a sense — he chalks it up to “blind leader loyalty”, which is an “authoritarian follower trait” and “one of the worst toxins in our political culture … very antithesis of what a healthy political system requires (and what a healthy mind would produce)”. How is this any different from “premarital sex is caused by character flaws” — at least to Moldbug, who thinks that this loyalty is a natural human drive?

        But that’s beside the point. You are saying that reading Moldbug is a waste of time because you were “whacked in the face with” a perception that he didn’t read a particular Greenwald article. Let’s say that you’re completely right about that. What difference would it make? I’m starting to notice a pattern here: “this writer who I’m inclined to dislike made one mistake that I can identify, so he’s not worth taking seriously.” Are you overapplying the Gell-Mann amnesia effect, or are you looking for a prophet?

        Sure, it’s not just Greenwald — it’s also that, in a post where you posture as the “stodgy defender of old ideas” against the “bold contrarians”, you take as knockdown evidence against Moldbug unsubstantiated claims about his inaccuracy on the subject of communism from a capital-C Communist.

      • You have managed to make me revise my estimate of the probability that you are worth reading significantly downward.
        But seriously, I am beginning to suspect that you are charitable towards people you LIKE and not charitable towards people you DON’T LIKE.

      • Okay, but it’s like – what do you do if you find an expert who says “I think cryo is a scam because ice crystals will totally destroy the cells”? If it’s in a publication, you can’t just go up to him and say “yeah, that’s why they use cryoprotectant” – that’s why it’d be nice to have a forum where you could actually talk with experts about topics.

      • Cryoprotectants aren’t perfect, and you still have to flash-freeze fast enough to move through the freezing point to the glass transition without causing ice formation.
        Also, you can find actual cryobiology people at most research universities whom you can ask about these things.

  8. I agree with your general perspective on Less Wrong and Yudkowsky, and although I like most of Alexander’s posts, the things you point out are the things that usually bug me about him. Still, I absolutely feel you’re throwing out the baby with the bathwater regarding the Principle of Charity. It’s not about assuming whoever you’re discussing with is right or knows what he/she’s talking about. It’s about avoiding attacking strawmen by choosing to engage the strongest version of the opponent’s argument (and then usually being explicit about your interpretation). I see it as a shift in mentality more than anything else. Instead of trying to “win” by “attacking” the person you’re discussing with, you try to salvage whatever interesting or useful ideas you can. I’m here to learn.

    I believe, based on what I’ve heard about challenging beliefs from social psychology and therapeutic practice, that if you want to change someone’s mind you have to be willing to *actually engage* with them instead of just dismissing or attacking them. It’s a sensitive process, but regardless of what you believe Alexander thinks about neoreactionaries, you have to give him credit for being the guy who wrote the anti-neoreactionary FAQ, which (from what I remember) definitively debunked the neoreactionary canon. I believe that kind of effort is far more important than just preaching to the choir that “these people are awful people.” Neoreactionaries already know that everyone thinks that and are likely to dismiss those kinds of critiques as empty/unscientific/not actually addressing their awesome arguments. The anti-neoreactionary FAQ can’t be dismissed like that.
