LessWrong against scientific rationality

When I started writing about the crackpot tendencies in the LessWrong community, I approached them as, well, not quite incidental, but as a tumor that started off small and metastasized. Yet when I look back over Eliezer Yudkowsky’s “Sequences,” it’s clear that attacking mainstream scientific rationality was always the entire point.

I’m writing this having just spent a bunch of time lining up relevant citations from Yudkowsky’s writings, so now it looks really obvious, but of course it isn’t obvious to a lot of people encountering his writings for the first time. That’s because they aren’t framed as an attack on science; they’re framed as being about “rationality,” and they start off with lots of common-sense stuff about changing your beliefs in response to evidence.

It’s only after you’ve slogged through a long sequence of blog posts on interpretations of quantum mechanics that you’re told that the purpose of everything you just read was “to break your allegiance to Science.” And even then you might be tempted to assume he can’t really be saying anything all that radical, since he’s been saying all these nice things about rationality and displaying the superficial marks of science fandom, right?

In a Tumblr discussion of LessWrong, someone commented:

I’ve definitely compared Yudkowsky to Rand before. Particularly in the senses of “some of your biggest flaws are things you’ve accurately diagnosed in other contexts” and “The best part of your writing explicitly repudiates a lot of behaviors that your median writing implicitly encourages.”

Yudkowsky actually has a fairly good post on Ayn Rand and how her philosophy of Objectivism became a cult in spite of paying lip-service to rationality. Notably, he says:

“Study science, not just me!” is probably the most important piece of advice Ayn Rand should’ve given her followers and didn’t. There’s no one human being who ever lived, whose shoulders were broad enough to bear all the weight of a true science with many contributors…

To be one more milestone in humanity’s road is the best that can be said of anyone; but this seemed too lowly to please Ayn Rand. And that is how she became a mere Ultimate Prophet.

But if you look at the rest of Yudkowsky’s writing, he sure seems more interested in becoming an Ultimate Prophet than encouraging his followers to study science.

Years ago in college I wrote a book debunking Christian apologists’ claims to “prove” from historical evidence that Jesus rose from the dead. At the very end of the book I included a little paragraph about how you shouldn’t just take my word for anything and should do your own research and form your own conclusions. In retrospect, that paragraph feels cheesy and obvious, but seeing the alternative makes me glad I included it.

Yudkowsky could have, after arguing at length for the many worlds interpretation of quantum mechanics, said, “I recommend going and studying the arguments of physicists who defend other interpretations, and when you do that I think you’ll see that physicists are screwing up.” That might have been reasonable. Many physicists accept many worlds, and I can accept that it’s sometimes reasonable for a dedicated amateur to have strong opinions on issues that divide the experts.

But instead he expects his readers to “break their allegiance to Science,” and switch to his brand of “rationalism” instead, based solely on reading one amateur’s account of the debate over interpretations of quantum mechanics. That’s some chutzpah. Obviously you should read more than one perspective before having a strong opinion on an issue where even experts disagree.

Or maybe it’s not obvious–sometimes I get asked what I think people should read as a substitute for Yudkowsky. And the answer is I don’t think you should go looking for one substitute if we’re talking about anything controversial. (On philosophy, I’ve toyed with recommending anthologies–poking around Amazon, this one looks pretty good.)

In point of fact, Yudkowsky’s quantum mechanics sequence appears to make mistakes that suggest he doesn’t have even a full intro course’s worth of quantum mechanics under his belt. But in a way, that’s secondary. The bigger issue is that his readers have no way of knowing whether he gets the physics right, or whether he gets the core facts right but misrepresents the views and arguments of physicists with different views.

Yudkowsky also takes the fact that physicists haven’t all agreed on many worlds by now as evidence that science is “too slow.” In another post, he imagines a “master rationalist” in the distant-ish future telling his students:

Eld scientists thought it was acceptable to take thirty years to solve a problem. Their entire social process of science was based on getting to the truth eventually. A wrong theory got discarded eventually—once the next generation of students grew up familiar with the replacement. Work expands to fill the time allotted, as the saying goes. But people can think important thoughts in far less than thirty years, if they expect speed of themselves.

In another post, Yudkowsky lodges a similar complaint about philosophy:

And: Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out – possibly fatally – whether they got it right or wrong. Philosophy doesn’t resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I’m sorry, but as Jeffreyssai says: Too slow! It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct. But philosophy, which hasn’t come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn’t seem very likely to build complex correct structures of conclusions.

I agree that progress in academic fields is sometimes slowed by the irrationality of the participants. People don’t like admitting they’re wrong. Unfortunately, knowing this isn’t much help unless you’ve discovered a magic formula for overcoming this flaw in human nature. More on that later. But thinking irrationality is the only reason progress is slow ignores the fact that often, progress is slow because the questions are just really hard.

Standard histories of philosophy start in ancient Greece, but arguably this is misleading. Aristotle didn’t have our distinction between science and philosophy (his notion of “first philosophy,” in contemporary terms, maps to one sub-area of metaphysics). I don’t think it’s an accident that “modern” philosophy (Descartes et al.) happened shortly after the time of Galileo, but even in Newton’s time, “natural philosophy” was more than a figure of speech.

If I’m being really contrarian, I could argue that philosophy in our sense of the term was invented by Kant, because he was one of the first thinkers to really grapple with the shortcomings of metaphysics compared to science. Kant’s answer was to attempt a “Copernican revolution” in epistemology and metaphysics, analogous (in his mind) not only to the scientific revolution but to Euclid’s putting mathematics on a firm foundation.

I know of no one today who thinks he succeeded. Even self-described Kantians don’t think Kant managed to turn metaphysics into a science. But Kant wasn’t the only one to try. At the end of The History of Western Philosophy, Bertrand Russell says the following about “the philosophy of logical analysis” (roughly, logical positivism):

The aims of this school are less spectacular than those of most philosophers in the past, but some of its achievements are as solid as those of the men of science…

Some men, notably Carnap, have advanced the theory that all philosophical problems are really syntactical, and that, when errors in syntax are avoided, a philosophical problem is thereby either solved or shown to be insoluble. I think this is an overstatement, but there can be no doubt that the utility of philosophical syntax in relation to traditional problems is very great…

Modern analytical empiricism, of which I have been giving an outline, differs from that of Locke, Berkeley, and Hume by its incorporation of mathematics and its development of a powerful logical technique. It is thus able, in regard to certain problems, to achieve definite answers, which have the quality of science rather than of philosophy. It has the advantage, as compared with the philosophies of the system-builders, of being able to tackle its problems one at a time, instead of having to invent at one stroke a block theory of the whole universe. Its methods, in this respect, resemble those of science. I have no doubt that, in so far as philosophical knowledge is possible, it is by such methods that it must be sought; I have also no doubt that, by these methods, many ancient problems are completely soluble.

There remains, however, a vast field, traditionally included in philosophy, where scientific methods are inadequate. This field includes ultimate questions of value; science alone, for example, cannot prove that it is bad to enjoy the infliction of cruelty. Whatever can be known, can be known by means of science; but things which are legitimately matters of feeling lie outside its province.

In spite of Russell’s attempt to present this philosophy as more humble than what came before it, it also suffers from the problem that few people today think it succeeded.

The reason I’m saying all this is because when philosophers act like they’re not really trying to resolve debates, it’s because they know such attempts have a track record of not working. That doesn’t mean we will never put philosophy on a solid footing, but it does mean that anyone who shows up claiming to have done so single-handedly deserves a fair dose of skepticism.

The zombie debate is as good an example as any of this, so let’s talk about that. David Chalmers’ claim is that there could exist (in other possible worlds with different psychophysical laws–not the actual world) beings that are physically identical to us, but who lack consciousness. The intuition that such “zombies” are possible leads Chalmers to a view that at least looks a lot like epiphenomenalism (the belief in a separate mental realm affected by, but which does not affect, the physical realm).

Epiphenomenalism strikes a lot of people as crazy–me included! But Chalmers realizes this. So in The Conscious Mind he tries to do two things: (1) argue that his view is not quite epiphenomenalism, and (2) argue that some of the apparent advantages of certain other views over epiphenomenalism are illusory.

Does he succeed? I don’t know. But what makes me sympathetic to Chalmers is the sense that what he calls the hard problem of consciousness is a real problem, and alternative solutions aren’t any better. And Yudkowsky, as far as I can tell, isn’t one of those people who says, “the so-called ‘hard problem’ is a fake problem.” He agrees that it’s real–and then claims to have a secret solution he’ll sell you for several thousand dollars.

I think it’s enormously unlikely that Yudkowsky has really found the secret solution to consciousness. But even if he had, I don’t think anyone could know, including him. It’s like an otherwise competent scientist refusing to submit their work for peer review. Even top experts are fallible–and the solution is to have other experts check their work. Ideas like that are part of what Yudkowsky is rejecting when he says he rejects science for an allegedly superior “rationality.”

(As an aside–the post quoted above was written after the main anti-zombie sequence, and also takes a strong stand against modal logic and possible worlds. But since the zombie claim is a claim about other possible worlds, why didn’t Yudkowsky lead with the attack on possible worlds? That later post makes the earlier ones look superfluous.)

An optimist might point out that insofar as physics was once considered to be literally “natural philosophy” (and a subject of great confusion), we do have a track record of turning seemingly intractable philosophical issues into tractable scientific ones. But as for how to do that in the future, that’s easier said than done. Kant and Russell already did the saying of things in that vein, but they came up short on the doing.

Extrapolating from what worked in the past, to assumptions about how we will solve our current conundrums, is dangerous business. With hindsight, Descartes’ faith in the a priori makes him look like a stuffy old metaphysician. But in his day, Descartes was a champion of Copernicanism, and an opponent of the old Aristotelianism. From the SEP:

In establishing the ground for science, Descartes was at the same time overthrowing a system of natural philosophy that had been established for centuries—a qualitative, Aristotelian physics. In a letter to Mersenne, dated 28 January 1641, Descartes says “these six meditations contain all the foundations of my physics. But please do not tell people, for that might make it harder for supporters of Aristotle to approve them. I hope that readers will gradually get used to my principles, and recognize their truth, before they notice that they destroy the principles of Aristotle.”

Given the state of knowledge in his day, trying to bring the certainty of mathematics to all knowledge must have felt perfectly reasonable.

Having said all this, I get the impression that Yudkowsky’s real beef with mainstream experts is not the live debates they’ve failed to resolve, but the fringe claims they dismiss. In a post called “Undiscriminating Skepticism”, he complains that many people who are skeptical about UFOs and astrology are just “in the habit of hanging out in moderately educated circles” and know that those beliefs “are not accepted beliefs of my tribe.”

Now, I don’t actually think “adopt the beliefs of other educated people” is a bad heuristic. Granted, there seem to be a lot of issues where a generic college degree doesn’t help much. Even a generic Ph.D. may not help, if you’re not inclined to trust other experts when you go outside your specialty. “Trust the experts” is the actual heuristic I advocate–but in general, I don’t think these kinds of heuristics are inherently bad.

But back to Yudkowsky’s post. He thinks this is a problem, not because he believes in UFOs or astrology, but because he thinks other people are guilty of doing the same thing with regards to AI, nanotechnology, and cryonics. And looking at that part, one sentence really stood out: “Michael Shermer blew it by mocking molecular nanotechnology.”

This example caught my eye because Shermer–who’s a historian of science, longtime Scientific American columnist, and founder of the Skeptics Society–is one of today’s major faces of scientific skepticism. So I tried to figure out what Yudkowsky was talking about. This essay, “Nano Nonsense & Cryonics”, was the best I could come up with. Reading it, Shermer’s stance strikes me as pretty reasonable:

During freezing, the water within each cell expands, crystallizes, and ruptures the cell membranes…

Cryonicists recognize this detriment and turn to nanotechnology for a solution. Microscopic machines will be injected into the defrosting “patient” to repair the body molecule by molecule until the trillions of cells are restored and the person can be resuscitated…

I want to believe the cryonicists. Really I do. I gave up on religion in college, but I often slip back into my former evangelical fervor, now directed toward the wonders of science and nature. But this is precisely why I’m skeptical. It is too much like religion: it promises everything, delivers nothing (but hope) and is based almost entirely on faith in the future…

This is what I call “borderlands science,” because it dwells in that fuzzy region of claims that have yet to pass any tests but have some basis, however remote, in reality. It is not impossible for cryonics to succeed; it is just exceptionally unlikely. The rub in exploring the borderlands is finding that balance between being open-minded enough to accept radical new ideas but not so open-minded that your brains fall out. My credulity module is glad that some scientists are devoting themselves to the problem of mortality. My skepticism module, however, recognizes that transhumanistic-extropian cryonics is uncomfortably close to religion.

Full disclosure: I’m signed up for cryonics. But the idea that nanomachines will one day be able to repair frozen brains strikes me as highly unlikely. I think there’s a better chance that it will be possible to use frozen brains as the basis for whole brain emulation, but I’m not even sure about that. Too much depends on guesses both about the effects of current freezing techniques and about future technology.

Eliezer, meanwhile, is sure cryonics will work, based, as far as I can tell, on loose analogies with computer hard drives. Faced with such confident predictions, pointing out the lack of evidence and the large element of wish-fulfillment (as Shermer does) is an eminently reasonable response. A “rationalism” that condemns such caution isn’t worthy of the name.

I can’t stress enough the enormous difference between trying to do some informed speculation about what technologies might be possible in the future, and thinking you can know what technologies will be possible in the future based on just knowing a little physics. Take, for example, Richard Feynman’s talk “There’s Plenty of Room at the Bottom”, often cited as one of the foundational sources in the field of nanotechnology.

Today, the part of Feynman’s talk about computers looks prophetic, especially considering the talk was given several years before Gordon Moore made his famous observation about computer power doubling every couple of years. But other things he speculates about are, to say the least, a long way off. Do we blame Feynman for this?

No, because Feynman knew enough to include appropriate caveats. When he talks about the possibility of tiny medical robots, for example, he says it’s a “very interesting possibility… although it is a very wild idea.” He doesn’t say that this will definitely happen and be the secret to immortality. And some futurists, like Ray Kurzweil, do say things like that. That’s the difference between having a grasp of the difficulty of the topic and your own fallibility, and, well, not.

Yet Yudkowsky is so confident in his beliefs about things like cryonics that he’s willing to use them as a reason to distrust mainstream experts. Well, he doesn’t quite say that. The post I’m thinking of, titled “The Correct Contrarian Cluster,” doesn’t explicitly consider the option of trusting the experts. Instead, it’s framed as answering the question “why not just stick to majoritarianism?” –which presumably refers to the belief that you should always just believe what the majority believes.

Given how many people are creationists, that seems like a clearly bad idea. But the way Yudkowsky presents his answer suggests we should distrust experts too:

My primary personality, however, responds as follows:

  • Religion
  • Cryonics
  • Diet

In other words, even though you would in theory expect the Correct Contrarian Cluster to be a small fringe of the expansion of knowledge, of concern only to the leading scientists in the field, the actual fact of the matter is that the world is *#$%ing nuts and so there’s really important stuff in the Correct Contrarian Cluster. Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup. Not to mention that most people still believe in God. People are crazy, the world is mad. So, yes, if you don’t want to bloat up like a balloon and die, distinguishing the Correct Contrarian Cluster is important.

The use of the diet example is even more embarrassing than the other claims I’ve looked at so far. The line about “dietary scientists ignoring their own experimental evidence” links to an article by Gary Taubes. Taubes champions the diet claims of Robert Atkins, who literally claimed that you could eat unlimited amounts of fat and not gain weight, because you would pee out the excess calories. This, needless to say, is not true.

After reading two of Taubes’ books, I haven’t been able to find anywhere where he addresses the urine claim, but he’s very clear about claiming that no amount of dietary fat can cause weight gain. How Taubes thinks this is supposed to be true, I have no idea. His attempted explanations are, as far as I can tell, simply incoherent. (Atkins at least had the virtue of making a coherent wrong claim.)

Instead, one of the major threads running through Taubes’ writings is a false dichotomy between Atkins’ view that carbs are the enemy and fat is harmless, and a bizarre mirror-image of that view saying fat is the enemy and carbs are harmless. Mainstream nutrition scientists, Taubes would have us believe, took the latter view. He even blames them for making people think sugary soft drinks are “intrinsically healthy” because they were low-fat.

This portrayal of mainstream nutrition science is as false as Atkins’ claim about peeing out excess calories. Besides the obvious–who on earth ever believed Coca-Cola was a health food?–Taubes’ own sources refute him. The government reports which Taubes blames for encouraging high sugar consumption consistently take the boring view that too many calories from any source cause weight gain, and repeatedly emphasize sugar as something to watch out for.

Meanwhile, the actual studies on low-carb diets find that while they may or may not lead to somewhat faster weight-loss in the short term, they suffer from the same major problem all diets have, namely that most people who diet gain the weight back eventually. (This is another club Taubes uses to beat mainstream nutrition science, but he never asks if it applies to his own low-carb solution.)

Remember, this is one of Yudkowsky’s go-to examples for why you shouldn’t trust the mainstream too much! And it’s not just wrong, it’s wrong in a way that could have been caught through common sense and basic fact-checking. But I guess common sense is just tribalistic bias, and who needs fact-checking when you’ve got superior rationality? The nicest thing you can say about this is that, when he encourages his followers to form strong opinions based on the writings of a single amateur, he’s only preaching what he practices.

Given recent discussion of the “Correct Contrarian Cluster” post, I should emphasize that I have no objection in principle to looking at people’s track record of accuracy to figure out who to trust. Yudkowsky’s calibration questions are just epically ill-chosen. (A little after the part I quoted, Yudkowsky declares zombies and many worlds are even better calibration questions, because they’re “slam dunks.”)

Now that I’m thousands of words and about as many tangents into this post, let me circle back to something I said early in the post: pointing out the flaws in mainstream experts only gets you so far, unless you actually have a way to do better. This isn’t an original point. Robin Hanson has made it many times. (See here for just one example.) But I want to emphasize it anyway.

It’s the main reason I’m unimpressed with the material on LessWrong about how the rules of science aren’t the rules an ideal reasoner would follow. This is a huge chunk of Yudkowsky’s “Sequences”, but suppose it’s true: so what? We humans are observably non-ideal. Throwing out the rules of science because a hypothetical ideal reasoner wouldn’t need them is like advocating anarchism on the grounds that if Superman existed, we’d have no need for police.

I think this is more than a superficial analogy. To borrow another point from Hanson, most of us rely on peaceful societies rather than personal martial prowess for our safety. Similarly, we rely on the modern economy rather than personal survival skills for food and shelter. Given that, the fact that science is, to a large extent, a system of social rules and institutions doesn’t look like a flaw in science. It may be the only way for mere mortals to make progress on really hard questions.

Yudkowsky is aware of this argument, and his response appears to mostly depend on assuming the reader agrees with him that physicists are being stupid about quantum mechanics–that, combined with a large dose of flattery. “So, are you going to believe in faster-than-light quantum ‘collapse’ fairies after all? Or do you think you’re smarter than that?” asks one post.

This is combined with an even stranger argument, an apparent belief that it should be possible for amateurs to make progress faster than mainstream experts simply by deciding to make progress faster. Remember how the imagined future “master rationalist” complains “Eld scientists thought it was acceptable to take thirty years to solve a problem”? This is a strange thing to complain about. Either you have a way to make progress quickly or you don’t, and if you don’t, you don’t have much choice but to accept that fact.

Back in the real world, wishing away the difficulty of hard problems doesn’t make them stop being hard. This doesn’t mean progress is impossible, or that it’s not worth trying to improve on the current consensus of experts. It just means progress requires a lot of work, which most of the time includes first becoming an expert yourself, so you have a foundation to build on and a sense of what mistakes have already been made. There’s no way to skip out on the hard work by giving yourself superpowers.

One last thing: some of the “rules of Science” Yudkowsky complains about seem more like misunderstandings of science. It’s like the people who shout “ad hominem fallacy!” every time they feel insulted in an argument. They’re wrong, but the fact that they’re wrong doesn’t refute logic. Similarly, the fact that supposed “rules of science” are sometimes oversimplifications is hard to take seriously as an objection to science itself.

One of the main things I’m thinking about here is a story Yudkowsky tells from when he was younger:

As a Traditional Rationalist, the young Eliezer was careful to ensure that his Mysterious Answer made a bold prediction of future experience. Namely, I expected future neurologists to discover that neurons were exploiting quantum gravity, a la Sir Roger Penrose. This required neurons to maintain a certain degree of quantum coherence, which was something you could look for, and find or not find. Either you observe that or you don’t, right?…

As a Traditional Rationalist, the young Eliezer was careful not to believe in magic, mysticism, carbon chauvinism, or anything of that sort. I proudly professed of my Mysterious Answer, “It is just physics like all the rest of physics!” As if you could save magic from being a cognitive isomorph of magic, by calling it quantum gravity. But I knew not the Way of Bayes, and did not see the level on which my idea was isomorphic to magic. I gave my allegiance to physics, but this did not save me; what does probability theory know of allegiances? I avoided everything that Traditional Rationality told me was forbidden, but what was left was still magic…

The way Traditional Rationality is designed, it would have been acceptable for me to spend 30 years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera.

Wait a minute. Who on earth thinks that it’s good scientific practice to believe any crazy thing you want, even if there’s no evidence for it, as long as you could theoretically falsify it 30 years down the road? Maybe someone can dig up one instance of Feynman or whoever saying something like, “the beauty of science is you can have any idea you like as long as you can test it,” but I doubt such statements are ever meant to be taken literally.

In practice, when scientists talk about people like Roger Penrose, they don’t just shrug their shoulders and say, “well, maybe he’ll be falsified someday.” They want to know whether he has any good arguments for his ideas, and whether they’re plausible in light of both what we know about the brain and what we know about physics. The big difference between what they do and what Yudkowsky advocates is that probability theory is much less useful here than a good knowledge of cell biology.

If there’s a single take-away from all this, it’s that I don’t think Yudkowsky’s writings are a remotely good way to “learn rationality,” even if they do contain a few nice things about willingness to change your mind. The flaws aren’t just incidental rough edges. Rejection of mainstream scientific rationality was always the point.

17 thoughts on “LessWrong against scientific rationality”

  1. I’ve noticed this before. Look at this sequence post for instance: http://lesswrong.com/lw/mj/rational_vs_scientific_evpsych/

    Yudkowsky says “here is an evo psych explanation, and a test YOU COULD DO.” As far as I can tell, he makes no effort to actually do the test, but insists that the abstract possible existence of a test makes his evo psych explanation credible. He confuses testable in principle with tested.

    This misconception of how science works (break problems down, test them in pieces, test them again, etc.) is a large part of the reason that MIRI has produced such a shockingly small amount of technical work. They never hire people with any experience getting research done. Their interview isn’t “tell me about your research”; it’s “tell me how many of the sequences you’ve read.”


  2. Totally agree with this problem–I’ve noticed some similar patterns. I actually think Eliezer is better about this than a lot of other LWers, too. This has been getting more and more annoying to me lately.

    (Nitpick: I actually don’t see any arguments in “You Only Live Twice” that Eliezer is sure cryonics will work. In fact, he says–“The second statement is that you have at least a little hope in the future. Not faith, not blind hope, not irrational hope – just, any hope at all.” Which implies to me that Eliezer, like you, thinks that it’s unlikely-but-possible, not a sure thing. I think he thinks the “slam dunk” is that the expected value is positive, not that it’s definitely going to work.)


    • My impression is that he feels basically certain about it working on a technical level, but is more unsure about e.g. whether we’ll avoid existential catastrophe, and whether future people will make good on promises to resurrect cryo patients & treat them well.

      So he says (with italics for emphasis) “Pumping someone full of cryoprotectant and gradually lowering their temperature until they can be stored in liquid nitrogen is not a secure way to erase a person”, and that cryonics isn’t a slam dunk “since it involves social guesses and values, not just physicalism.” Which implies that if there weren’t the social uncertainty (about the behavior of future people, I presume), physicalism would get you almost all the way there.


  3. Will:

    insists that the abstract possible existence of a test makes his evo psych explanation credible

    You’re reading the exact opposite of what Eliezer is saying in that post. He’s not saying the existence of a test actually makes the hypothesis rationally more credible; he’s saying that telling people that it’s testable makes it seem more credible to them because it makes it scientific.

    They never hire people with any experience getting research done. Their interview isn’t “tell me about your research” it’s “tell me how many of the sequences you’ve read.”

    This seems pretty snide and uncharitable. MIRI has at least one full PhD on staff, two more PhD students, and has worked closely with a lot of others. Have you actually interviewed there, or are you just taking potshots?


  4. Some points of agreement: Like you, I don’t see Eliezer as an especially trustworthy thinker. He strikes me as a smart guy with interesting ideas, who at the same time has blind spots and calibration issues. (“Crackpot” seems very unfair.) I recommend *Thinking Fast and Slow* over the Sequences for new rationalists. I think your critiques of the Sequences are some of the best and I appreciate your taking the time to write them.

    Now for where I disagree.

    This criticism seems targeted at blog posts written by Eliezer Yudkowsky, not Less Wrong in general. Less Wrong is an online forum with different contributors and different views. For example, the all-time highest scoring post on Less Wrong is this critique of the Singularity Institute (now MIRI): http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ And despite Eliezer being “sure” that cryonics will work, most of the people who have read all his writing are far from sure: http://lesswrong.com/lw/jjd/rationalists_are_less_credulous_but_better_at/ So it’d be nice if you could change the title of your post to implicate EY or the Sequences instead of LW in general.

    I say this because I strongly agree with Robin Hanson in his “Rational Me or We?” post http://lesswrong.com/lw/36/rational_me_or_we/ and quality online forums seem like a promising means for improving group rationality. Surveys indicate Less Wrong users are highly educated and do very well on various intellectual aptitude tests: http://lesswrong.com/lw/jj0/2013_survey_results/ It seems really valuable to have places where smart people can post random things for other smart people to read, without having to build an audience the way Scott Alexander and Tyler Cowen do. And Paul Graham’s essay “How to Do Philosophy” makes a good case for low-hanging fruit in philosophical inquiry outside academia: http://www.paulgraham.com/philosophy.html So I’d prefer not to see Less Wrong as a whole maligned because of a single contributor who doesn’t really even post there any more.

    I do think there is something to criticize in the Shermer quote you give. He explicitly makes a superficial judgement of cryonics: “[Cryonics] is too much like religion: it promises everything, delivers nothing (but hope) and is based almost entirely on faith in the future”. This is reasoning by analogy: religion promises a longer life and it’s BS; cryonics promises a longer life and by analogy it must be BS too. Reasoning by analogy is a stronger argument than people give it credit for, but it’s still pretty weak. And this “reasoning by analogy to other things we know are BS” does seem overused in skeptic circles. (E.g. the evidence base for acupuncture seems a lot stronger than the evidence base for homeopathy, but I’ll bet lots of skeptics automatically write off acupuncture as “BS alternative medicine”.)

    I also feel there’s an element of the Worst Argument in the World (http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/) in calling Less Wrong “against scientific rationality”. The word “science” has an extremely strong brand in our culture. (That’s why people are always trying to tack it on to the names of things: “Scientology”, “Management Science”, etc.) This brand is well-deserved: the processes we refer to as “science” have a really strong track record of success. That suggests that making these processes even better could be extremely high-leverage. And that’s what I see Eliezer doing in his “Science or Bayes?” essay. It doesn’t look to me like he’s trying to throw out all of the processes we refer to as “science”; it looks to me like he’s trying to argue that they aren’t sacred and in at least one case they could benefit from a tweak. Of course, Chesterton’s Fence applies–maybe the processes of science work well for reasons we don’t understand. And maybe, as you suggest here, the tweak he suggests has already been implemented. But characterizing his position as “against scientific rationality” seems like a misrepresentation to me. (To be fair, he made it easy for you by framing the issue as “Science or Bayes”; Eliezer is bad at PR.)


    • John–I don’t think Eliezer is advocating for just a tweak in the process of science. Yeah, it’s tempting to read it that way, which probably explains why his writings don’t immediately jump out at people as radical. But on a close reading, I think he’s clearly arguing for throwing out key parts of the scientific process, such as the social process of science and the sort of cautious, skeptical attitude Shermer represents. I think that’s perfectly fair to describe as “against scientific rationality.”


  5. I know people who have interviewed with them and with CFAR, though I have not personally. I’ve also heard people who would be involved in staffing decisions say (paraphrasing) that the sequences are more important to the development of an AGI researcher than an advanced degree in AI.

    If you had to guess, what percentage of the sequences has the average MIRI partner read? Do you think anyone who has worked with MIRI didn’t read any of them?

    Also, who is the full-time PhD on staff? Are they relatively new? Because if they’ve been there a while, why aren’t they publishing with them? I can see one published paper with LaVictoire and it looks like 2 with Hibbard. If MIRI has worked closely with all these PhDs, why has nothing come of it?

    Have you perused their actual technical work? Their per-researcher output is pretty dismal.


  6. “Trust the experts” is good advice, but should be expanded imo: “or study the subject yourself.” As for “borderland science,” the reasonable position is to postpone your judgment. Of course it’s fun to speculate (and then philosophy can be very handy) or even to bet on an outcome, but such bets run a considerable risk of losing. A few years ago my bet was that quantum mechanical probabilism would allow for free will (indeed, a la Roger Penrose) once neurobiologists settled on a model of the human brain, but that bet doesn’t look good anymore (though I still have some hope). I have a habit of losing such bets and am happy to admit it.
    So I never felt the sympathy for Yudkowsky you had.


  7. You say:

    I agree that progress in academic fields is sometimes slowed by the irrationality of the participants. People don’t like admitting being wrong. Unfortunately, knowing this isn’t much help unless you’ve discovered a magic formula for overcoming this flaw in human nature.

    I don’t think Eliezer’s claim is so much about rationality as it is about incentives and mindsets.

    When writing papers, I’ve sometimes had the thought of “I could make claim X; I don’t know if it’s really true and I’m actually kind of sceptical about it myself, but I could still make a convincing enough case for it that it’d pass peer review”. I’ve also had a senior scientist suggest to me that it might be better to not actually make the best argument you can in a paper. Rather, it can be better to make an argument which you know to be subtly flawed, so that your paper will gather more citations from people who notice those flaws and point them out. *Especially* with the “publish or perish” pressure in today’s academia, it can be hard to resist the temptation to do those kinds of things.

    I’ve been in situations where I’ve really genuinely wanted to do good work, and I’ve been in situations where I’ve just wanted to do good enough work that someone else doesn’t get upset with me. One of the big themes in Eliezer’s writing is that there’s a big difference between following rationality as a social convention, doing just what’s expected of you, and really, actually caring about the truth. I see his writings on the speed of science as cautioning against adopting the kind of mindset that academia sometimes implicitly encourages, where you don’t actually care about the truth, you just care about getting publications. And the thing with implicitly encouraged mindsets is that because they’re never quite said explicitly, people might not realize when they’ve internalized one.

    So I think that it’s a reasonable claim to make that, if more people really cared about getting the right answers and the incentive structures of academia were different, it would be possible to make scientific progress a lot faster. And it’s an important thing to point out, because it’s easy to fall into a mindset of not making your best effort, without even realizing it.


  8. Having followed LessWrong for several years, I tentatively agree. You focused here on Yudkowsky, but there is certainly a large subset of LessWrong that encourages thinking along the lines of “the mainstream experts are stupid, here is my theoretical ideal system that is obviously better”, for example this poster:

    http://lesswrong.com/r/lesswrong/lw/me1/the_unfriendly_superintelligence_next_door/

    Part of the thesis being:

    In general, medical researchers should not be doing statistics. That is a job for the tech industry.

    Not all LessWrong members are guilty of this, though; I think Yvain is pretty good at avoiding this sort of thinking. And there are some actual experts.

    But I’ve very gradually come to agree with Gilbert who said:

    the “clusters in thingspace” idea is an instance of a more general failure mode that is fairly common in Less Wrong style arguments. The steps to reproduce the problem on other questions are (1) hand-wavingly map your question to a mathematical structure that isn’t well-defined, (2) use that mapping to transfer intuitions, and (3) pretend that settles it. Note that doing steps 1&2 without step 3 is a fine way to generate ideas. But those ideas can still be wrong. If you want them to be right, you either need to replace step 1 by something much more rigorous or restate the ideas without the mathematical analogy and check if they still make sense.

    Major examples of the Less Wrong groupthink falling into this particular trap include their vulgar utilitarianism, where the individual utility functions and their sums turn out not to be well-definable, and their radical Bayesianism, which basically assumes a universal probability measure that has no sample space or σ-algebra to live on.


  9. Cryonics operates on a progressing technological frontier, and I have said for years that cryonicists have to make it work. Some neuroscientists and cryobiologists think that cryonics deserves exploration as a strategy to try to turn death from a permanent off-state into a temporary and reversible off-state by approaching the problem as a challenge in applied neuroscience. They have set up the Brain Preservation Foundation to educate the public about this prospect and to raise money for incentive prizes to encourage scientists to push hard on the envelope of current and reachable brain preservation techniques.

    As for Michael Shermer, his thinking about cryonics seems to have evolved from what he wrote several years ago. He and fellow skeptic Susan Blackmore are associated with this foundation as advisers, so they apparently consider the foundation’s premise scientifically defensible:

    http://brainpreservation.org/

    http://brainpreservation.org/content/advisors


  10. Hey guys, I’m just here because I wanted to know how close any of you lot are to gathering the Deathly Hallows and Mastering Death……

    Anyway, I just want to chip in 2 bits that it seems you’re all perfectly well-versed in the Sequences, and yet Not A One of You is bringing up HPMOR.

    I just want to say that HPMOR says it very loudly and clearly: DISAGREE WITH THE EXPERTS AT YOUR OWN RISK.

    Eliezer obviously still thinks that Learning to Disagree with Experts is the most important thing you can learn how to do— but he says that for the obvious reason that if you never learn how to Disagree with an Expert, you can never move……anything forward…..at all, really. Stagnation sets in and junk. People start assuming the problems they haven’t solved yet are unsolvable or that they always have to take a long time to solve because everyone who’s spent a long time on them can’t solve them yet.

    On the one hand, you probably should never *expect that you yourself are that genius/Superman*; On the other hand– there are such things as Supermen/geniuses, and they need to be taught how to *not hold back when they clearly need to use their Superpower*.

    But, even for the Superpowered Ultra Rationalist Geniuses— if you Oppose Authority for what seem like Really Really Good Reasons to you it In Fact Can and Probably Will Blow Up In Your Face.

    “Do Not Mess With Time”– Harry understood better than McGonagall what he had around his neck– *and he still played games with it because he was curious*. Idiot. When the entire body of speculative knowledge of Time Machines doesn’t say very much, but it DOES say this much: THEY ARE VERY DANGEROUS DO NOT GO POKING AROUND TIME.

    Eliezer understands the importance of Scientific Rationality as a body of Authoritative Tradition better than it seems a lot of you want to give him credit for, but he is Not interested in being stopped from testing any and every boundary he can find. Although that does seem to indicate he’s willing to chance the Probability of the World Being Destroyed vs the Probability That I’m The Smartest Guy Ever In History. Since the World is going to be destroyed by the Sun Going Out or something else ANYWAY, I guess that’s why he doesn’t think it’s all that much of a risk but has a high expected return value?

    I like him though. I think I’m the Smartest Guy Ever In History, Much Smarter Than Him and I Should Be In Charge of MIRI. But I like him.

