When I started writing about the crackpot tendencies in the LessWrong community, I approached them as, well, not quite incidental, but as a tumor that started off small and metastasized. Yet when I look back over Eliezer Yudkowsky’s “Sequences,” it’s clear that attacking mainstream scientific rationality was always the entire point.
I’m writing this having just spent a bunch of time lining up relevant citations from Yudkowsky’s writings, so now it looks really obvious, but of course it isn’t obvious to a lot of people encountering his writings for the first time. That’s because they aren’t framed as an attack on science; they’re framed as being about “rationality,” and they start off with lots of common-sense stuff about changing your beliefs in response to evidence.
It’s only after you’ve slogged through a long sequence of blog posts on interpretations of quantum mechanics that you’re told that the purpose of everything you just read was “to break your allegiance to Science.” And even then you might be tempted to assume he can’t really be saying anything all that radical, since he’s been saying all these nice things about rationality and displaying the superficial marks of science fandom, right?
In Tumblr discussion of LessWrong, someone commented:
I’ve definitely compared Yudkowsky to Rand before. Particularly in the senses of “some of your biggest flaws are things you’ve accurately diagnosed in other contexts” and “The best part of your writing explicitly repudiates a lot of behaviors that your median writing implicitly encourages.”
Yudkowsky actually has a fairly good post on Ayn Rand and how her philosophy of Objectivism became a cult in spite of paying lip-service to rationality. Notably, he says:
“Study science, not just me!” is probably the most important piece of advice Ayn Rand should’ve given her followers and didn’t. There’s no one human being who ever lived, whose shoulders were broad enough to bear all the weight of a true science with many contributors…
To be one more milestone in humanity’s road is the best that can be said of anyone; but this seemed too lowly to please Ayn Rand. And that is how she became a mere Ultimate Prophet.
But if you look at the rest of Yudkowsky’s writing, he sure seems more interested in becoming an Ultimate Prophet than encouraging his followers to study science.
Years ago in college I wrote a book debunking claims by Christian apologists claiming to “prove” that Jesus rose from the dead using historical evidence. At the very end of the book I included a little paragraph about how you shouldn’t just take my word for anything and should do your own research and form your own conclusions. In retrospect, that paragraph feels cheesy and obvious, but seeing the alternative makes me glad I included it.
Yudkowsky could have, after arguing at length for the many worlds interpretation of quantum mechanics, said, “I recommend going and studying the arguments of physicists who defend other interpretations, and when you do that I think you’ll see that physicists are screwing up.” That might have been reasonable. Many physicists accept many worlds, and I can accept that it’s sometimes reasonable for a dedicated amateur to have strong opinions on issues that divide the experts.
But instead he expects his readers to “break their allegiance to Science,” and switch to his brand of “rationalism” instead, based solely on reading one amateur’s account of the debate over interpretations of quantum mechanics. That’s some chutzpah. Obviously you should read more than one perspective before having a strong opinion on an issue where even experts disagree.
Or maybe it’s not obvious–sometimes I get asked what I think people should read as a substitute for Yudkowsky. And the answer is I don’t think you should go looking for one substitute if we’re talking about anything controversial. (On philosophy, I’ve toyed with recommending anthologies–poking around Amazon, this one looks pretty good.)
In point of fact, Yudkowsky’s quantum mechanics sequence appears to make mistakes that suggest he has not quite a full intro course’s worth of quantum mechanics under his belt. But in a way, that’s secondary. The bigger issue is that his readers have no way of knowing whether he gets the physics right, or whether he gets the core facts right but misrepresents the views and arguments of physicists with different views.
Yudkowsky also takes the fact that physicists haven’t all agreed on many worlds by now as evidence that science is “too slow.” In another post, he imagines a “master rationalist” in the distant-ish future telling his students:
Eld scientists thought it was acceptable to take thirty years to solve a problem. Their entire social process of science was based on getting to the truth eventually. A wrong theory got discarded eventually—once the next generation of students grew up familiar with the replacement. Work expands to fill the time allotted, as the saying goes. But people can think important thoughts in far less than thirty years, if they expect speed of themselves.
In another post, Yudkowsky lodges a similar complaint about philosophy:
And: Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out – possibly fatally – whether they got it right or wrong. Philosophy doesn’t resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I’m sorry, but as Jeffreyssai says: Too slow! It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct. But philosophy, which hasn’t come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn’t seem very likely to build complex correct structures of conclusions.
I agree that progress in academic fields is sometimes slowed by the irrationality of the participants. People don’t like admitting being wrong. Unfortunately, knowing this isn’t much help unless you’ve discovered a magic formula for overcoming this flaw in human nature. More on that later. But thinking irrationality is the only reason why progress is slow ignores the fact that often, progress is slow because the questions are just really hard.
Standard histories of philosophy start in ancient Greece, but arguably this is misleading. Aristotle didn’t have our distinction between science and philosophy (his notion of “first philosophy,” in contemporary terms, maps to one sub-area of metaphysics). I don’t think it’s an accident that “modern” philosophy (Descartes et al.) happened shortly after the time of Galileo, but even in Newton’s time, “natural philosophy” was more than a figure of speech.
If I’m being really contrarian, I could argue that philosophy in our sense of the term was invented by Kant, because he was one of the first thinkers to really grapple with the shortcomings of metaphysics compared to science. Kant’s answer was to attempt a “Copernican revolution” in epistemology and metaphysics, analogous (in his mind) to not only the scientific revolution but to Euclid’s putting a firm foundation on mathematics.
I know of no one today who thinks he succeeded. Even self-described Kantians don’t think Kant managed to turn metaphysics into a science. But Kant wasn’t the only one to try. At the end of The History of Western Philosophy, Bertrand Russell says the following about “the philosophy of logical analysis” (roughly, logical positivism):
The aims of this school are less spectacular than those of most philosophers in the past, but some of its achievements are as solid as those of the men of science…
Some men, notably Carnap, have advanced the theory that all philosophical problems are really syntactical, and that, when errors in syntax are avoided, a philosophical problem is thereby either solved or shown to be insoluble. I think this is an overstatement, but there can be no doubt that the utility of philosophical syntax in relation to traditional problems is very great…
Modern analytical empiricism, of which I have been giving an outline, differs from that of Locke, Berkeley, and Hume by its incorporation of mathematics and its development of a powerful logical technique. It is thus able, in regard to certain problems, to achieve definite answers, which have the quality of science rather than of philosophy. It has the advantage, as compared with the philosophies of the systembuilders, of being able to tackle its problems one at a time, instead of having to invent at one stroke a block theory of the whole universe. Its methods, in this respect, resemble those of science. I have no doubt that, in so far as philosophical knowledge is possible, it is by such methods that it must be sought; I have also no doubt that, by these methods, many ancient problems are completely soluble.
There remains, however, a vast field, traditionally included in philosophy, where scientific methods are inadequate. This field includes ultimate questions of value; science alone, for example, cannot prove that it is bad to enjoy the infliction of cruelty. Whatever can be known, can be known by means of science; but things which are legitimately matters of feeling lie outside its province.
In spite of Russell’s attempt to present this philosophy as more humble than what came before it, it also suffers from the problem that few people today think it succeeded.
The reason I’m saying all this is because when philosophers act like they’re not really trying to resolve debates, it’s because they know such attempts have a track record of not working. That doesn’t mean we will never put philosophy on a solid footing, but it does mean that anyone who shows up claiming to have done so single-handedly deserves a fair dose of skepticism.
The zombie debate is as good an example as any of this, so let’s talk about that. David Chalmers’ claim is that there could exist (in other possible worlds with different psychophysical laws–not the actual world) beings that are physically identical to us, but who lack consciousness. The intuition that such “zombies” are possible leads Chalmers to a view that at least looks a lot like epiphenomenalism (the belief in a separate mental realm affected by, but which does not affect, the physical realm).
Epiphenomenalism strikes a lot of people as crazy–me included! But Chalmers realizes this. So in The Conscious Mind he tries to do two things: (1) argue that his view is not quite epiphenomenalism, and (2) argue that some of the apparent advantages of certain other views over epiphenomenalism are illusory.
Does he succeed? I don’t know. But what makes me sympathetic to Chalmers is the sense that what he calls the hard problem of consciousness is a real problem, and alternative solutions aren’t any better. And Yudkowsky, as far as I can tell, isn’t one of those people who says, “the so-called ‘hard problem’ is a fake problem.” He agrees that it’s real–and then claims to have a secret solution he’ll sell you for several thousand dollars.
I think it’s enormously unlikely that Yudkowsky has really found the secret solution to consciousness. But even if he had, I don’t think anyone could know, including him. It’s like an otherwise competent scientist refusing to submit their work for peer review. Even top experts are fallible–and the solution is to have other experts check their work. Ideas like that are part of what Yudkowsky is rejecting when he says he rejects science for an allegedly superior “rationality.”
(As an aside–the post quoted above was written after the main anti-zombie sequence, and also takes a strong stand against modal logic and possible worlds. But since the zombie claim is a claim about other possible worlds, why didn’t Yudkowsky lead with the attack on possible worlds? That later post makes the earlier ones look superfluous.)
An optimist might point out that insofar as physics was once considered to be literally “natural philosophy” (and a subject of great confusion), we do have a track record of turning seemingly intractable philosophical issues into tractable scientific ones. But as for how to do that in the future, that’s easier said than done. Kant and Russell already did the saying of things in that vein, but they came up short on the doing.
Extrapolating from what worked in the past, to assumptions about how we will solve our current conundrums, is dangerous business. With hindsight, Descartes’ faith in the a priori makes him look like a stuffy old metaphysician. But in his day, Descartes was a champion of Copernicanism, and an opponent of the old Aristotelianism. From the SEP:
In establishing the ground for science, Descartes was at the same time overthrowing a system of natural philosophy that had been established for centuries—a qualitative, Aristotelian physics. In a letter to Mersenne, dated 28 January 1641, Descartes says “these six meditations contain all the foundations of my physics. But please do not tell people, for that might make it harder for supporters of Aristotle to approve them. I hope that readers will gradually get used to my principles, and recognize their truth, before they notice that they destroy the principles of Aristotle.”
Given the state of knowledge in his day, trying to bring the certainty of mathematics to all knowledge must have felt perfectly reasonable.
Having said all this, I get the impression that Yudkowsky’s real beef with mainstream experts is not the live debates they’ve failed to resolve, but the fringe claims they dismiss. In a post called “Undiscriminating Skepticism”, he complains that many people who are skeptical about UFOs and astrology are just “in the habit of hanging out in moderately educated circles” and know that those beliefs “are not accepted beliefs of my tribe.”
Now, I don’t actually think “adopt the beliefs of other educated people” is a bad heuristic. Granted, there seem to be a lot of issues where a generic college degree doesn’t help much. Even a generic Ph.D. may not help, if you’re not inclined to trust other experts when you go outside your specialty. “Trust the experts” is the actual heuristic I advocate–but in general, I don’t think these kinds of heuristics are inherently bad.
But back to Yudkowsky’s post. He thinks this is a problem, not because he believes in UFOs or astrology, but because he thinks other people are guilty of doing the same thing with regards to AI, nanotechnology, and cryonics. And looking at that part, one sentence really stood out: “Michael Shermer blew it by mocking molecular nanotechnology.”
This example caught my eye because Shermer–who’s a historian of science, longtime Scientific American columnist, and founder of the Skeptics Society–is one of today’s major faces of scientific skepticism. So I tried to figure out what Yudkowsky was talking about. This essay, “Nano Nonsense & Cryonics”, was the best I could come up with. Reading it, Shermer’s stance strikes me as pretty reasonable:
During freezing, the water within each cell expands, crystallizes, and ruptures the cell membranes…
Cryonicists recognize this detriment and turn to nanotechnology for a solution. Microscopic machines will be injected into the defrosting “patient” to repair the body molecule by molecule until the trillions of cells are restored and the person can be resuscitated…
I want to believe the cryonicists. Really I do. I gave up on religion in college, but I often slip back into my former evangelical fervor, now directed toward the wonders of science and nature. But this is precisely why I’m skeptical. It is too much like religion: it promises everything, delivers nothing (but hope) and is based almost entirely on faith in the future…
This is what I call “borderlands science,” because it dwells in that fuzzy region of claims that have yet to pass any tests but have some basis, however remote, in reality. It is not impossible for cryonics to succeed; it is just exceptionally unlikely. The rub in exploring the borderlands is finding that balance between being open-minded enough to accept radical new ideas but not so open-minded that your brains fall out. My credulity module is glad that some scientists are devoting themselves to the problem of mortality. My skepticism module, however, recognizes that transhumanistic-extropian cryonics is uncomfortably close to religion.
Full disclosure: I’m signed up for cryonics. But the idea that nanomachines will one day be able to repair frozen brains strikes me as highly unlikely. I think there’s a better chance that it will be possible to use frozen brains as the basis for whole brain emulation, but I’m not even sure about that. Too much depends on guesses both about the effects of current freezing techniques and about future technology.
Eliezer, meanwhile, is sure cryonics will work, based, as far as I can tell, on loose analogies with computer hard drives. Faced with such confident predictions, pointing out the lack of evidence and large element of wish-fulfillment (as Shermer does) is an eminently reasonable response. A “rationalism” that condemns such caution isn’t worthy of the name.
I can’t stress enough the enormous difference between trying to do some informed speculation about what technologies might be possible in the future, and thinking you can know what technologies will be possible in the future based on just knowing a little physics. Take, for example, Richard Feynman’s talk “There’s Plenty of Room at the Bottom”, often cited as one of the foundational sources in the field of nanotechnology.
Today, the part of Feynman’s talk about computers looks prophetic, especially considering the talk was given several years before Gordon Moore made his famous observation about computer power doubling every couple of years. But other things he speculates about are, to say the least, a long way off. Do we blame Feynman for this?
No, because Feynman knew enough to include appropriate caveats. When he talks about the possibility of tiny medical robots, for example, he says it’s a “very interesting possibility… although it is a very wild idea.” He doesn’t say that this will definitely happen and be the secret to immortality. And some futurists, like Ray Kurzweil, do say things like that. That’s the difference between having a grasp of the difficulty of the topic and your own fallibility, and, well, not.
Yet Yudkowsky is so confident in his beliefs about things like cryonics that he’s willing to use them as a reason to distrust mainstream experts. Well, he doesn’t quite say that. The post I’m thinking of, titled “The Correct Contrarian Cluster,” doesn’t explicitly consider the option of trusting the experts. Instead, it’s framed as answering the question “why not just stick to majoritarianism?” –which presumably refers to the belief that you should always just believe what the majority believes.
Given how many people are creationists, that seems like clearly a bad idea. But the way Yudkowsky presents his answer suggests we should distrust experts too:
My primary personality, however, responds as follows:
In other words, even though you would in theory expect the Correct Contrarian Cluster to be a small fringe of the expansion of knowledge, of concern only to the leading scientists in the field, the actual fact of the matter is that the world is *#$%ing nuts and so there’s really important stuff in the Correct Contrarian Cluster. Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup. Not to mention that most people still believe in God. People are crazy, the world is mad. So, yes, if you don’t want to bloat up like a balloon and die, distinguishing the Correct Contrarian Cluster is important.
The use of the diet example is even more embarrassing than the other claims I’ve looked at so far. The line about “dietary scientists ignoring their own experimental evidence” links to an article by Gary Taubes. Taubes champions the diet claims of Robert Atkins, who literally claimed that you could eat unlimited amounts of fat and not gain weight, because you would pee out the excess calories. This, needless to say, is not true.
After reading two of Taubes’ books, I haven’t been able to find anywhere where he addresses the urine claim, but he’s very clear about claiming that no amount of dietary fat can cause weight gain. How Taubes thinks this is supposed to be true, I have no idea. His attempted explanations are, as far as I can tell, simply incoherent. (Atkins at least had the virtue of making a coherent wrong claim.)
Instead, one of the major threads running through Taubes’ writings is a false dichotomy between Atkins’ view that carbs were the enemy and fat is harmless, and a bizarre mirror-image of Atkins’ view saying fat is the enemy and carbs are harmless. Mainstream nutrition scientists, Taubes would have us believe, took the latter view. He even blames them for making people think sugary soft drinks are “intrinsically healthy” because they were low-fat.
This portrayal of mainstream nutrition science is as false as Atkins’ claim about peeing out excess calories. Besides the obvious–who on earth ever believed Coca-Cola was a health food?–Taubes’ own sources refute him. The government reports which Taubes blames for encouraging high sugar consumption consistently take the boring view that too many calories from any source cause weight gain, and repeatedly emphasize sugar as something to watch out for.
Meanwhile, the actual studies on low-carb diets find that while they may or may not lead to somewhat faster weight-loss in the short term, they suffer from the same major problem all diets have, namely that most people who diet gain the weight back eventually. (This is another club Taubes uses to beat mainstream nutrition science, but he never asks if it applies to his own low-carb solution.)
Remember, this is one of Yudkowsky’s go-to examples for why you shouldn’t trust the mainstream too much! And it’s not just wrong, it’s wrong in a way that could have been caught through common sense and basic fact-checking. But I guess common sense is just tribalistic bias, and who needs fact-checking when you’ve got superior rationality? The nicest thing you can say about this is that, when he encourages his followers to form strong opinions based on the writings of a single amateur, he’s only preaching what he practices.
Given recent discussion of the “Correct Contrarian Cluster” post, I should emphasize that I have no objection in principle to looking at people’s track record of accuracy to figure out who to trust. Yudkowsky’s calibration questions are just epically ill-chosen. (A little after the part I quoted, Yudkowsky declares zombies and many worlds are even better calibration questions, because they’re “slam dunks.”)
Now that I’m thousands of words and about as many tangents into this post, let me circle back to something I said early in the post: pointing out the flaws in mainstream experts only gets you so far, unless you actually have a way to do better. This isn’t an original point. Robin Hanson has made it many times. (See here for just one example.) But I want to emphasize it anyway.
It’s the main reason I’m unimpressed with the material on LessWrong about how the rules of science aren’t the rules an ideal reasoner would follow. This is a huge chunk of Yudkowsky’s “Sequences”, but suppose that’s true, so what? We humans are observably non-ideal. Throwing out the rules of science because a hypothetical ideal reasoner wouldn’t need them is like advocating anarchism on the grounds that if Superman existed, we’d have no need for police.
I think this is more than a superficial analogy. To borrow another point from Hanson, most of us rely on peaceful societies rather than personal martial prowess for our safety. Similarly, we rely on the modern economy rather than personal survival skills for food and shelter. Given that, the fact that science is, to a large extent, a system of social rules and institutions doesn’t look like a flaw in science. It may be the only way for mere mortals to make progress on really hard questions.
Yudkowsky is aware of this argument, and his response appears to mostly depend on assuming the reader agrees with him that physicists are being stupid about quantum mechanics–that, combined with a large dose of flattery. “So, are you going to believe in faster-than-light quantum ‘collapse’ fairies after all? Or do you think you’re smarter than that?” asks one post.
This is combined with an even stranger argument, an apparent belief that it should be possible for amateurs to make progress faster than mainstream experts simply by deciding to make progress faster. Remember how the imagined future “master rationalist” complains “Eld scientists thought it was acceptable to take thirty years to solve a problem”? This is a strange thing to complain about. Either you have a way to make progress quickly or you don’t, and if you don’t, you don’t have much choice but to accept that fact.
Back in the real world, wishing away the difficulty of hard problems doesn’t make them stop being hard. This doesn’t mean progress is impossible, or that it’s not worth trying to improve on the current consensus of experts. It just means progress requires a lot of work, which most of the time includes first becoming an expert yourself, so you have a foundation to build on and a sense of what mistakes have already been made. There’s no way to skip out on the hard work by giving yourself superpowers.
One last thing: some of the “rules of Science” Yudkowsky complains about seem more like misunderstandings of science. It’s like the people who shout “ad hominem fallacy!” every time they feel insulted in an argument. They’re wrong, but the fact that they’re wrong doesn’t refute logic. Similarly, the fact that supposed “rules of science” are sometimes oversimplifications is hard to take seriously as an objection to science itself.
One of the main things I’m thinking about here is a story Yudkowsky tells from when he was younger:
As a Traditional Rationalist, the young Eliezer was careful to ensure that his Mysterious Answer made a bold prediction of future experience. Namely, I expected future neurologists to discover that neurons were exploiting quantum gravity, a la Sir Roger Penrose. This required neurons to maintain a certain degree of quantum coherence, which was something you could look for, and find or not find. Either you observe that or you don’t, right?…
As a Traditional Rationalist, the young Eliezer was careful not to believe in magic, mysticism, carbon chauvinism, or anything of that sort. I proudly professed of my Mysterious Answer, “It is just physics like all the rest of physics!” As if you could save magic from being a cognitive isomorph of magic, by calling it quantum gravity. But I knew not the Way of Bayes, and did not see the level on which my idea was isomorphic to magic. I gave my allegiance to physics, but this did not save me; what does probability theory know of allegiances? I avoided everything that Traditional Rationality told me was forbidden, but what was left was still magic…
The way Traditional Rationality is designed, it would have been acceptable for me to spend 30 years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera.
Wait a minute. Who on earth thinks that it’s good scientific practice to believe any crazy thing you want, even if there’s no evidence for it, as long as you could theoretically falsify it 30 years down the road? Maybe someone can dig up one instance of Feynman or whoever saying something like, “the beauty of science is you can have any idea you like as long as you can test it,” but I doubt such statements are ever meant to be taken literally.
In practice, when scientists talk about people like Roger Penrose, they don’t just shrug their shoulders and say, “well, maybe he’ll be falsified someday.” They want to know whether he has any good arguments for his ideas, and whether they’re plausible in light of both what we know about the brain and what we know about physics. The big difference between what they do and what Yudkowsky advocates is that probability theory is much less useful here than a good knowledge of cell biology.
If there’s a single take-away from all this, it’s that I don’t think Yudkowsky’s writings are a remotely good way to “learn rationality,” even if they do contain a few nice things about willingness to change your mind. The flaws aren’t just incidental rough edges. Rejection of mainstream scientific rationality was always the point.