Nate Soares has a post up on the MIRI blog about “the claims that are in the background whenever I assert that our mission is of critical importance.” Right now, I want to just briefly comment on the first one:
Claim #1: Humans have a very general ability to solve problems and achieve goals across diverse domains.
We call this ability “intelligence,” or “general intelligence.” This isn’t a formal definition — if we knew exactly what general intelligence was, we’d be better able to program it into a computer — but we do think that there’s a real phenomenon of general intelligence that we cannot yet replicate in code.
Alternative view: There is no such thing as general intelligence. Instead, humans have a collection of disparate special-purpose modules. Computers will keep getting better at narrowly defined tasks such as chess or driving, but at no point will they acquire “generality” and become significantly more useful, because there is no generality to acquire. (Robin Hanson has argued for versions of this position.)
Short response: I find the “disparate modules” hypothesis implausible in light of how readily humans can gain mastery in domains that are utterly foreign to our ancestors. That’s not to say that general intelligence is some irreducible occult property; it presumably comprises a number of different cognitive faculties and the interactions between them. The whole, however, has the effect of making humans much more cognitively versatile and adaptable than (say) chimpanzees.
Why this claim matters: Humans have achieved a dominant position over other species not by being stronger or more agile, but by being more intelligent. If some key part of this general intelligence was able to evolve in the few million years since our common ancestor with chimpanzees lived, this suggests there may exist a relatively short list of key insights that would allow human engineers to build powerful generally intelligent AI systems.
Further reading: Salamon et al., “How Intelligible is Intelligence?”
This seems obviously wrong to me. Humans are actually generally pretty shit at things we didn’t have to do in the environment of evolutionary adaptedness. We’re impressed with our accomplishments in those areas because they’re hard for us. I mean, I’m relatively good at mental math, but if you ask me to multiply two two-digit numbers, it takes me 30 seconds before I’m sure of my answer. See also this post by Robin Hanson.
I’m a little surprised to see this make it into the list of key background assumptions behind AI risk. Human-like AI would still be a big deal even if it required painstakingly implementing a bunch of special-purpose mental modules that evolution just happens to have endowed us with. But I suppose my skepticism on this point is a major reason I don’t worry about superhuman AI coming about all of a sudden, giving us no time to prepare or react.