How real is “general intelligence”?

Nate Soares has a post up on the MIRI blog about “the claims that are in the background whenever I assert that our mission is of critical importance.” Right now, I want to just briefly comment on the first one:

Claim #1: Humans have a very general ability to solve problems and achieve goals across diverse domains.

We call this ability “intelligence,” or “general intelligence.” This isn’t a formal definition — if we knew exactly what general intelligence was, we’d be better able to program it into a computer — but we do think that there’s a real phenomenon of general intelligence that we cannot yet replicate in code.

Alternative view: There is no such thing as general intelligence. Instead, humans have a collection of disparate special-purpose modules. Computers will keep getting better at narrowly defined tasks such as chess or driving, but at no point will they acquire “generality” and become significantly more useful, because there is no generality to acquire. (Robin Hanson has argued for versions of this position.)

Short response: I find the “disparate modules” hypothesis implausible in light of how readily humans can gain mastery in domains that are utterly foreign to our ancestors. That’s not to say that general intelligence is some irreducible occult property; it presumably comprises a number of different cognitive faculties and the interactions between them. The whole, however, has the effect of making humans much more cognitively versatile and adaptable than (say) chimpanzees.

Why this claim matters: Humans have achieved a dominant position over other species not by being stronger or more agile, but by being more intelligent. If some key part of this general intelligence was able to evolve in the few million years since our common ancestor with chimpanzees lived, this suggests there may exist a relatively short list of key insights that would allow human engineers to build powerful generally intelligent AI systems.

Further reading: Salamon et al., “How Intelligible is Intelligence?”

This seems obviously wrong to me. Humans are actually generally pretty shit at things we didn’t have to do in the environment of evolutionary adaptedness; we’re impressed with our accomplishments in those areas precisely because they’re hard for us. I’m relatively good at mental math, for instance, but if you ask me to multiply two two-digit numbers, it takes me a good 30 seconds before I’m sure of my answer. See also this post by Robin Hanson.
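
To make the contrast concrete, here’s a throwaway sketch of my own (nothing from Nate’s post; the class name and operands are arbitrary): the same kind of two-digit multiplication that costs me 30 seconds is, for a computer, a single instruction that finishes in nanoseconds.

    // Toy illustration only: time one two-digit multiplication.
    // 47 and 86 are arbitrary operands; the point is just the gap between
    // nanoseconds here and the ~30 seconds it takes me in my head.
    public class MentalMath {
        public static void main(String[] args) {
            int a = 47, b = 86;
            long start = System.nanoTime();
            int product = a * b;
            long elapsedNs = System.nanoTime() - start;
            System.out.println(a + " x " + b + " = " + product
                    + " (about " + elapsedNs + " ns, versus ~30 s by hand)");
        }
    }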

I’m a little surprised to see this making it into the list of key background assumptions behind AI risk. Human-like AI would still be a big deal even if it required painstakingly implementing a bunch of special-purpose mental modules that evolution just happens to have endowed us with. But I suppose my skepticism on this point is a major reason that I don’t worry about superhuman AI coming about all of a sudden, giving us no time to prepare or react.

4 thoughts on “How real is ‘general intelligence’?”

  1. It doesn’t have to be general-purpose to disrupt human civilisation. It just has to be good at programming, and perhaps one or two other things: selling its code online, operating military drones, persuasion, whatever.

  2. One way this argument could get stuck is if I say “Humans are good at software engineering, even though it wasn’t in our ancestral environment” and you respond “But humans are terrible at software engineering!” If on an objective scale of programming ability we agree that humans are a ‘3’ but I’m impressed we aren’t a 1 (so I call 3 ‘good’) whereas you’re disappointed we aren’t an 8 (so you call 3 ‘bad’), we could end up disagreeing without actually disagreeing.

    Even if we can agree on what skill level to be impressed by, it might not be super productive to have a debate that just consists of one side listing everything humans are good at and the other side listing everything humans are bad at.

    Here are some lines of evidence that I think would undermine the ‘intelligence is intelligible’ hypothesis more directly:

    1. Evidence that no large chunks of humans’ proficiency at software engineering can be traced back to a short list of cohesive cognitive skills (e.g., aspects of our working memory, long-term memory, ability to generate hypotheses, ability to make conditional predictions, ability to make long-term plans, ability to relate different levels of abstraction…). E.g., there is no reasonably natural property of human thought that accounts for 5% or 15% of our programming ability, though maybe there’s a property that accounts for 0.05% or 0.005% of our programming ability.

    2. Evidence that the cognitive skills that make us better at software engineering don’t help us in other domains, like particle physics, structural engineering, climatology, cellular biology, and playing chess. Deep Blue can be our prototype of narrow intelligence: Deep Blue got good at chess in a way that didn’t help it win at checkers or poker. In contrast, whatever modules made us better than chimpanzees at particle physics seem at first glance to heavily overlap with whatever modules made us better than chimpanzees at climatology; I think this is the basic claim Nate is making.

    Another relevant line of evidence, considering how MIRI uses the intelligibility hypothesis, would be ‘evidence that humans only actually are better at science than chimpanzees because we have a better way of storing and transmitting knowledge (language), not because we’re any better at efficiently noticing, interpreting, drawing inferences from, basing plans on, or otherwise reasoning with knowledge items.’ This would leave open the possibility that chimpanzees and humans *share* a relatively small set of adaptations that make us much more general than e.g. mice (or, failing that, termites), but it would undermine a key line of evidence for that intelligibility claim: the short amount of time it took humans to diverge from chimpanzees.

    • Rob, I suspect something like the story you propose in your last paragraph is basically right.

      It might also help to say that, as a general rule, insofar as a task has little to do with anything humans had to do in the environment of evolutionary adaptedness, I expect it to be relatively easy to create software with superhuman performance in that area. But you have to be careful about how you categorize tasks in that regard. My laptop is much better than I am at turning Java into bytecode, but rather deficient at understanding my manager’s intentions for the project I’m working on.
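
      Just to pin that example down (purely illustrative on my part, and the file name is made up): the “easy” half of the comparison is a completely mechanical pipeline. Compiling the file below with javac Hello.java and disassembling it with javap -c Hello never requires the machine to know why the program exists, which is the part it’s deficient at.

          // Purely illustrative: the sort of task my laptop handles without any
          // grasp of intent. Compile with: javac Hello.java
          // Inspect the resulting bytecode with: javap -c Hello
          public class Hello {
              public static void main(String[] args) {
                  System.out.println("Hello, bytecode");
              }
          }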

      • That heuristic suggests it should be much easier to build de novo software engineers than to build de novo social engineers. Which supports MIRI’s view that useful self-rewriting software and highly capable scientific reasoners are likely to be much easier to build than ‘friendly’ software.

        On the other hand, it suggests that de novo AI will have a much harder time with plans that require it to predict or manipulate human behavior (e.g., ‘AI talks its way out of a box’ scenarios) than with plans that require it to make progress with a physical science/technology. Which is a lot less scary: an AI system that can’t model its operators well will have a hard time deceiving them about its goals or capabilities.

        Then again, an AI system may not need to be particularly sneaky if we aren’t carefully observing and analyzing its computations; it may just need to perform a large enough number of diverse tasks to make it infeasible for us to pinpoint the dangerous action. Many other actions we’d call “deceptive” may not require a full or detailed model of human psychology; they may exploit small local bugs in our thinking.

        Your view that humans’ main advantage is our huge amount of knowledge also suggests that an AI system with access to the Internet could become decisively more powerful than humans just by having better memory and faster reasoning, without any need for qualitative reasoning improvements. If an accumulated-knowledge view also predicts superintelligence, then you may be right that Nate’s first claim is less essential than the others.
