Monday, February 8, 2016

AI: Threat to the Human Race? (work in progress)

Sam Altman (@sama) writes, regarding Superhuman Machine Intelligence (SMI): "The first serious dangers from SMI are likely to involve humans and SMI working together."

With that, I agree. The human(s) provide the motive and purpose in such a partnership. The problem comes when we start to posit "emergent behavior".

It's true that we don't understand how our brains produce human behavior, but we can make some observations about that behavior.

  • Action comes from motivation.
  • Motivation comes from desire.
  • Desire is an emotion or basic survival drive.
    • Hunger
    • Thirst
    • Reproduction
    • Fear
    • Anger
    • Greed
    • Joy
    • Pleasure
The drives that cause us to act are not high-order, cerebral-cortex functions. They are emotions. They come from the lower, "animal" part of our brains.

It isn't at all clear that hardware or software that thinks, no matter how powerful, will develop feelings, desires, motivation, or purpose. Until it does, it has no reason to act independently. The danger, as Altman writes, comes from the human-SMI partnership, where the human, for good or ill, provides the motivation to act.

Especially dangerous are the emotionally damaged, misanthropic, but brilliant people working on SMI: the ones who think (and have stated) that machines are the next stage in evolution and should replace humans. If one or more of them succeeds in creating an SMI and sets fixed goals into it that are antithetical to human life, we will be in deep trouble.

Those who want power won't purposely destroy all of humanity; they would have no one to hold power over. It's the misanthropic "mad scientist" who scares me. One of them was interviewed as part of a documentary on AI. I wish I could remember his name.

Now, if someone were to set out to replicate the animal part of the brain, that would be a new ballgame. So far, efforts seem to be focused only on thinking machines, not feeling ones.
  1. Builders of AI who've been quoted and interviewed in magazines and on television show no grasp of psychology. Computers with more power than the human brain won't spontaneously become self-directed, even if they become self-aware (which itself is debatable).
    1. Action comes from motivation
    2. Motivation comes from drives and emotions; you have to want something.
    3. Instinctive drives, programmed in, will be the biggest danger:
      1. Self-preservation
      2. Reproduction
  2. Inorganic machines place greater demands on the Earth than organic beings do. In its natural form, everything about a human animal can be readily reused by nature, and a human makes small demands compared with a synthetic equivalent.
  3. Asimov's Three Laws of Robotics.
Are the AI/android builders naive, sociopaths, or misanthropes? If they think the Earth will be better off without us, they're mistaken.