Sunday, September 20, 2015

Stephen Hawking Thinks The Robots Are Going To Kill Us All

And that is certainly alarming. But what exactly should we be alarmed about: artificial intelligence, or Hawking's state of mind?

I've already written about how Hawking has illustrated Bollinger's Axiom on another topic: a few years ago he declared that philosophy was dead. Now he's illustrating it again with his Terminator-Matrix-type fantasies.

Let's look at some other cases. Remember Linus Pauling? He's one of only four people ever to have won two Nobel Prizes, and the only one of those four who didn't share either prize with anyone else. He was, like Hawking, undeniably, extraordinarily brilliant, and yet his greatest effect in the short term -- still in effect now, 21 years after his death -- may well be due to some nonsense which he energetically plugged: he urged people with no serious health problems to take massive amounts of vitamins. In real life, using real science, doing so, as Sheldon explained to Penny on "The Big Bang Theory," only produces "very expensive urine." Pauling, with no scientific justification and no corroboration from any serious physicians or biologists, said that taking dozens or hundreds of times the recommended daily dose of Vitamin C prevented colds, and that massive doses of C and other vitamins also had other health benefits, such as the prevention of cancer.

Rudyard Kipling was sort of smart in some ways, I suppose -- they gave him a Nobel -- but he insisted that

"East is East, and West is West, and never the twain shall meet,"

while the meeting was well underway all around him.

Hawking, Pauling, Kipling -- three Nobel Prizes among them (none of them Hawking's, as it happens), many good ideas, some bad ones.

Hey, all of their surnames end with -ing. Maybe we just need to be wary of eccentric statements made by brilliant men whose last names end in -ing.

Eh? Huh? See what I did there? I'm very bright, and I just suggested a perfectly cuckoo idea. Of course I don't think that surnames ending in -ing are any cause for alarm; I only pretended to think so in order to illustrate what it would be like if I were to commit a blunder of the kind Bollinger's Axiom describes.

And I'm sure I do make such blunders fairly frequently without realizing it, what with my being human and all, not to mention being autistic and dealing with a 99% neurologically typical general population. Hopefully my particularly bad ideas aren't very influential right now, because I'm a nobody, and people will tend to judge my ideas more or less objectively, appreciating the good ones and rejecting the ridiculous ones. But if I were to win a Nobel or two, then, if Hawking, Pauling and Kipling are any indication, people might suddenly lose all of their critical faculties when it comes to my every utterance -- despite my own warnings not to do that with me or anyone else -- and simply assume that everything I say is pure gold. That would be a disaster in my case; it's a disaster in Hawking's case; it's always a disaster when critical judgment is suspended in deference to authority, or for any other reason.

Anyway, all I came here to say is: fears of AI are ridiculous, even if Hawking suffers from them. AI doesn't even exist yet -- computers trouncing humans at chess isn't AI; it's just a combination of math and electronics. Cute little gadgets that vacuum the floor and self-driving cars don't qualify either -- and there's no rational reason to believe that, if and when AI ever is created, it'll go all Terminator-Matrix on us. One thing which IS dangerous is the neo-Luddite mentality, and it's extremely ironic that this mentality is currently being fed by someone who is able to communicate with us and go about his daily life thanks to some pretty sophisticated technology.

Sir Stephen, some of the people who are heeding your call to be afraid -- to be very afraid -- of the robots are themselves very intelligent, again illustrating Bollinger's Axiom. Some of the simpler folks heeding your call to panic think YOU'RE a robot.
