I was watching something on Discovery about "Ten Ways the World Will End". I only caught part of it, but number six or seven had us threatened by super-intelligent machines. I think they had Stephen Hawking saying something (probably related to his comments about machines dominating our world) and someone I didn't recognize waxing on about how computers could become hundreds of times more intelligent than humans. Stephen's not the only one to raise this spectre; Bill Joy and others have expressed similar concerns.
Yeah, yeah. We're a long, long way from that. I'm not saying there's nothing to worry about: the use of computers in weaponry will surely make warfare far more dangerous, but those are threats from other humans wielding advanced technology, not from our machines turning against us. The danger there isn't really intelligence at all; it's just clever and fast algorithms.
Question: would you rather be stuck in a cage with an angry lion or an angry man? If you'd rather have the human to deal with, would you rather deal with a stupid person or someone quite bright? You might answer that you'll take the less intelligent choice in hope of "out-smarting" them, but let's change the parameters a bit: now you have a choice of the two people, one quite bright and one not so bright, but you don't know whether they are angry or mean you any harm. Which would you choose then? I'd bet most of us would prefer to take our chances with the brighter person: they might be less likely to be a threat to us, and if they are, they might be willing to listen to reason. That's my feeling about intelligent machines: if they ever were "hundreds of times more intelligent", I doubt we'd have anything to fear.
However, that doesn't mean Bill Joy and the others are wrong. Self-replicating weapons, or even self-replicating devices that accidentally turn into threats, are extremely dangerous, but they are dangerous precisely because of their stupidity: a weapon that simply replicates, seeks, and destroys could be extraordinarily effective with nearly no "intelligence". That's dangerous; a reasoning robot with an IQ of several thousand probably isn't.
If you found something useful today, please consider a small donation.
Got something to add? Send me email.
More Articles by Anthony Lawrence © 2011-05-02 Anthony Lawrence
Write a paper promising salvation, make it a 'structured' something or a 'virtual' something, or 'abstract', 'distributed' or 'higher-order' or 'applicative' and you can almost be certain of having started a new cult. (Edsger W. Dijkstra)