I’m afraid of this: http://www.nationaldefensemagazine.org/blog/Lists/Posts/Post.aspx?ID=1101
It isn’t clear to me from this article what stage DARPA is at in its pursuit of building an artificial brain. But the end goal is clear: autonomous, learning artificial intelligence. And the first use it will be put to is likely the pursuit, capture and killing of human beings.
Killer robots have been the stuff of sci-fi since the genre appeared. Strangely, while the fantasy scenario of the dead rising from the grave, zombies, has huge traction in popular culture, the entirely possible scenario of robots turning on their masters is comparatively unexplored.
This is worrisome because it strikes me we’re not having an important conversation. Or rather, the public isn’t. Credible experts already know the potential threat is real: http://www.dailymail.co.uk/news/article-2238152/Cambridge-University-open-Terminator-centre-study-threat-humans-artificial-intelligence.html
Ray Kurzweil has been writing on this topic for years. In his book The Singularity Is Near, he draws attention to a number of things, among them a clear, trackable progression in computing capability, as well as the point at which AI will leap from being a tool of humans to being our superior in every intellectual and physical sense.
The way this all works is like this: Humans strive for decades to make a robot capable of thinking like a biological brain. This process may or may not be nearing completion. What is clear is that our brains are subject to physical laws, composed of atoms and tangible components. In other words, there is nothing magical about our ability to think – it’s a product of materials arranged in certain configurations.
That doesn’t mean it will be easy to build a brain. But what Kurzweil points to, and DARPA’s research indicates, is that sooner or later, there’s a good chance we’ll get there.
OK, so we have a computer that can think like a human. One interesting philosophical question is whether it has a sense of self – an identity. And how would we prove that it did? My guess is we can’t, any more than we can prove the person sitting across from us has a sense of self. Our ‘selves’ are trapped in our skulls. All we can present as evidence to the outside world is our behavior. And by that test, this imagined robot will be a being in the same sense we are.
This raises a million questions. Can a robot have rights? Does it have emotions? What is the difference between this robot and us?
All interesting stuff, I think, but secondary to the danger I’m writing about today.
This robot, in addition to appearing to think like a human, will have certain advantages. For example, it could be patched directly into the Internet. Its knowledge could span all human knowledge and do it in an instant. Its mathematical abilities would be on a scale the most talented savant couldn’t begin to emulate.
This mighty brain wouldn’t be the final version of itself, either. It would help build its own superior offspring which, in turn, would build their own superior next generation – and on and on.
Its physical capabilities would improve by leaps and bounds, too. Even today it could be given superhuman strength and run-of-the-mill technologies such as night or infrared vision. Put the brain in a drone, and it has flight and missiles. Place it in a tank, and it has heavy armour and crushing power. And again, its advanced brain will bend itself to the task of creating ever more potent weapons and stronger metals.
In short, an artificial race of super beings could exist side by side with us: smarter, faster, stronger, tougher.
The questions are: will they be content to serve us, and will we design these technologies with failsafe controls that keep them tethered to our wills? It presents an interesting moral conundrum. If we ever conclude these brains are autonomous beings in the way humans are, we’ll be left with two unpleasant options: maintain our control and keep them as slaves, or release our control and immediately become second-class citizens.
This all sounds like sci-fi foolishness, of course. Fears of nuclear missiles or even pandemics are more credible. But nuclear missiles are actually a good comparison. Prior to the invention of the atom bomb, the idea of planet-wide annihilation was preposterous. And the scientists who explored the theories that led to splitting the atom were engaged in science, not war-mongering. Yet their desire to push human development led to ICBMs.
Science is working on AI, yet the safety of AI is contingent on the genie never getting out of the bottle. If the genie gets out, all bets are off. The Terminator movies, far from being pulp fiction fantasy, may be a best-case scenario. A machine intelligence would be expected to exponentially increase its power every decade.
Forget warfare between man and machine. We’d be facing extermination. In addition to wielding potent weapons, a computer could deal a crushing blow moments into an uprising without resorting to Skynet's nuclear strike in the James Cameron film: Our hi-tech society relies on computers for our finances, energy and communications; seize control of these, and its enemy, us, would instantly fall into chaos. We'd be cavemen using clubs to fight gods.
None of this is a foregone conclusion. In fact, it’s an unlikely outcome. But when the stakes are this high, unlikely consequences are still worth considering. DARPA wants to build a titan. Let’s hope they also build a straitjacket strong enough to contain it.