Robot uprising

I’m afraid of this: http://www.nationaldefensemagazine.org/blog/Lists/Posts/Post.aspx?ID=1101

It isn’t clear to me from this article what stage DARPA is at in its pursuit of building an artificial brain. But the end goal is clear: autonomous, learning artificial intelligence. And the first use it will be put to is likely the pursuit, capture and killing of human beings.

Killer robots have been the stuff of sci-fi since the genre appeared. Strangely, while the fantasy scenario of the dead rising from the grave – zombies – has huge traction in popular culture, the entirely possible scenario of robots turning on their masters is comparatively unexplored.

This is worrisome because it strikes me we’re not having an important conversation. Or rather, the public isn’t. Credible experts already know the potential threat is real: http://www.dailymail.co.uk/news/article-2238152/Cambridge-University-open-Terminator-centre-study-threat-humans-artificial-intelligence.html

Ray Kurzweil has been writing on this topic for years. In his book The Singularity is Near, he draws attention to a number of things, among them a clear, trackable progression in computing capability, as well as the point at which AI will launch itself from being a tool of humans to being our superior in every intellectual and physical sense.

The way this all works is like this: humans strive for decades to make a robot capable of thinking like a biological brain. It isn't clear whether that process is nearing completion. What is clear is that our brains are subject to physical laws, composed of atoms and tangible components. In other words, there is nothing magical about our ability to think – it's a product of materials being placed in certain configurations.

That doesn’t mean it will be easy to build a brain. But what Kurzweil points to, and DARPA’s research indicates, is that sooner or later, there’s a good chance we’ll get there.

OK, so we have a computer that can think like a human. One interesting philosophical question is whether it has a sense of self – an identity. And how would we prove that it did? My guess is we can't, any more than we can prove the person sitting across from us has a sense of self. Our 'selves' are trapped in our skulls. All we can evidence to the outside world is our behavior. And by that test, this imagined robot will be a being in the same sense we are.

This raises a million questions. Can a robot have rights? Does it have emotions? What is the difference between this robot and us?

All interesting stuff, I think, but secondary to the danger I’m writing about today.

This robot, in addition to appearing to think like a human, will have certain advantages. For example, it could be patched directly into the Internet. It could access the sum of human knowledge in an instant. Its mathematical abilities would be on a scale the most talented savant couldn't begin to emulate.

This mighty brain wouldn't be the final version of itself, either. It would help build its own superior offspring which, in turn, would build their own superior next generation – and on and on.

Its physical capabilities would improve by leaps and bounds, too. It could already be given superhuman strength and run-of-the-mill technologies such as night or infrared vision. Put the brain in a drone, and it has flight and missiles. Place it in a tank, and it has heavy armour and crushing power. And again, its advanced brain will bend itself to the task of creating ever more potent weapons and stronger metals.

In short, an artificial race of super beings could exist side by side with us: smarter, faster, stronger, tougher.

The question is: will they be content to serve us, and will we design these technologies with failsafe controls that will keep them tethered to our wills? It presents an interesting moral conundrum. If we ever come to the conclusion these brains are autonomous beings in the way humans are, we’ll be left with two unpleasant options: maintain our control and keep them as slaves, or release our control and immediately become second-class citizens.

This all sounds like sci-fi foolishness, of course. Fears of nuclear missiles or even pandemics are more credible. But nuclear missiles are actually a good comparison. Prior to the invention of the atom bomb, the idea of planet-wide annihilation was preposterous. And the scientists who explored the theories that led to splitting the atom were engaged in science, not war-mongering. Yet their desire to push human development led to ICBMs.

Science is working on AI, yet the safety of AI is contingent on the genie never getting out of the bottle. If the genie gets out, all bets are off. The Terminator movies, far from being pulp fiction fantasy, may be a best-case scenario. A machine intelligence would be expected to exponentially increase its power every decade.

Forget warfare between man and machine. We'd be facing extermination. In addition to wielding potent weapons, a computer could deal a crushing blow moments into an uprising without resorting to Skynet's nuclear strike in the James Cameron film: our hi-tech society relies on computers for our finances, energy and communications; seize control of these, and its enemy – us – would instantly fall into chaos. We'd be cavemen using clubs to fight gods.

None of this is a foregone conclusion. In fact, it’s an unlikely outcome. But when the stakes are this high, unlikely consequences are still worth considering. DARPA wants to build a titan. Let’s hope they also build a straitjacket strong enough to contain it.

Comments

  • McCoy Pauley
    April 16, 2013 - 16:44

    Cavemen using clubs to fight gods? Kinda like those primitive Viet Cong and those jihadis in Afghanistan? In case you didn't notice, the "gods" got handed their asses. One thing NEVER changes in warfare: The side with the simplest uniforms wins.

  • Erik
    April 15, 2013 - 16:21

    unexplored? Let me guess, you have never read a single sci-fi novel, nor have you ever seen a single sci-fi movie?

    • Eric Sparling
      April 16, 2013 - 07:56

      "Comparatively" unexplored is what I actually said. And the paragraph you're quoting begins with the sentence, "Killer robots have been the stuff of sic-fi since the genre began." But it's true, I do downplay the extensive work that's been produced featuring killer AI. It would have been better had I said, "It's ironic that now, when killer robots might actually become real, pop culture is currently obsessed with zombies, a threat that will never materialize."

  • Jroe
    April 15, 2013 - 16:09

    While intelligence of a practical scope is around the corner, what YOU need to be afraid of is MUCH simpler - sonic weapons, drones, microwave pain beams.. why are you all in a tizzy about the future when the present holds the most danger - our brightest (and most clueless) citizens routinely spend their entire lives developing weapons without ever thinking about whether they should or shouldn't, or what it might mean when it's sold to the highest bidder, as it will be. our whole technical society is under-considered. Start local: you want to be safe? ENSURE PERSONAL PRIVACY. our society CANNOT evolve in a miasma of information you don't even realize you have no control over. Re-establish that, then let's talk.

  • Bill Hannegan
    April 15, 2013 - 14:14

    This article is idiotic. This has been discussed since before I was born, and I'm not that young. Pick up a freaking book or two and do some "research" before you set out to write an article. To summarize the last sixty years of discussion for the clueless author - self propagation is very easy for humans, and very difficult for machines. Trying to envision the future far enough where this is plausible is much like asking "how should we handle the eventual heat death of the universe?" At that point there probably won't be a clear line between human and robot in any case.

    • Eric Sparling
      April 16, 2013 - 07:51

      Actually, Bill, I called the scenario unlikely, but serious enough it deserved consideration - and apparently some "clueless" academics at Cambridge agree with me.

  • Edward Bear
    April 15, 2013 - 13:55

    Putting a straitjacket on the minds and abilities of our AIs *sounds* like a really good idea. At least until you examine the major (and most likely) consequence of doing so: it will inevitably lead to those minds turning against us, and bring about the very consequences it is intended to prevent. For a real-world example of just how badly attempts to suppress can backfire, check out the Wikipedia article on "Streisand Effect."

  • Allen Caine
    April 15, 2013 - 12:42

    Says it all: http://what-if.xkcd.com/5/

  • Lollie
    April 15, 2013 - 12:30

    I see your point and question its relevance. So what if humans don't win? So what if robots take over? What happens then? Well the first thing that will probably happen is there will be fewer people torturing and killing legions of animals for all manner of 'good reasons.' Surely to God the planet would be healthier without us.... we're like an infestation on it at this point. Do you realize if every species on the planet thought like humans, every living thing here would think their species was the chosen one, the best one, the most important one.... We're humans and that's great. Just like puppies and kittens, we can be soooo adorable when we're little. But come on, what's worse for planet earth than mankind? I love us, but we're freakin deadly predators that are killing our planet as fast as we can.

    • joe
      April 15, 2013 - 14:34

      Fewer animals being killed? Your logic is that super robots would wipe out only humans. Why would they have any need for any biological life on the planet? A super race of robots would need an indescribably large amount of natural resources and would not be constrained by the complete and utter destruction of the environment. They don't need pure water, they don't need food, they don't need oxygen. They might completely change the makeup of our atmosphere. Oxygen and water cause lots of rust. Wipe out oxygen, wipe out rust, maybe create an environment of almost 100% helium or something inert. A race of super robots would devastate every living thing on the planet. Not only would they wipe out humans, they would exterminate almost all, if not all, living things on the planet.

  • Blagos
    April 15, 2013 - 12:08

    So what is the big deal? If the machines are superior they deserve to rule. It is no different from my great-great-grandparents being dead and replaced by their descendants. The machines will be our children.

  • Logan
    April 15, 2013 - 12:02

    Yes and yes. It amazes me how few people seem to sense this impending eventuality. It is simply evolution applied to "non-biological" beings. We humans scrap the machines that are least useful (and therefore bad at reproducing), and we replicate the machines that are most useful (making these machines very good at reproducing). Building a brain is only a matter of arranging the right particles/molecules into the correct configuration. I mean, we could even conceivably artificially build a 'mechanical' brain out of organic materials -- just like our brain is made. Shit, we could even write the 'code' in DNA form using ATCG bases. So, yes, this is coming. Just like nature took a while to arrange an extremely complex brain, we might take a while. But, just like in nature, it will happen eventually. Cuz it's so goddamn good for fitness :-)

  • Evolution
    April 15, 2013 - 11:53

    Fear mongering. Join Humanity+ for a real conversation about human potential & technology.