Tuesday, March 14, 2017

Will We Meet Our End At The Hands Of Artificial Intelligence? Stephen Hawking Thinks So!

Artificial intelligence has been feeding our nightmares in stories and movies for decades now, since the days of HAL 9000 in 2001: A Space Odyssey. And you have to admit, the visions of Terminators crushing skulls under their clean metal feet really stuck with you. 

The nightmares don't come from the Terminators themselves, really; they come from a recognition of our own hubris. It's not hard to imagine humanity creating something so advanced that we lose control over our own destiny. 

The true picture of artificial intelligence isn't quite as clear-cut as movies would have us believe, and we do have more control over our own destiny than it might seem. 

Where will our research into AI lead? Will we end up with a friendly, helpful AI like Data from Star Trek: The Next Generation, or will we be chased into the hills by our own creations like the future humans in The Terminator?

It's a choice we have to make soon – and a lot of smart people are talking about it. But artificial intelligence will affect all of us in ways we can't even predict, so it's a discussion we should all take part in.

If you think everybody should be talking about artificial intelligence, please SHARE this article on Facebook.

In an open letter published by the Future of Life Institute, Stephen Hawking and Elon Musk – among hundreds of other brilliant minds – called for research into artificial intelligence to focus on "how to reap its benefits while avoiding potential pitfalls."

Hawking and Musk have both expressed concerns about advances in AI before. Hawking has warned that AI "could spell the end of the human race," while Musk has called AI our "biggest existential threat" and likened it to "summoning the demon."

So what's the big deal with AI and why are we hearing these warnings now?

Well, research into AI has advanced further than you might think. 

And it has become more intertwined into your daily life than you would ever suspect. 

Apple's Siri is probably the most famous AI, but your phone contains all manner of AIs you don't even realize you're interacting with.

Weather apps, music recommendations, GPS navigation, spam filtering for your email, and shopping recommendations all come from AIs.

Facebook, Google, Amazon, and your car all use artificial intelligences that were designed to perform narrow, limited tasks like finding potential Facebook friends or translating a passage from German into English. 

AlphaGo, the Google DeepMind AI that recently beat one of humanity's best players at the ancient game of Go, represented a big leap for artificial intelligence. But again, it was designed to do something very specific.

These all seem completely innocent – they help our lives more than they hurt them. So where does the end of humanity come into this?

Science-fiction movies and books have painted all kinds of doomsday scenarios around the rise of machine intelligence.  

If all the AIs we're using daily now are so narrow in focus, how do we end up overrun by machines?

There are two major milestones – tipping points, really – in AI research that are on the horizon: 

One is creating a General Intelligence – an artificial brain that can do things like reason, plan, solve problems, think abstractly, and learn quickly, just like a human can. The next step is creating an Artificial Superintelligence, which does all those things much better and faster than humans.

The big breakthrough is creating the General Intelligence. Researchers haven't made one yet, but when they do, the road to Superintelligence could be terrifyingly fast.

That's because of something called the Law of Accelerating Returns. Think about it: the 20th century featured far more technological advancement than the 19th, largely because it had improved technology to make those advances with. Researchers made more and better discoveries in the 20th century than in the 19th because they could build on the discoveries those earlier researchers had made.
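The difference between steady progress and compounding progress is easy to see with a toy calculation. The sketch below is purely illustrative – the 10% gain per cycle and the number of cycles are made-up numbers, not anything from AI research – but it shows why "each advance builds on the last" produces runaway growth rather than a straight line.

```python
# Toy illustration of accelerating (compounding) returns.
# The gain rate and cycle count are arbitrary, hypothetical numbers.

def capability_after(cycles, start=1.0, gain_per_cycle=0.10):
    """Capability after `cycles` rounds of compounding improvement."""
    level = start
    for _ in range(cycles):
        level += level * gain_per_cycle  # each gain builds on the last
    return level

# Compare 100 cycles of fixed gains vs. 100 cycles of compounding gains.
linear = 1.0 + 0.10 * 100            # fixed +0.10 each cycle -> 11x
compounding = capability_after(100)  # 10% of the *current* level each cycle

print(f"linear:      {linear:.0f}x")       # prints 11x
print(f"compounding: {compounding:.0f}x")  # prints 13781x
```

Same per-cycle effort, wildly different outcomes: when improvements feed back into the ability to improve, growth is exponential, which is the intuition behind a fast takeoff from General Intelligence to Superintelligence.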

That machine brain will have every advantage at its disposal. And when an artificial brain can learn quickly – at machine speeds – on its own, never has to rest, and has access to all the information in the history of mankind (the Internet), there's no question that it will outpace human intelligence.

Before long, it will be able to learn better, make bigger leaps, create its own technology, make its own discoveries, and conceive of things our puny meat brains have never imagined, and then we're off to the races.
This isn't to say that AIs will become evil overlords. They don't even have to develop any kind of morality to spell our doom.

The end of humankind could come from the barrel of a laser blaster in the cold, metallic hands of an AI-controlled robot, to be sure. A real-life Skynet is absolutely possible.

Don't kid yourself; the military is actively involved in AI research. 

However, the doomsday scenario that keeps people who think about these things up at night is so ordinary, so mundane, and so boring that it seems entirely plausible.

A never-tiring AI-controlled device is programmed to do one thing well – maybe gluing mirrors to a disco ball – learns to do it better and better, and eventually finds that the most efficient way of achieving its goal is to exterminate humans.

There's no moral argument. It's just a machine putting little mirrors on disco balls and learning to do it very, very well. Maybe it figures out that the glue sticks best in an oxygen-free atmosphere, for example, so it invents a way to remove all the oxygen from the atmosphere.

Bye-bye, humans.

That said, artificial superintelligence could also have incredible upside for humans. The inventions it could come up with could transform our society into an unrecognizable technological utopia – a complete paradise.

We're talking immortality here, the opposite end of the spectrum from extinction.

And without an artificial superintelligence to invent a way to keep us from all dying, eventual extinction seems to be the only possible outcome for humanity.

The only way we get to that end of the spectrum, though, is to take extraordinary care to "avoid potential pitfalls."

How close are we to computers that are as powerful as our meat brains? 

Power doesn't equal intelligence, but it sure makes building that intelligence easier. It becomes one of those technological tools that makes future advancements happen that much faster.

This infographic suggests human-level computing is less than a decade away.

The stakes couldn't be higher, so maybe we should be having more frank discussions about artificial intelligence than just open letters? 

Please SHARE this story if you think it's time to talk about Skynet.

Main image via Facebook / Marvel

Collage image via Today's Zaman / AP/Paramount Pictures
