Stephen Hawking and Elon Musk Bring up Unprecedented Concerns About AI

Luminaries like Elon Musk and Stephen Hawking have earned worldwide acclaim for their contributions to society. So why did a Washington, DC-based think tank – the Information Technology and Innovation Foundation (ITIF) – take to criticizing them? Can a celebrated scientist like Stephen Hawking and a tech kingpin like Elon Musk be wrong about one of the most pursued concepts in science and technology – Artificial Intelligence, a.k.a. AI?

Theoretical physicist Stephen Hawking and Tesla founder Elon Musk have both voiced concern about the potential dangers posed by the development of AI, stirring a hysteria of sorts in the tech and scientific worlds about its future. While they are not the only influential tech leaders worried about AI, a substantial number of people believe that we have been immersed in science fiction for so long that we struggle to draw a proper line between science and science fiction. Given the way AI actually works right now, an AI apocalypse is the last thing we should worry about.

What the Experts Think

Many experts have pushed back against the idea that AI spells danger for us. Dr. Akli Adjaoute, founder and CEO of Brighterion, has spent over 15 years developing AI technologies. He finds these concerns over the top because, unlike humans, computers lack imagination.

A computer doesn’t really have cognitive capabilities, and Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, feels the same way. Even Eric Schmidt, Executive Chairman of Google, says that while the rising concerns about AI are natural, they are misguided.

Reality and Fiction

Part of the overblown concern comes from the way we use the term AI in reality versus in fiction. No doubt we are fans of the super-intelligent, futuristic robots that have the ability to change their goals over time.

The way AI is being developed right now hardly matches what the movies show us. At this time the use of AI in industry is very limited, and these systems can hardly be considered self-aware beings. Looking at the way technology progresses, I have no doubt that we are doing extremely well in robotics, and innovation in the field is certainly at its peak, but it is still far from the point where robots spontaneously become self-aware.

Software just does not work this way. It is possible that hundreds of years from now, technology may advance to a breakthrough in AI that can change its goals over time, and we may then be threatened by an evil killer robot. But that is a remote possibility, and it should not drive us to the point where we stop developing AI altogether.

Instead of being distracted by obscure thoughts of evil AI, we should worry more about how these machines pose a huge challenge to labor. That is a conversation with a far better foundation, and one whose concerns deeply matter.

Andrew Ng, machine learning researcher and Chief Scientist at Baidu, aptly put these concerns to rest when he said, “I don’t work on preventing artificial intelligence from going evil for the same reason I don’t work on solving the problem of overpopulation on the planet of Mars.”

Today’s AI

Two of the most powerful AI systems in the world are IBM’s Watson and Deep Blue. They do not mimic the human brain; at their core, these AI systems are just working on a computational problem. Deep Blue owes its fame to defeating chess champion Garry Kasparov in a six-game match. The way AI works right now, Deep Blue can play another match only if we push a button, and when Watson competed on Jeopardy!, it didn’t even know it had won.

Looking at what AI is today and the pace at which it is being developed, it will take hundreds of generations of computers and humans to bring technology to the point where AI might have a higher sense of self-awareness – a possibility we can only contemplate, and one that may never happen. What Elon Musk and Stephen Hawking have unquestionably done is give encouragement, and a certain amount of relief, to an increasingly persistent neo-Luddite impulse in today’s world.
