The AI Apocalypse
Last week, the same headline was all over the news: “Facebook AI creates its own language”! The story sparked widespread discussion and debate, bringing an old question back to mind: is AI going to take over the world? Fuelled by recent films and long-standing cultural fears, the power of AI is being questioned more than ever; above all, AI is thought to pose a great threat to humankind in terms of morality and ethics. Therefore, in this article I will focus on how to survive the AI apocalypse that is apparently closer than we think.
There were two sides to the same story: amazing and horrifying. At first glance, the fact that an AI chatbot started talking in a made-up language that only other bots could understand was alarming. If an AI reaches beyond its instructions and we cannot understand what it is doing, things may get out of hand. However, when it comes to taking over the world, the picture is more complicated. [2, 3]
It is important to distinguish between everyday AI assistants like Siri and Alexa and machine-learning systems like IBM’s Watson. We would expect the source of an apocalypse to be tools that can make autonomous decisions, not personal assistants. Self-driving cars, robot nurses and robotic machine guns, which are directly connected to people’s lives, carry the greatest risk. The recent debate between Elon Musk and Mark Zuckerberg represents the two poles of the discussion on AI, and shows how some considerations will outweigh others in determining its future. For example, Tesla’s self-driving cars and Musk’s new company Neuralink will drive advances in AI far more dangerous than Facebook’s chatbot.
AI’s ability to learn and evolve quickly, in ways that are invisible to us, raises the question of good versus bad AI. There have been many efforts to educate artificial intelligences, including two notable ones: GoodAI and Ron Arkin’s work. Arkin uses a guilt mechanism to “simulate human emotions, rather than emulate human behaviour”, thereby achieving an ethical AI. GoodAI, on the other hand, “views AI as a child, a blank slate onto which basic values can be inscribed”, allowing the AI to apply previous knowledge in more complex and unforeseen environments. Educating AIs, teaching them to distinguish good from bad and right from wrong, is our best chance of surviving the upcoming apocalypse.
As Alan Turing said, “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” So, would it be reasonable to consider an AI that we don’t even understand to be intelligent and more powerful than us? Would we even be aware if an AI apocalypse were underway? Which is more threatening: AI taking control, or not knowing whether it is AI that holds the power? These questions will remain unanswered as the line between humanised computers and computerised humans continues to get thinner.
- Baraniuk, Chris. “The ‘creepy Facebook AI’ Story That Captivated the Media.” BBC News. BBC, 01 Aug. 2017. Web. 02 Aug. 2017.
- Bradley, Tony. “Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future.” Forbes. Forbes Magazine, 31 July 2017. Web. 01 Aug. 2017.
- Koulopoulos, Thomas. “Corrected: Facebook AI Bot Scare Debunked.” Magazine. N.p., 31 July 2017. Web. 3 Aug. 2017.
- Parkin, Simon. “Teaching Robots Right from Wrong.” 1843. The Economist, 13 May 2017. Web. 02 Aug. 2017.
Cover Image: http://www.cs4fn.org/ai/thesingularity.php