The risks of AI: Reality or Science Fiction?

In his guest lecture on the fundamentals of Artificial Intelligence, Craig Martell, Head of Science and Engineering at LinkedIn, pointed out that Artificial Intelligence is often presented in a misleading way. Scenarios in which machines become superintelligent and pose a serious risk to human beings seemed rather unlikely to him.

This blog post aims to unpack the discussion about the risks associated with AI. Starting with an introduction to the prevailing research opinion, it also gives an insight into the opposing perspective, represented by well-known figures like Elon Musk, who see a genuine risk in the development of superintelligence.

To create a better understanding of this opposing view, a framework of the most commonly cited risks associated with AI is then developed on the basis of Scherer (2015).

The prevailing research opinion

Analysing the current scientific research on the future of Artificial Intelligence, it must be admitted that the disillusioning opinion of Craig Martell represents what most AI experts think today. As Martell pointed out, AI is nothing more than a combination of programming and statistics applied to solve statistical problems. An interesting line of argument for why a threatening AI scenario is thus rather unlikely is provided by Rob Smith (2014), CEO of Pecabu, who gives five reasons why the dangers of superintelligence as propagated by Elon Musk are overblown. His first argument is similar to Craig Martell’s position: AI is basically a program for solving very specific, limited human tasks (Smith, 2014). This does not mean, however, that such programs compete with human beings, since they are not alive the way humans are. In his view, the opposing research camp overlooks that AI programs cannot have feelings or desires, nor can they show consciousness (Smith, 2014). Thus, even if they are highly capable at their specific tasks, they will never operate outside their boundaries without being programmed to do so. Moreover, many people assume AI will be a single program or entity, but this is unlikely to be the case: as Smith (2014) points out, AI will most likely be constructed out of multiple sub-systems, so even if one program gets out of control, it still depends on the others. Ultimately, like Martell, he points out that human beings are the ones in control of AI. We are the ones labeling the data, and we are responsible for how the system acts (Smith, 2014). Of course, humans could also create malicious AI, for instance through wrong labeling or statistical biases (intended or unintended), but as Smith (2014) notes, this is a fault of the people involved, not of the science itself.
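To make this point concrete, here is a minimal sketch in Python (my own illustration, not from Smith or Martell): a “narrow AI” is just a statistical model fit to human-labeled data, so mislabeling by humans directly degrades it. The dataset, model choice, and noise level are illustrative assumptions.

```python
# A minimal sketch of "AI as programming plus statistics": a simple
# classifier trained on a toy, synthetic dataset. The 30% label-flip
# rate is an arbitrary assumption used to simulate careless labeling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean labels: the model solves exactly one narrow statistical task.
clean_model = LogisticRegression().fit(X_train, y_train)
print("accuracy, clean labels:  ", clean_model.score(X_test, y_test))

# Simulate human labeling error: flip 30% of the training labels.
noisy_y = y_train.copy()
flip = rng.random(len(noisy_y)) < 0.3
noisy_y[flip] = 1 - noisy_y[flip]
noisy_model = LogisticRegression().fit(X_train, noisy_y)
print("accuracy, 30% mislabeled:", noisy_model.score(X_test, y_test))
```

The model never steps outside the single task it was fit for; the only thing that changes its behaviour is the data humans feed it, which is exactly Smith’s argument.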

The opposing research opinion

The most famous advocate of the opposing research view is surely Elon Musk. With statements like “AI is the biggest risk we face as a civilisation” he clearly influences the way AI is perceived by society (Titcomb, 2017). He supports the theory that artificial intelligence, understood as the ability of machines to perform human tasks, is developing exponentially (Sulleyman, 2017). He thus concludes that any regulation would need to be proactive, because by the time a threat is identified it will be too late to regulate (Titcomb, 2017). But what are these risks he is talking about, and how could they really threaten our civilisation? The next section presents a systematic framework for the risks associated with AI in order to better understand Elon Musk’s viewpoint.

[Image: risks associated with AI. Source: meritalk.com]

A framework of the risks associated with AI

What are the risks associated with artificial intelligence? A good framework in this context is provided by Matthew U. Scherer (2015) in the Harvard Journal of Law and Technology. As he points out, there are three “problematic characteristics” associated with AI. The first challenge is autonomy. The proportion and scope of complex tasks performed by machines will increase significantly in the coming years (Scherer, 2015). One research trend in this environment is to let AI systems make decisions that their creators cannot foresee. In his view, this poses a significant challenge, since humans then give away not only autonomy over a task but the ultimate authority over the decision itself (Scherer, 2015). In my view, this act of giving away autonomy is one of the central reasons why people fear Artificial Intelligence.

Second, he emphasizes that the level of control humans have over AI might be in danger. He agrees that in normal operation this risk is rather low. However, he points out that human mistakes or malfunctions can lead to humans losing control over the algorithm (Scherer, 2015). This threat is commonly raised by supporters of Musk’s viewpoint.

Thirdly, the R&D processes behind AI are difficult to control. He points out that nuclear weapons, for instance, can be easily located, so their development can be regulated. The development of AI, by contrast, can be diffused across the whole globe, which makes it very difficult to regulate and thus raises the level of perceived risk even further (Scherer, 2015).

Personal thoughts

In my view, the truth about the risks of Artificial Intelligence probably lies somewhere in the middle between the two described positions. Craig Martell is right that computers will not come alive just because statistical models get better. However, these models will be able to perform highly complex tasks. If their boundaries are not set properly, there is an inherent risk of the technology getting out of control. The highest risk in this context, in my view, is human error or malicious intent. Creating smart algorithms therefore always requires developing them responsibly and within regulated boundaries.

Elon Musk is thus not completely wrong: we have to think about regulation proactively. What are your thoughts?

Sources:

Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29, 353.

Smith, R. (2014). What Artificial Intelligence Is Not. TechCrunch. Accessed from https://techcrunch.com/2014/12/13/what-artificial-intelligence-is-not/ on 15th July 2018.

Sulleyman, A. (2017). AI is highly likely to destroy humans, Elon Musk warns. The Independent. Accessed from https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-artificial-intelligence-openai-neuralink-ai-warning-a8074821.html on 17th July 2018.

Titcomb, J. (2017). AI is the biggest risk we face as a civilisation, Elon Musk says. The Telegraph. Accessed from https://www.telegraph.co.uk/technology/2017/07/17/ai-biggest-risk-face-civilisation-elon-musk-says/ on 16th July 2018.

One comment on “The risks of AI: Reality or Science Fiction?”

  1. Hi, Meyer.
    Thanks for your interesting post. Martell’s view on AI risks was new to me as well. Nowadays we see plenty of news about AI, and some people are scared of this advanced technology. I’m one of them. But I think that is just because I don’t know much about AI. It is probably the same as feeling anxious about a country we have never been to. So we need to learn more and more about AI if we want to find an answer.

