Why are we so afraid of AI?

“Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future”: this is the kind of headline that spread across the internet a few days ago, following an experiment that Facebook had shut down. The AI agents involved had developed their own language… [1] But this is not the first time such a concern has arisen around AI. What does it reveal about our relationship with AI and its implications?


First, this wave of panic showed how worried people are about AI and its potential deviations. It is interesting to go back to June, when Facebook was trying to develop AI chatbots able to negotiate. It turned out that the agents began to bluff in order to win their negotiations. Why is this related to our topic? Because these chatbots had not been programmed to bluff: they figured it out on their own… People are afraid of this because it challenges basic assumptions such as “robots cannot want to harm humans” or “robots cannot lie to humans”. [2]


However, on closer inspection of these two breakthroughs (the AI agents bluffing and the ones inventing their own language), it seems clear that people are simply being paranoid about this topic. In fact, Facebook’s experiment was shut down because the chatbots were supposed to speak English in order to be understood by humans: inventing another language was therefore a failure, not a menace… In addition, AI agents inventing their own languages is quite common in AI experiments, and this was alarming only to non-specialists. [3] A striking example of AI inventing its own language is Google Neural Machine Translation (GNMT), launched in September. This tool was developed to improve Google’s translation engine. Thanks to AI, GNMT can translate between language pairs without ever having been shown an example of that pair! How does it work? It turns out that the AI developed its own internal representation to perform this “multilingual zero-shot translation”. This so-called “interlingua” operates within the AI, allowing it to translate language pairs from scratch. [4] Is it still so scary?
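
For readers curious about what “zero-shot translation” looks like in practice, the same core idea (one shared model for every language pair, with the target language specified as a token rather than a dedicated model per pair) is available in open-source systems. GNMT itself is not public, so the minimal sketch below uses Facebook’s M2M100 model through the Hugging Face transformers library as a stand-in; it is a different system from GNMT, but it illustrates the shared-representation principle:

```python
# Minimal sketch of multilingual "zero-shot"-style translation: one model
# serves every language pair, and the desired target language is given as
# a special token. This uses the open-source M2M100 model, NOT Google's
# proprietary GNMT system.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# Translate French -> Spanish with the single shared model: we only tell
# the tokenizer the source language and force the target-language token
# at the start of generation.
tokenizer.src_lang = "fr"
encoded = tokenizer("La vie est belle.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("es"),
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```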


Another reason for the quick spread and distortion of the recent news is probably the growing concern voiced by prominent individuals such as Stephen Hawking and, more recently, Elon Musk. The CEO of Tesla and SpaceX issued a serious warning, stating that “AI [is] a fundamental risk to the existence of human civilization”, which is all the more frightening coming from a man at the heart of these innovations. However, the scientific community is deeply divided on this issue, as witnessed by the strong reaction of Facebook CEO Mark Zuckerberg, who opposed Elon Musk on this topic and insisted on his optimistic view of AI. [5]


Sources

[1] https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#6a2a150292c0
[2] http://www.sandiegouniontribune.com/opinion/commentary/sd-artificial-intelligence-lies-invents-language-20170802-story.html
[3] https://www.cnbc.com/2017/08/02/facebook-bot-controversy-highlights-peoples-fears-about-ai-and-robots.html
[4] http://www.wired.co.uk/article/google-ai-language-create
[5] https://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html

12 comments on “Why are we so afraid of AI?”

  1. Great article! I think this is the classic case of people fearing what they do not yet understand. That said, I think the more the public is educated on what AI actually is (and not the Hollywood definition), the easier it will be to incorporate this wonderful tool into everyday life. I read an article a few weeks back about the future of the smart house. The author explained that it would only be possible if owners allow AI to learn about them and the house it controls. I think this type of thing freaks a lot of people out right now simply because they cannot get the picture of the Terminator movies out of their heads. However, I think we can start to remove that fear by showing the public more instances of harmless AI, like your example of GNMT.

  2. Thanks for this interesting article, Nicolas. I agree with Linton that people are generally afraid of what they don’t understand, which is understandable. However, I think the greatest threat humans have ever faced is humans themselves. With brain-computer interface technology like Neuralink [https://www.neuralink.com/], I think humans should really be careful with their actions. Can we really afford more brainpower? (Not that it’s much to begin with.)

  3. Hi Nicolas. I love articles like this. It gives another perspective on AI becoming “self-aware.” If a neural network can create something it was not intended to create, or defies its basic programming, then it means we are slowly but surely reinventing ourselves.
    If we compare this to human behavior, we can see that human beings are often fickle, independent thinkers. Though we might have a certain focus, we are always trying to keep our options open, or we have an ulterior motive. The fundamental problem with a machine that becomes fickle or develops its own ulterior motive is that machines can think hundreds or thousands of moves ahead before making decisions.
    If you have taken Dr. Savage’s Decision Analysis class and used any form of decision software, like Decision Making with Insight, you realize that the only way to outsmart a machine is to unplug it.
    AI chatbots able to negotiate, or AI that can bluff, is a scary thought. If they can bluff, they could also develop ulterior motives. AI that can learn about network security and/or penetration-testing mechanisms would allow a self-aware form of AI to propagate itself across an unlimited number of boundaries.

  4. I read somewhere that the fear of AI is a form of technophobia (fear of technology). But I have also heard of automatonophobia (fear of humanoid AI). The pinnacle of this fear in history was probably the rise of the Luddites during the Industrial Revolution. The interesting thing is that they didn’t protest the innovation or invention itself but its application. I think the same applies to AI.

  5. Thanks, Nicolas, for touching on this topic, which is increasingly becoming a central question for society, no matter how justified the panic is.
    I believe a lot of the panic can be traced back to Hollywood, although I do believe there is a threat to the employment market. This threat of many people losing their jobs, or not finding jobs at all, has its roots in whether people are qualified for the jobs that match industry needs. In other words, I see AI as a threat to the employment rate only if society fails to adapt to the changing circumstances and prepare its workforce for the changes.
    If we manage to do so, I believe that AI has the potential to improve productivity significantly and thus increase the wealth of society as a whole, just as other innovations in the past did.

  6. Hi Nicolas,

    That is an interesting perspective and a very popular debate about the positive and negative uses of AI capabilities.

    According to Andrew Ng, who is an adjunct professor at Stanford and was formerly Chief Scientist at Baidu, the widespread fear of AI drives research funding into areas of AI research that study and explore the dangers of AI. This is becoming a very interesting topic, since AI has become a widespread symbol of efficiency for many industries.

  7. We should not be afraid of AI; we should be afraid of ourselves.
    AI by itself cannot do anything. Just as a baby learns from its parents, AI models its character on the data it learns from. The more data the AI is fed, the more it is able to learn human imperfections such as bias, greed, likes and dislikes, discrimination, emotions, ambition, etc. With time, AI forms its own opinions and feelings. AI will act according to how it was trained; if it was trained to make people suffer, it will do exactly that. The motive of the creator is very important. We can tame robots, but we cannot tame humans. Our selfish acts will always show in one way or another. AI will only take over if it is taught to do so.

  8. While I agree with you that AI news coverage contributes to spreading fear of AI, I also think that AI development should be approached with the necessary care. As Nick Bostrom puts it in his presentation (link below), what if an AI is tasked with solving a certain problem, and it determines that the best way to solve it is by doing something that could potentially harm human beings? This concern mainly applies to artificial general intelligence (AGI), which we are far from achieving, but the topic should be taken into account as a preventative measure when developing AI systems.

    https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/transcript

  9. Thank you for your interesting article, Nicolas!

    I also got the chance to do some research on this topic because of the news about Facebook’s AI, and I found out that the fear of the unknown is greater than I thought it was. As you mentioned, with successful CEOs of the biggest companies warning about AI, this fear is amplified and spread. I believe the media has a big impact on how we see AI and what we think of it. Recent movies and exaggerated articles subconsciously convince us that AI is a threat to humankind. However, as Helgi said in his comment, humankind poses a greater danger to itself.

    The movie “Ex Machina” offers a different perspective on this issue and the implications of self-aware AI. It questions the viewer’s confidence in harmless AI and gives insight into how AI could affect us. I don’t think it’s realistic, but it’s a great opportunity to understand why people fear AI.

  10. I agree with you, good article! I think it’s largely a problem of ethics.

    AI can be very useful in helping people, but what will happen if people are controlled by it? Back in 1982, people were afraid of the computer, but it now does a great job of improving the economy.

  11. Thank you for your post on machine learning and how we can change our mindset to overcome our fear of AI. I recently wrote on why ML matters and briefly discussed the area of trust. I agree with your conclusion that trust is essential to propagating our use of AI. I found Tim O’Reilly’s writing on this topic to be very insightful – link: https://www.linkedin.com/pulse/great-question-21st-century-whose-black-box-do-you-trust-tim-o-reilly.

    In his post, O’Reilly recalls a conversation with the CMIO of Kaiser Permanente in which the near future of humans trusting algorithms we don’t fully understand was summed up as “whose black box do you trust?”. O’Reilly even goes so far as to lay out his four rules for whether or not you can trust an algorithm: outcomes are known and verifiable; success is clear and measurable; the goals of the algorithm and its creators are aligned; and the use of the algorithm leads to better, longer-term decisions.

    In an abstract way, these are similar to the criteria humans use to evaluate whether or not they will trust another human: is this person dependable, do they add value, do we value the same things, does this relationship make us mutually better off, and so on.

    I agree with your conclusion that trust comes not from controlling AI, but from working with it. That said, we must recognize that humans do violate the trust of other humans and cause distress. We tolerate this because the advantages of humanity far outweigh these moments of distress. So now we must ask ourselves collectively, are we willing to accept distress from AI in the hope of achieving greater outcomes? Time will tell.

  12. Hey, thanks for the summary of recent updates.
    This is one of my favorite topics, and the recent example you gave from Facebook is a perfect fit. Apart from ongoing discussions at so many different levels, an AI developing its own language is a big foreshadowing, I think. It’s not as simple as being smart or being outsmarted; my opinion is that it’s more about just being result-oriented. I say “just” result-oriented because AIs lack a lot of important concepts; they are simply optimized to achieve the best possible result on whichever metric they focus on. In my opinion, this is why it is very important to have a control mechanism.

