How Do We Control the Tigers?

At the National Governors Association meeting, Elon Musk admonished the audience that AI is “the biggest risk that we face as a civilization” (4). Coming from a technologist of Musk’s standing, that sentiment makes me worry about the potential dangers of AI. So what exactly is AI capable of?

First of all, AI can be defined as the thought process behind a machine that allows it to learn and understand the tasks it is given so that it becomes more efficient (3). The danger of becoming too efficient is neglecting the side effects that follow from an AI’s inherent drive to get the job done as efficiently as possible. For example, could an AI used in war differentiate between innocent civilians and the enemy? If that AI’s sole purpose is to eliminate an enemy, and the enemy is hiding in a school with children, the AI may ignore or be blind to the kids because it remains focused on its goal.

On another note, AI that cuts operating costs by taking over a company’s routine tasks could lead to the replacement of millions of jobs. McKinsey estimates this shift is already under way: the technology already exists to complete 60% of the tasks performed by 45% of the job market in today’s economy (2). The study demonstrates what people have been preaching for years: the machine is becoming more capable and more efficient than the human.

My final point is that humans possess an uncanny ability to sense guilt, danger, and benevolence in other people and in situations. If AI were used in a court setting to hand down a criminal’s sentence, could it read the unruly behavior of an innocent man the way a jury can? Most likely not. Because it is not fully conscious, an AI could only reach a final decision by the facts and tools at its disposal, mapping the key of the crime to the value of the punishment for the man on trial. Asked to weigh the emotional side of an argument against the logical side, it could operate only on the logical side. AI works by reading symbols such as binary code, but it does not understand what those symbols mean, so it could never have a subjective experience or a conscience. As one neuroscientist puts it, “there is a fundamental difference between the simulation of a physical process and the physical process itself” (5). AI lacks the consciousness to understand why it is doing something, as opposed to merely what it is doing.
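
To make the key-to-value metaphor concrete, here is a minimal sketch in Python of the rigid lookup described above; the crime names and sentence values are hypothetical, invented purely for illustration:

```python
# Illustrative sketch only: the rigid "key of the crime -> value of
# punishment" mapping described above. Crimes and sentences are invented.

SENTENCING_TABLE = {
    "petty_theft": "6 months",
    "fraud": "3 years",
    "armed_robbery": "10 years",
}

def decide_sentence(crime: str) -> str:
    """Return a punishment purely from the recorded fact of the crime.

    There is no input for demeanor, remorse, or circumstance, the
    qualities a human jury weighs alongside the evidence.
    """
    return SENTENCING_TABLE.get(crime, "no matching rule")

print(decide_sentence("fraud"))  # always "3 years", regardless of context
```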

After listing all of these potential dangers of AI, I ask the reader: where is the balance between optimizing AI’s potential to improve our quality of life and allowing it to become a threat to us? All I know is that, as the Future of Life Institute explained to its audience, “intelligence enables control: we control tigers by being smarter” (1). How do we avoid becoming the tigers, and how do we tame this beast?


Sources:

  1. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
  2. http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet
  3. https://en.wikipedia.org/wiki/Artificial_intelligence
  4. https://www.technologyreview.com/s/608296/elon-musk-urges-us-governors-to-regulate-ai-before-its-too-late/
  5. http://www.rawstory.com/2016/03/a-neuroscientist-explains-why-artificially-intelligent-robots-will-never-have-consciousness-like-humans/

7 comments on “How Do We Control the Tigers?”

  1. Hey Foster! Great article; I really love how you bring a more philosophical reflection to the challenges of AI. I am actually taking the ‘Machines as Media’ class at Stanford, whose main subject is precisely to analyze the relationship between men and machines throughout time, and more precisely how men (as well as the governing and ecclesiastic bodies of society) fight against evolution yet still feel the need to research and seek it out. If you looked at the pastoral way of life in 1800s America, you could bet that if you explained to people how the world is now, they would be scared. I am not saying that AI carries no risk of alienating human beings through the intelligence of evolving machines; I just wanted to point out that it has always been an intrinsic instinct of most men to fear evolution and the unknown behind it. Such innovations can truly disrupt human interactions and therefore give a new direction to our evolution. If evolution is the right path, the question is knowing which direction is the good one, and men have always been very scared of this ‘bigger’ idea.

  2. Hi Foster,
    Thank you for sharing your views on AI and its potential threats to us in the future.
    As someone who is far from an expert, or even a well-read person, on AI, I would like to make some points of general reasoning on the matter of real AI:

    1. We are soooooooo far away; we don’t even have a clue how the brain works.
    For arguably all human inventions, we look at nature (consciously or subconsciously) and try to understand and merge different concepts to “invent” something. That being said, it is most likely the smartest approach to look at brains and build AI based on our understanding of the brain, or natural intelligence, rather than just trying to do it from scratch. I mean, it would be rather paradoxical if we managed to build intelligence but not actually understand it, right? The thing is: we have absolutely no idea how the brain actually works yet, and it doesn’t look like that is going to change any time soon.

    2. Intelligent humans can be good and bad; what will AI be?
    Let’s assume we manage to build an artificial intelligence. Human history shows us that there is not really a correlation between intelligence and goodness, and where there is one, it is usually skewed towards “evil” as intelligence increases. There have been extremely intelligent monsters and extremely intelligent saints. What that means to me is that an AI’s intentions will very much depend on what we expose it to and how we teach it, and that will be a very difficult thing to do, especially once a certain level of intelligence is reached.

    3. What comes with an IQ of 300, or 500, or even 1,000,000?
    We know what people with an IQ below 200 are like because they actually walk among us, but I would argue that no one can predict what a higher IQ would mean. Assuming actual AI is created, it would not be physically limited to a 20 cm x 20 cm box, so it is very possible that it could reach an intelligence further away from us than anything we can imagine.

    My points all concern actual artificial intelligence, not just something that appears to show some kind of intelligence. That being said, many problems and opportunities will come with the improvement of machine learning and other smart algorithms and their use in our society, which you touched on. But I personally believe that we are still decades, if not centuries, away from real AI. Nevertheless, we should work on understanding its implications now, so that we can prepare for everything AI will bring to us and the planet.

    Cheers,

    Dean

  3. Interesting post. I agree; when Elon made headlines with this (and other similar) statements, I began to wonder just how worried I should be.

    You bring up a number of interesting points and different views on this issue. One is whether we want computers to actually make decisions by themselves, or just present a human with the decision they would make and let the human push the button. In your example of AI acting as a judge, are we OK with the AI coalescing evidence into an argument and then a human judge deciding whether they agree with the ruling? Would it really make a difference, or would it just make us feel better (would people end up doing whatever the computer said anyway)? This kind of thing is trivial at a lower level, where our society has made it clear that we accept computers making decisions all the time. However, as AI becomes more capable, the questions get bigger and more impactful. Take, for example, the difference between the automated targeting systems in wide use all over the world and the newer automated turrets on the Korean DMZ that are designed to shoot on sight without a human giving “permission”:

    http://newatlas.com/korea-dodamm-super-aegis-autonomos-robot-gun-turret/17198/

    The concept is easily applied to other new auto-targeting technology, like the lasers Boeing designed to target and shoot down drones:

    https://www.wired.com/2015/08/welcome-world-drone-killing-laser-cannon/

    Some very intelligent and influential people are trying to prevent this “third revolution” in warfare

    http://time.com/3973500/elon-musk-stephen-hawking-ai-weapons/

    but I’m not sure how much success we can hope for on that front. The arguments are very similar to those made when the world considered restrictions on air warfare after WWI, or on nuclear weapons later on. In many cases, especially those centered on revolutionary new technologies, self-preservation and distrust seem to have outweighed the risk to humanity as a collective, and I’m not convinced we should expect this revolution in warfare to play out any differently (though I am certainly not arguing that we should not try, just that we should be careful not to repeat the past and expect different results).

  4. Thank you for your insightful post, Foster! Elon Musk actually appears to get many of his more controversial views on artificial intelligence and the future of humanity from the Oxford philosophy professor Nick Bostrom: https://qz.com/699518/we-talked-to-the-oxford-philosopher-who-gave-elon-musk-the-theory-that-we-are-all-computer-simulations/. You can find more information about Nick Bostrom here: https://en.wikipedia.org/wiki/Nick_Bostrom and https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are. Bostrom believes that humanity is not appropriately dealing with the possibility of human extinction. He has, for example, been warning about the potentially catastrophic consequences of artificial general intelligence and artificial superintelligence. He believes that humans tend to underestimate existential risk, and that more should be done to prevent events that could negatively affect humanity’s future worldwide. Most of Elon Musk’s views on the future of artificial intelligence seem to be based on Bostrom’s book “Superintelligence: Paths, Dangers, Strategies”: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742. Bostrom also gave Elon Musk the idea that we are living in a simulation; Elon has discussed in several interviews that life is just a simulation: https://www.youtube.com/watch?v=J0KHiiTtt4w. He expects that as VR gets more realistic, we will no longer be able to distinguish it from reality. You can find Bostrom’s paper on the simulation hypothesis here if you are interested: http://www.simulation-argument.com/simulation.pdf

  5. I think what we should fear is the power we could lose without AI. Can you imagine not being able to develop applications such as self-driving cars, medical research on diseases like cancer, market insights, customer segmentation, crime detection, traffic control, smart manufacturing, scientific experiments and breakthroughs… the list is endless. Whenever a new technology comes up, it is first met with fear; then, with time, we all adjust, move on, and cannot imagine life without it.
    For instance, when trains were first developed, people were scared that the human body might not survive such speeds (http://www.techradar.com/news/world-of-tech/12-technologies-that-scared-the-world-senseless-1249053). When computers came in, people were scared to touch them; some were intimidated, some were left in awe (https://www.theatlantic.com/technology/archive/2015/03/when-people-feared-computers/388919/). When cellphones came in, people thought they caused cancer after prolonged use. When wi-fi became widespread, we heard people talking about dangerous radiation. We are currently going through the same phase. AI is not here to kill us; it is here to help us. If we act on fear, we may never grow. We just have to embrace the technology and research how we can make it better for humanity. Bad people will always exist; we can’t keep hiding just because bad people exist and can harm us. Eventually we will adapt and find the best way to use AI.

  6. To pick on a specific example, AI deciding the result of a jury trial, let’s try to actually imagine how such a system might be used.

    Much of today’s AI is what is known as “supervised learning”: we give a model a set of inputs and ask it to produce an output. In this case, it could look something like this: we provide the model with a transcript of a suspect’s interrogation and the key details of the crime, then ask it to predict whether the person is guilty. The model learns how to do this by processing large amounts of data, and then predicts the most statistically likely output for a given input (a toy sketch of this setup appears at the end of this comment).

    In this case, it is highly unlikely that we would use AI to actually determine the length of a person’s sentence. Even the prediction of whether someone is guilty is unlikely to be left up to the AI. Much as radiologists might use AI to flag key images from a medical scan, it is more likely that AI will be used to augment the decision-making process rather than replace it. We would also have to be careful to provide the model with sufficient and appropriate data if we were to use it this way.

    A more pressing issue is the displacement of jobs, as you’ve pointed out. In light of such developments, it seems that retraining will become ever more critical. We may also have to explore ideas like Universal Basic Income, which has found proponents in prominent figures like Mark Zuckerberg and Elon Musk.
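
    As a rough illustration of the supervised setup described above, here is a minimal sketch, assuming scikit-learn as the tool; the transcripts and labels are invented for illustration and are not from the post:

    ```python
    # Toy sketch of the supervised-learning setup described above: text in,
    # guilty / not-guilty label out. All data and labels here are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical interrogation transcripts with known outcomes.
    transcripts = [
        "I was at home all night and I have a receipt that proves it",
        "I admit I took the money from the register",
        "I have never seen that man before in my life",
        "Yes, I broke the window and went inside",
    ]
    labels = [0, 1, 0, 1]  # 0 = not guilty, 1 = guilty

    # Turn the text into features, then fit a classifier that predicts the
    # statistically most likely label for new input.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(transcripts, labels)

    # The model outputs a statistical guess, not a judgment of character.
    print(model.predict(["I confess, I took it"]))
    ```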

  7. I agree with you that AI might be more of a threat right now.
    Not only Elon Musk, but also Stephen Hawking and Bill Gates are concerned about the direction AI could take. For this reason, several billionaires, including Musk, back an organization called OpenAI, which is determined to develop AI that benefits people.
    Hopefully they can find a way for AI to develop only towards our benefit, without being a risk to us.

