Artificial Intelligence and its risks

From Siri to self-driving cars, AI is improving rapidly. It is important to be informed because, as Craig Martell, Head of Science and Engineering at LinkedIn, noted in his lecture, AI is often misunderstood.


What is Artificial Intelligence?

Simply put, it’s a machine simulating human intelligence: understanding natural language, recognizing faces in photos, driving a car, or guessing what other books we might like based on what we have previously enjoyed reading. [3]


What are the benefits?

The leading approach to AI is machine learning. [3] This technique can be applied to all sorts of problems, such as getting computers to spot patterns in medical images. Google’s artificial intelligence company DeepMind is collaborating with the UK’s National Health Service on a handful of projects, including ones in which its software is being taught to diagnose cancer and eye disease from patient scans. [3] Others are using machine learning to catch early signs of conditions such as heart disease and Alzheimer’s. [3]
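To make “spotting patterns” concrete, here is a minimal sketch of the idea behind such systems: given examples that humans have already labelled, a program classifies a new case by comparing it to the examples it has seen. The feature vectors and labels below are entirely made up for illustration; real diagnostic systems like DeepMind’s use far richer data and models.

```python
from math import dist

# Toy labelled "scans": (feature vector, diagnosis).
# Hypothetical values, not from any real clinical dataset.
training = [
    ((0.9, 0.8), "disease"),
    ((0.8, 0.9), "disease"),
    ((0.1, 0.2), "healthy"),
    ((0.2, 0.1), "healthy"),
]

def classify(features):
    """Label a new scan by its nearest labelled neighbour in feature space."""
    _, label = min(training, key=lambda item: dist(item[0], features))
    return label

print(classify((0.85, 0.75)))  # resembles the "disease" examples -> disease
print(classify((0.15, 0.15)))  # resembles the "healthy" examples -> healthy
```

The point of the sketch is only that the program is never told a rule for “disease”; it infers one from labelled examples, which is what distinguishes machine learning from hand-coded software.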


Why it is important to research AI

It is important to research what will happen if AI succeeds and becomes better than humans. Some question whether advanced AI will ever be achieved, while others insist that the creation of superintelligent AI is guaranteed to be beneficial. Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have warned in the media and via open letters that we should be more concerned about the possible dangerous outcomes of super smart AI. [3]


How can AI be dangerous?

A superintelligent AI is unlikely to exhibit human emotions like love or hate, and there is no reason to expect it to become intentionally benevolent or malevolent. [2] The risks look more like this:

  • The AI is programmed to do something devastating [2]
  • The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal [2]
  • Phishing scams could get even worse [1]
  • Hackers start using AI like financial firms [1]
  • Fake news and propaganda could be produced and spread more effectively [1]


The problem with advanced AI isn’t that it could be malevolent but that it could become too competent for our own good. AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. [2] Rather than worrying about an AI takeover, the risk is that we place too much trust in the smart systems that we are building. [3]
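The misalignment worry above can be sketched in a few lines: an optimizer that is perfectly competent at maximizing the goal it was given will ignore anything we forgot to put into that goal. The options and scores below are hypothetical, chosen only to illustrate the point.

```python
# Toy illustration of goal misalignment: the optimizer competently maximizes
# exactly what it was told to, not what we actually wanted.
# Each option is (speed, safety); values are hypothetical.
options = {
    "reckless": (10, 1),
    "balanced": (6, 7),
    "cautious": (3, 9),
}

def best(score):
    """Pick the option with the highest score: a perfectly competent optimizer."""
    return max(options, key=lambda name: score(options[name]))

# Stated goal: maximize speed. Safety was never part of the objective,
# so the optimizer happily ignores it.
print(best(lambda o: o[0]))         # -> "reckless"

# A goal that also encodes what we care about changes the answer:
print(best(lambda o: o[0] + o[1]))  # -> "balanced"
```

Nothing here is malevolent; the “danger” comes entirely from the objective being incomplete, which is the sense in which a very capable system with a misaligned goal is a problem.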



[1] 4 Things Everyone Should Fear About Artificial Intelligence and the Future. (n.d.). Retrieved from

[2] Benefits & Risks of Artificial Intelligence. (n.d.). Retrieved from

[3] Nogrady, B. (2016, November 10). Future – The real risks of artificial intelligence. Retrieved from 


2 comments on “Artificial Intelligence and its risks”

  1. Hi Sebastian, nice post that gets right to the point!

    I just want to stress what you mentioned in the last paragraph. You wrote that AI “can become too competent for our own good” and that if its “goals aren’t aligned with ours, we will have a problem”. In my opinion – and according to what Craig Martell explained during his lecture as well – humans play a substantial role in AI, since it’s up to us to make AI “understand” what data set it must consider, what it must concretely do, and what its goals are. AI cannot do anything without humans’ will. So, we will have problems only if we aren’t able to properly manage AI – not because it’s going to become too competent or because its aims don’t match ours.

    Glad to hear your opinion!

  2. I enjoyed the post. Thank you.

    The post points out a very cool view of AI. What if AI does the wrong thing? How can AI differentiate right from wrong by “itself”? I think we are not there yet, but it is the future. Just like intelligent human beings in the past, AI may design or create – with or without human involvement – its own “laws” to guide it. I cannot imagine how the world will be at that time. Hope it will be good. 🙂


Comments are closed.