Google’s AI: From DeepMind to One Model

One of the most memorable events in AI history was IBM’s Deep Blue beating chess champion Garry Kasparov. History was rewritten in March 2016, when Google’s AlphaGo defeated Go master Lee Sedol, and again in May 2017, when it beat Go champion Ke Jie. Although AlphaGo’s wins were all over the news and its success reached a wide audience, they are in fact only part of the story. AlphaGo is a product of DeepMind, a company Google acquired in 2014 that is working on projects of a much larger scale. AlphaGo has drawn a great deal of attention and will have many impacts, but DeepMind’s broader work tends to be overlooked.

AlphaGo’s future is very bright because of its complex deep neural network algorithms. The technology behind it is already used by Google in many of its products, including search, its advertising business, self-driving cars, and its healthcare division. Using an AlphaGo-style algorithm, a computer could check hundreds of radiology images each day for signs of cancer, giving physicians the chance to focus on improving radiology itself. More broadly in healthcare, new opportunities will arise for patients managing their diseases, for physicians treating them, and for insurance and pharmaceutical companies. If AlphaGo were trained on a large dataset of disease cases, it could compete with versions of itself to determine the most accurate diagnosis. With sufficient training, it might even be able to narrow down where a specific type of cancer (e.g. leukaemia) is likely to spread next out of the many possible regions, so that more efficient treatment can be provided. The same process could also be used to choose between potential treatment options, and to form combinations of them, so that the best outcomes are achieved. [1, 3, 5]
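
To make the radiology example concrete, here is a minimal sketch of what an image-screening model could look like: a small convolutional network that labels a scan as suspicious or clear. This is only an illustration of the general idea; the network size, the 224x224 greyscale inputs, and the two-class labelling are my own assumptions, not details of AlphaGo or of any Google product.

```python
# Minimal sketch of a convolutional classifier for radiology-style scans.
# Image size, labels, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # greyscale scan -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = ScanClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a fake batch of eight scans.
scans = torch.randn(8, 1, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(scans), labels)
loss.backward()
optimizer.step()
```

In practice a screening system would need far deeper architectures, large sets of expert-labelled scans, and careful clinical validation, but the basic training loop looks much like this.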

DeepMind has continued to extend itself since AlphaGo’s achievements. On July 10, 2017 it released three new papers on “Producing flexible behaviours in simulated environments”, which suggests it has been working on physical intelligence, “a crucial part of AI research”. [2] Rather than relying on previous methods such as “hand-crafted objectives” or “motion capture data”, DeepMind decided to start from scratch, letting the artificial system learn by itself so that it can produce a wider variety of behaviours. According to their article, the biggest challenge was defining the process of movement itself, as opposed to a clear purpose such as winning. Their research shows that, given some high-level objectives, a policy network trained to imitate human motion together with a trained neural network allows their agents to develop flexible and natural behaviours. [2] The implications of this research are not very clear at this stage, but it is a remarkable step towards the complexity of the human mind.
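
To give a rough sense of what learning behaviour from a high-level objective means, the toy sketch below uses a simple policy-gradient (REINFORCE) loop in which an agent is rewarded only for how far forward it gets, with no hand-crafted motions at all. The one-dimensional environment, reward, and tiny network are invented for illustration; DeepMind’s actual work uses far richer simulated bodies and more sophisticated reinforcement learning methods.

```python
# Toy REINFORCE loop: behaviour emerges from a high-level reward (forward progress),
# not from hand-crafted motions. Everything here is illustrative, not DeepMind's setup.
import torch
import torch.nn as nn

class ToyWalker:
    """Hypothetical 1-D environment: the agent is rewarded for forward progress."""
    def reset(self):
        self.position = 0.0
        return torch.tensor([self.position])

    def step(self, action: int):
        self.position += 0.1 if action == 1 else -0.1    # 1 = step forward, 0 = step back
        return torch.tensor([self.position]), self.position  # reward = distance covered

policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ToyWalker()

for episode in range(200):
    state, log_probs, rewards = env.reset(), [], []
    for _ in range(20):                                  # short rollout
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward = env.step(action.item())
        rewards.append(reward)
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)   # reward-to-go
    loss = -(torch.stack(log_probs) * returns).sum()            # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is the shape of the problem: the only supervision is a scalar reward, and the agent has to discover for itself which sequences of actions earn it.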

Google’s AI research is not limited to DeepMind; the company also has a broad research organization, including a dedicated division called Google Brain. Researchers at Google Brain teamed up with the University of Toronto and other Google Research members to produce “One Model To Learn Them All”. The paper is even more exciting than its Lord of the Rings-inspired title suggests. With recent research, deep learning has almost reached human accuracy on many tasks, including image classification, speech recognition, and other computer vision problems. For each of these tasks, a different deep neural network architecture and task-specific algorithms are used to maximize accuracy. However, this is not how the human brain actually works: we are capable of learning a wide range of tasks and transferring knowledge between them. With this new approach, a single deep learning model is “trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task”. [4] The model incorporates different “blocks” for different tasks, which lets it use less data and enables transfer learning. [4]
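
A heavily simplified sketch of that idea appears below: modality-specific input “blocks” map images or text into a shared representation, a common trunk processes it, and task-specific heads produce the outputs, so every task trains the same core model. The real MultiModel of Kaiser et al. is built from convolutional, attention, and mixture-of-experts layers; the layer sizes, vocabulary size, and two placeholder tasks here are my own assumptions.

```python
# Simplified "one model, many tasks" sketch: per-modality input blocks, a shared
# trunk, and per-task output heads. Shapes and tasks are illustrative placeholders.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Modality-specific blocks map each input type into a shared representation.
        self.image_block = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, hidden))
        self.text_block = nn.EmbeddingBag(10000, hidden)   # mean-pools token embeddings
        # Shared trunk used by every task; this is where transfer can happen.
        self.trunk = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        # Task-specific output heads.
        self.heads = nn.ModuleDict({
            "image_classification": nn.Linear(hidden, 10),
            "translation": nn.Linear(hidden, 10000),
        })

    def forward(self, x: torch.Tensor, modality: str, task: str) -> torch.Tensor:
        block = self.image_block if modality == "image" else self.text_block
        return self.heads[task](self.trunk(block(x)))

model = MultiTaskModel()
image_logits = model(torch.randn(4, 1, 28, 28), "image", "image_classification")
token_logits = model(torch.randint(0, 10000, (4, 16)), "text", "translation")
```

Because the trunk’s parameters are updated by every task, knowledge learned on a data-rich task can help a data-poor one, which is the transfer effect the paper highlights.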

Google’s recent work and its plans for future research are proof that an AI revolution has begun. The focus has shifted from mastering strategy games to imitating the complex nature and learning mechanisms of the human brain. AI is advancing drastically day by day, and although we are still far from creating anything like a real human brain, we are closer than it seems.

References:

  1. Furness, Dyllan. “What AlphaGo’s Victory Might Mean for AI in Healthcare.” Digital Trends. 15 May 2017. Web. 13 July 2017.
  2. Heess, Nicolas, Josh Merel, and Ziyu Wang. “Producing Flexible Behaviours in Simulated Environments.” DeepMind. 10 July 2017. Web. 13 July 2017.
  3. Jones, Brad. “First DeepMind AI Conquered Go. Now It’s Time to Stop Playing Games.” Digital Trends. 25 June 2017. Web. 13 July 2017.
  4. Kaiser, Lukasz, et al. “One Model To Learn Them All.” arXiv preprint arXiv:1706.05137 (2017).
  5. “AlphaGo Can Shape The Future Of Healthcare.” The Medical Futurist. 22 July 2016. Web. 12 July 2017.
  6. https://www.youtube.com/watch?v=hx_bgoTF7bs

Cover image: http://www.clickode.com/en/2016/02/01/google-rilascia-gratis-lezioni-di-deep-learning/

3 comments on “Google’s AI: From DeepMind to One Model”

  1. Hi Romi,
    I found your post very interesting, and it made me consider further the direction that AI is heading. As a Go and chess player myself, I was hardly surprised by Ke Jie’s defeat to AlphaGo; Deep Blue’s triumph over Kasparov showed that it was only a matter of time before similar developments followed in other strategy games. I can clearly envision AI being a dominant force in science, and particularly the neural network as you mention, but I wanted to ask your thoughts on whether it will ever flourish more broadly. Deep Blue doesn’t have the human satisfaction of physically moving pieces and looking into your opponent’s eyes. As a humanities student, I feel that literature and philosophy will never be conquered by AI, and I struggle to imagine the relationship they will have alongside fields that will be, if they are not already, dominated by AI. Some of the greatest literature we have today cannot be matched by AI, for such works are based on human experiences and imaginative thinking. For example, how could a computer ever develop an idea such as Plato’s parable of the cave? How do you see the role of the arts in society as AI continues to evolve, and can they coexist?

    Shiv – MS&E 238A

    1. Hi Shiv – I think you’re right about AI being far from conquering the humanities. For one thing, the most complex neural networks are still very far from the complexity of the human brain. The charts on pages 24 and 27 of Goodfellow et al.’s Deep Learning book (http://www.deeplearningbook.org/contents/intro.html) show that we have neural networks with the level of neural connectivity of the human brain, but that we are about five orders of magnitude away from simulating the number of neurons in the human brain.

      Also, with today’s approaches, there isn’t true ‘understanding’ of words, images, etc. Instead, we are optimizing very complex models to meet an objective (e.g. winning a game of Go, correctly identifying an image, etc.). We can simulate memory in these models to create neural networks that are Turing complete, but the models can’t ‘imagine’ and don’t experience the world like humans do. It seems that we are a long way from computers being able to tackle the ‘big questions’ of philosophy and the humanities.

  2. Very interesting post. Deep Blue and AlphaGo have proved that developing AI systems capable of outperforming humans is a goal that is achievable. However, developing an artificial general intelligence could be a real challenge, as in some human activities there is no clear distinction between what’s considered a good outcome and a bad one, unlike a board game where you either win or you lose. Creating a “model to rule them all” is definitely an important step towards an artificial general intelligence, but I wonder if the current computing power available is enough for a model like this to work or if we will have to wait possibly a few decades to see this happen.
