The path of Artificial Intelligence

Cognitive computing, or artificial intelligence (AI), was first envisioned in the mid-19th century through the work of George Boole in his book The Laws of Thought. The British mathematician was also an educator and philosopher; besides the foundations of cognitive computing, he dedicated his energy to the challenges of differential equations and algebraic logic [2]. Roughly a century later, in 1955, the Stanford professor John McCarthy coined the expression Artificial Intelligence [1]. Today the term is broadly used and is defined by Google as:

“the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision. [3]”

It didn’t take long for humans to start competing against machines. During the 1990s IBM worked on a cognitive computer called Deep Blue. The machine’s development started in 1985 with the goal of building a powerful AI computer capable of outsmarting the human brain. In 1996 Deep Blue competed for the first time against Garry Kasparov – the Russian is considered the world’s most famous chess grandmaster. In the first match the human brain did not disappoint: Kasparov won the dispute by a score of 4–2. A little over a year later, after being heavily upgraded, IBM’s Deep Blue got a rematch and defeated the Russian grandmaster by a score of 3½–2½. That was the first recorded instance of a machine beating a reigning world chess champion in a full match – a human activity performed better by a computer than by humans [4].

May 11, 1997. REUTERS/Peter Morgan

Artificial intelligence and machine learning walk hand in hand. The American scientist Arthur Samuel defined machine learning as “giving computers the ability to learn without being explicitly programmed”. And that is true: computers are getting “smarter” over time. John Donovan, AT&T’s CEO, attended a lecture as a guest speaker at Stanford this summer, and during the talk he explained AT&T’s difficult task of remaining “invisible” as a data carrier. Donovan brought some insightful numbers on the staggering amount of data we now produce: only 15% of all stored data is actually processed and used for decision making, and we generate data twice as fast as we are able to transfer it, so even if we wanted to back up all generated data there would not be enough bandwidth.

Now I can hear you asking yourself: OK, nice, but how does all of that connect with AI? Well, humans are not able to memorize or interpret all that data, but computers are. When you combine artificial intelligence (the ability to make decisions and perform human activities, such as facial recognition) with machine learning (feeding a computer system information so it can get better at its activities) you can get amazing results. High-tech companies like Google, Apple and Facebook are currently using customer-generated data for machine learning applied to AI. These efforts are truly capable of enhancing user and provider experience by lowering error rates not only in facial and speech recognition but also in health care, through more precise and predictive diagnoses.
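Samuel’s definition can be made concrete with a toy example. The sketch below is purely illustrative (not any company’s actual system): the program is never told the rule y = 2x + 1, it learns it from example data by gradient descent, which is the same basic idea behind much larger machine learning systems.

```python
# Training data generated by a hidden rule, y = 2x + 1.
# The learner only ever sees these (input, output) pairs.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # model parameters: start out knowing nothing
lr = 0.01         # learning rate: how big each correction step is

for _ in range(2000):          # repeat many passes over the data
    for x, y in data:
        pred = w * x + b       # current guess
        err = pred - y         # how wrong the guess is
        w -= lr * err * x      # nudge parameters to shrink the
        b -= lr * err          # squared error (gradient descent)

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

After enough passes the parameters settle near w = 2 and b = 1 – the computer has recovered the rule without anyone programming it in explicitly.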

Read more about how Google is using machine learning and AI in its endeavors.

Google has slashed its speech recognition word error rate by more than 30% since 2012

Sources:

[1] http://www.dataversity.net/brief-history-cognitive-computing/

[2] “Who is George Boole: the mathematician behind the Google doodle”. Sydney Morning Herald. 2 November 2015.

[3] Conversations On the Leading Edge of Knowledge and Discovery. John McCarthy.

[4] “Who is the Strongest Chess Player?”. Bill Wall. Chess.com. 27 October 2008. Retrieved 2 March 2009.


3 comments on “The path of Artificial Intelligence”

  1. A really interesting read Arthur! Machine Learning is definitely very interesting, since it minimises the human-engineering of the platform. One particular aspect that I find very promising is Deep Learning, which is about making machines think in the same process as the human brain – often using artificial neural networks. I’d be keen to hear your thoughts on which industries/areas you think will benefit the most from AI/ML?

  2. Great job, Arthur!

    I’ve been wondering about the increasing complexity of AI, especially in the wake of the Zuckerberg vs Musk debate over how dangerous AI can be.

    What do you reckon the impact of AI would be on the workforce, particularly the part dependent on repetitive labour?

  3. The development of AI is improving day by day. The ultimate goal would be for machines to learn everything naturally, just like humans do, which brings us to artificial general intelligence (AGI), also called strong AI or full AI. This technology would produce machines able to do a wide range of activities and solve different problems through reasoning. It is being developed by Google DeepMind, whose AI agents are taught from scratch without any supervision; learning is accomplished by maximizing reward, a technique called reinforcement learning. Once we can make computers achieve AGI, that is when we will be approaching the singularity. This could be an advantage or a disadvantage, depending on what intent we teach the machines. ‘Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can’ (https://en.wikipedia.org/wiki/Artificial_general_intelligence)


Comments are closed.