Do Machines have Consciousness?
Do machines have consciousness? One of last week’s speakers, Craig Martell, Head of Science and Engineering at LinkedIn, doesn’t think so. According to Mr. Martell, artificial intelligence (AI) is just a buzzword, named mainly for its ability to raise money from the government or investors. The goal of AI is to build systems that act under uncertainty so humans don’t have to. He doesn’t believe AI machines will imitate humans; in his view, they are simply statistical models. He said, “It [AI] is tedious, boring, sweaty, and dirty. It’s not sexy.” I thought his view on AI was practical and straightforward, stripping away the mystery behind AI. But I also think he was too dismissive of the lively academic debate surrounding machines and their consciousness, or lack thereof.
So first off, what is consciousness? Researchers in neuroscience, psychology, philosophy, and computer science have tackled this question, but many unknowns remain. Still, as humans, we have a general understanding of consciousness: consciousness is what you experience. A Scientific American article beautifully described it as “the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.” Scientists describe human consciousness using three levels: C0, C1, and C2. C0 is the subconscious processing of information, such as the processing used in facial recognition; most AI systems operate at C0. C1 is the deliberate processing of information in response to certain stimuli. Finally, C2 is the ability to self-correct and explore the unknown. Scientists argue that some AI systems can evaluate their actions and react accordingly, exhibiting some component of C2.
Currently, machines do not exhibit consciousness because they are missing one crucial feature: intention. Edith Elkind, a Computer Science Professor at the University of Oxford, said, “Machines will become conscious when they start to set their own goals and act according to these goals rather than do what they were programmed to do. This is different from autonomy: Even a fully autonomous car would still drive from A to B as told.” We live in the era of Weak AI: computers can only simulate the brain, and a simulation of consciousness is not the real thing. In 1980, the American philosopher John Searle devised a thought experiment to make this exact point. “The Chinese Room Argument” showed that “syntax is not sufficient for semantics”: a person (or machine) in a room can follow rules to produce convincing Chinese responses to Chinese questions without understanding a word of Chinese.
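Searle’s point can be illustrated with a toy program (a minimal sketch; the rule book below is entirely made up for illustration). The program maps input symbol strings to output symbol strings by pure lookup, producing plausible replies without any representation of meaning:

```python
# Toy "Chinese Room": replies come from rule lookup alone.
# The program manipulates symbols it has no understanding of --
# syntax without semantics. All rules here are hypothetical examples.

RULE_BOOK = {
    "你好": "你好！",           # a greeting maps to a greeting
    "你会说中文吗？": "会。",    # "Can you speak Chinese?" maps to "Yes."
}


def chinese_room(symbols: str) -> str:
    """Return whatever reply the rule book dictates for the input symbols."""
    # Unknown input gets a noncommittal string of symbols.
    return RULE_BOOK.get(symbols, "……")


if __name__ == "__main__":
    print(chinese_room("你好"))            # looks like fluent conversation
    print(chinese_room("你会说中文吗？"))   # yet nothing here "understands" Chinese
```

The replies can look fluent to an outside observer, which is exactly Searle’s worry: behavioral competence at symbol manipulation does not demonstrate understanding.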
However, some visionaries in the technology space, such as Stephen Hawking, Elon Musk, and Bill Gates, fear that as machines become more intelligent they could take on lives of their own. The Turing Test, developed by Alan Turing in 1950, says that if we cannot differentiate between a computer and a human, then the computer is intelligent. So if a computer is intelligent, does consciousness follow? This is the premise of Strong AI: the belief that machines can possess the full range of human cognitive abilities, such as self-awareness and sentience.
Although AI researchers claim to be making progress towards Strong AI, we are still decades away. So for now, we can let science fiction, television, and film explore the idea. Just think of recent movies such as “Her” and “Ex Machina.”
If you are curious how the media is portraying Strong AI, here is the movie trailer for “Ex Machina”: