Public perception of artificial intelligence
It can be difficult for the general public to trust new technology, especially when they can’t fully comprehend how it works. Over the past several decades, more and more of the inner workings of technology have become hidden “in the box” – in other words, it is now harder for the average person to understand how things work without specialized knowledge. The brainpower of consumer electronics has long been shifting away from board-level analog devices and toward digital integrated circuits; cars that we used to be able to service in our own garages now require the expertise of a dealership’s service department. Artificial intelligence is no different – the bleeding edge of technology is handing part of the analysis and decision-making in problem solving over to computers, and, as expected, society is wary of it.
A particularly prominent example of this hesitance occurred with IBM Watson’s attempt at assisting doctors with cancer diagnoses. Part of the public’s distrust of AI stems from the fact that the details of the algorithms are difficult to understand without an educational background in statistics, machine learning, data science, or a related engineering field. Even with a highly educated group of users like oncologists, Watson’s predictions weren’t particularly useful: the AI’s conclusions either corroborated the doctors’ own (meaning it brought nothing new to the table in terms of finding correlations between training data and the presence of tumors), or they contradicted the doctors’ hypotheses, leading to further distrust (and also preventing Watson from acquiring novel data). In other cases, mathematical models aren’t given sufficiently diverse training data to reach sound conclusions. As Richard Rogers discussed in his guest lecture, facial recognition algorithms often failed to correctly identify minorities because they were trained on feature sets from Caucasian males. When it comes to complex social issues like cancer diagnoses, and to technology that could be used to identify criminals, it’s understandable that the public is unwilling to place their lives in the hands of “a robot”.
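The training-data problem is easy to demonstrate with a toy simulation. The sketch below uses entirely hypothetical numbers (the score distributions, group labels, and 95% acceptance target are all invented for illustration): if a face-verification system produces systematically lower match scores for an under-represented group, then an accept threshold calibrated only on the majority group will reject a disproportionate share of the other group’s genuine matches.

```python
# Toy, hypothetical sketch of training/calibration-data bias.
# All numbers are invented for illustration, not from any real system.
import random

random.seed(42)

# Hypothetical genuine-match scores: the system scores group B lower
# because its features were learned mostly from group A faces.
scores_a = [random.gauss(0.90, 0.05) for _ in range(1000)]
scores_b = [random.gauss(0.70, 0.10) for _ in range(1000)]

# Calibrate the accept threshold on group A alone, so that 95% of
# group A's genuine matches are accepted.
threshold = sorted(scores_a)[int(0.05 * len(scores_a))]

accept_a = sum(s >= threshold for s in scores_a) / len(scores_a)
accept_b = sum(s >= threshold for s in scores_b) / len(scores_b)
print(f"genuine-match acceptance, group A: {accept_a:.2f}")
print(f"genuine-match acceptance, group B: {accept_b:.2f}")
```

By construction, group A keeps its 95% acceptance rate, while group B’s genuine matches are rejected far more often – not because anyone intended it, but because the calibration data never represented them.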
How could this problem be solved? In my opinion, one effective approach would be to use artificial intelligence in specific applications where the technology is already “in the box” and away from the eyes of the public. Rogers also discussed the use of AI for optimizing traffic flow and reducing congestion, which I think is a great idea – it allows current sensor systems to be improved upon while minimizing public awareness that their daily commutes are being affected by AI. Commercial aircraft have used “AI-like” systems for years to handle the complicated logistics of getting a plane into the air and landing it safely. We also see AI applied in situations that are decidedly lower risk because they affect the “wants” (rather than the needs) of the general public – examples include Spotify’s recommendation engine and Amazon’s targeted advertising based on previous browsing history on its websites. Now, does this mean I think “ignorance is bliss” holds true when it comes to public adoption of AI? Not necessarily. What I will say is that we still have a long way to go before the general public realizes that automation is (and has been) a very normal part of society, but every successful application of AI helps, whether we are consciously aware of it or not.
Richard Rogers, guest lecture, 7/13/18.