AI’s Greatest Trick

Popular culture offers many examples of what AI’s future may be. The greatest dangers usually take the form of sentient beings turning against their creators. However, I do not believe this is the most likely negative end state for AI. Rather, to paraphrase The Usual Suspects, I think the greatest trick AI will ever pull will be convincing the world it doesn’t exist. It will not be the Skynets and Avas of the world that do the most damage, but rather the seemingly invisible advertising algorithms that change our very desires. This is where I think we need to focus more of our concern: on the more nuanced and nefarious ways AI can affect our lives without our being keenly aware of it. The case is brought home all too clearly by what I would consider the greatest weaponization of AI to date: the influence Cambridge Analytica was able to bring to bear on the US election. The firm built on the psychometric profiling research pioneered by Kosinski’s Psychometrics Centre [1] to predict which voters were most susceptible to having their vote swung [2]. From there, targeted ads bombarded those users to the point that many would say the firm’s work played a substantial role in the historic outcome.

Beyond this analyze-and-respond mode of AI manipulation, there is also growing concern over much more interactive manipulation of human users by intelligent bots. A recent article in MIT’s Technology Review touches on the potential for chatbots and AI assistants (Siri, Echo) to gain strong suggestive footholds with consumers, encouraging them to make decisions they would not otherwise make [3]. The unsettling example is take-away food ordering. The scenario runs as follows: the intelligent agent knows when a user usually eats dinner. It also knows that the user is currently watching TV (or doing another sedentary task) right before that time, so if it recommends delivery food the user is much more likely to respond in the affirmative. In the background it can search for the best nearby discount and the quickest delivery to lower the barriers further. Repeated over time, this pattern can shift the user onto a new, less healthy diet. Play the same example out in more consequential areas of life and the true underlying dangers of AI in everyday consumer use become clear.
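To make the mechanics concrete, here is a minimal, purely hypothetical sketch of the kind of decision rule such an assistant might run. None of this comes from the cited article [3]; the names (UserContext, should_push_delivery_offer) and the 45-minute window are my own illustration of how little sophistication the nudge actually requires.

```python
# Hypothetical sketch of the nudge logic described above -- an illustration,
# not any real assistant's implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class UserContext:
    usual_dinner_time: datetime   # learned from past ordering/eating habits
    current_activity: str         # e.g. "watching_tv", "exercising", "working"
    now: datetime


def should_push_delivery_offer(ctx: UserContext) -> bool:
    """Decide whether to surface a take-away suggestion right now."""
    near_dinner = timedelta(0) <= (ctx.usual_dinner_time - ctx.now) <= timedelta(minutes=45)
    sedentary = ctx.current_activity in {"watching_tv", "browsing", "gaming"}
    return near_dinner and sedentary


def build_offer(ctx: UserContext) -> str:
    # In the background, pick whatever minimizes friction: biggest nearby
    # discount, shortest delivery time (placeholder values here).
    best_discount = "20% off"
    eta_minutes = 25
    return f"Dinner soon? {best_discount} pizza, at your door in ~{eta_minutes} min. Order?"


if __name__ == "__main__":
    ctx = UserContext(
        usual_dinner_time=datetime(2018, 6, 1, 19, 0),
        current_activity="watching_tv",
        now=datetime(2018, 6, 1, 18, 30),
    )
    if should_push_delivery_offer(ctx):
        print(build_offer(ctx))
```

The point of the sketch is not the code itself but how few signals it needs: a habit, a current activity, and a friction-reducing offer are enough to move behavior, repeatedly and invisibly.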

Tech writer Tristan Greene sums up the real threat of AI as “people without ethics using literal propaganda machines for planet-scale social engineering efforts” [4]. I would echo his sentiment and add that such use cases are the ones that deserve, and must receive, political and social focus for reform, far more directly than the proverbial “killer robots.”

[1] Grassegger, H., & Krogerus, M. (2017, January 28). The Data That Turned the World Upside Down. Retrieved from https://motherboard.vice.com/en_us/article/mg9vvn/how-our-likes-helped-trump-win

[2] Polonski, V. (2018, July 17). How artificial intelligence conquered democracy. Retrieved from https://theconversation.com/how-artificial-intelligence-conquered-democracy-77675

[3] Yearsley, L. (2017, June 05). We need to talk about the power of AI to manipulate us. Retrieved from https://www.technologyreview.com/s/608036/we-need-to-talk-about-the-power-of-ai-to-manipulate-humans/

[4] Greene, T. (2018, March 21). Killer robots? Cambridge Analytica and Facebook show us the real danger of AI. Retrieved from https://thenextweb.com/artificial-intelligence/2018/03/21/killer-robots-cambridge-analytica-and-facebook-show-us-the-real-danger-of-ai/

 


One comment on “AI’s Greatest Trick”

  1. Great post Zac, and the connection to the Usual Suspects really brought it home. I wonder if we can use AI to help users understand when they are being targeted by AI. Can you imagine a browser plugin/bot that would recognize the ads a user is seeing and help them understand why?

