True or False.

Last week, our guest speaker Craig Martell spoke about how AI is exaggerated in movies and is highly unlikely to come in the form of the Terminator or Skynet taking control of your vehicle and driving it off the road. In reality, AI is just advanced statistics, using algorithms like logistic regression and XGBoost to fit data points so that it can make accurate predictions in a given scenario. While this may help some people worried about robots taking over the world relax, the truth may be even more daunting. If AI were going to be used to attack humanity, it wouldn't need a clunky, heavy, and slow body. It would strike at the place where humans spend most of their time, where they communicate the most, and where their emotions are most vulnerable to manipulation. It would strike you in the comfort and security of your home, on one of the many digital screens that keep you connected to the world as you know it, or the world as you think you know it, anyway.

In 2017, the University of Oxford published a set of reports revealing the use of social media bots to manipulate public opinion in nine major countries: Poland, Brazil, Canada, Germany, Ukraine, Taiwan, China, Russia, and the U.S. [1]. The results were disturbing: in Russia alone, over 45% of highly active Twitter accounts were bots. For those who are unfamiliar with how bots work, most internet bots are software applications that run automated scripts to perform simple and repetitive tasks, such as repeatedly making a Facebook post or retweeting Twitter posts. Although this may sound harmless, when you string these bots together you can easily drown out real, reasonable news and give the illusion of large-scale support for fabricated news. As one U.S. report puts it, "The illusion of online support for a candidate can spur actual support through a bandwagon effect. Trump made Twitter center stage in this election, and voters paid attention" [2]. In Russia, the majority of digital propaganda is used to deal with internal threats to the government's stability and to limit freedom by drowning out voices opposed to Putin's regime, providing an illusion of overwhelming consensus in support of Putin. The report on Russia demonstrates just how dangerous internet propaganda tools can be and how they can be used to control people.
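To make the mechanics concrete, here is a minimal Python sketch of the amplification idea described above. Everything in it is a hypothetical stand-in: a real bot would call a social platform's posting API, whereas this toy version just appends strings to a list. It only illustrates how trivially a handful of identical scripts can outnumber genuine posts.

```python
# Hypothetical sketch: no real platform API is used. A "bot" here is just
# a function that posts the same message every time it runs.

def make_bot(message):
    """Return a 'bot': a function that posts a fixed message to a feed."""
    def post(feed):
        feed.append(message)
    return post

# One genuine post versus a small botnet repeating a fabricated claim.
feed = ["real user: local election results certified"]
botnet = [make_bot("bot: the election was rigged!") for _ in range(50)]

for bot in botnet:
    bot(feed)

fabricated = sum(1 for post in feed if post.startswith("bot:"))
print(f"{fabricated} of {len(feed)} posts come from bots")
# -> 50 of 51 posts come from bots
```

The single genuine post is outnumbered 50 to 1, which is the "illusion of large-scale support" the Oxford reports describe, scaled down to a few lines of code.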

Meanwhile, China has been reported as the world's worst abuser of internet freedom through its strict online censorship, limits on online anonymity, and imprisonment of dissidents found online. In the Philippines, a "keyboard army" of minimum-wage employees was assembled to censor and manipulate information online in a semi-autonomous way and to provide the illusion of support for harsh government programs [3].

Some countries, such as Germany and Ukraine, are attempting to take a stand against the bot invasion and widespread propaganda. They have put in place preemptive laws making social networks responsible for the information posted on their websites, as well as counter-bot algorithms to prevent online manipulation of opinion [2]. However, with efficient and more human-like AI becoming increasingly prevalent, especially as demonstrated by Google's new voice "robot," online articles published by bots are becoming even more difficult to weed out [4]. If most of your news for the day comes from Facebook or Twitter posts, do your due diligence and make sure it's legitimate and, hopefully, from a real human source or a reliable bot run by a trusted source.

 

[1] http://comprop.oii.ox.ac.uk/research/working-papers/computational-propaganda-worldwide-executive-summary/

[2] https://www.theguardian.com/technology/2017/jun/19/social-media-proganda-manipulating-public-opinion-bots-accounts-facebook-twitter

[3] https://www.japantimes.co.jp/news/2017/11/14/world/governments-manipulating-media-bots-trolls-study-finds/#.W1D1z9JKiUk

[4] https://www.npr.org/sections/thetwo-way/2018/05/09/609820627/googles-new-voice-bot-sounds-um-maybe-too-real
