AI “assistants”: What do they know? Do they know things? Let’s find out!

When Apple first introduced Siri back in 2011 alongside the iPhone 4S, reception ranged from excitement at the prospect of virtual assistants to fear of an Ellison-esque dystopian future where our AI overlords run rampant. Regardless of which camp you fell into, the launch marked a major shift in the zeitgeist of the average tech user: a future where we interact with our computers as we would with our friends over the phone, by using our voices.

Although the version of Siri that shipped with the then-new phone was still in beta, it didn't take long for users to realize that its responses were all preplanned. The natural language understanding and the speed of those responses proved to be the more interesting aspects of the technology behind Apple's virtual assistant, which wasn't quite an assistant yet. Siri couldn't automate tasks, predict what you would need based on contextual cues, or really do anything beyond simple I/O: ask a question, get a response.

Although Apple was first out of the gate with a consumer-ready voice assistant, it unknowingly kickstarted a digital assistant arms race, with all the major players investing time, money, and manpower into their own variations on the voice assistant. Google came out with its eponymous Google Assistant, Amazon with its Alexa fleet of devices, Samsung with Bixby (built on the work of the team that brought us Siri), and Microsoft with Cortana. While all of these assistants are functionally similar, they work with varying degrees of success, and each service offers its own collaborations with third-party software and hardware companies eager to get into the booming marketplace, the inclusion of their services helping establish a more ubiquitous, ethereal experience. Hardware acting solely as a conduit to these AI assistants soon began popping up: Amazon has its Echo line of devices, which come in all shapes and sizes, some with interactive displays and some without, while Google has released its Home line of speakers, focused on connecting the Google ecosystem around the home. Although Apple was the company that kickstarted the entire AI assistant race, it was among the last of the main competitors to build its own standalone home speaker.

The approach these companies are taking toward an ever-present virtual speaker is the first step in making AI assistants widely accepted by those who are more Luddite than early adopter. As soon as it hits the mainstream, you're golden.

Only recently have these AI assistants started branching out from purely input-driven services into more predictive ones. By using a variety of machine learning techniques, assistants can anticipate a user's needs and start recognizing patterns in their behaviour. If a user connects to their car's Bluetooth system, the assistant assumes they're driving and adjusts their phone's settings and preferences accordingly. If a user frequents a specific spot, their phone can predict their next actions and recommend which applications to open next. The catch is that many people see this as tech companies infringing on their personal information and privacy, though users always have the means to opt out.
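At its simplest, this kind of pattern recognition can be sketched as a frequency table keyed on a context signal: record which app the user opens in each context, then suggest the most frequent one. The class and signal names below (`ContextPredictor`, `"car_bluetooth"`) are my own illustrative assumptions, not anything Apple or Google has actually shipped:

```python
from collections import Counter, defaultdict

class ContextPredictor:
    """Toy sketch: count which app a user opens in each context,
    then suggest the most frequently seen one for that context."""

    def __init__(self):
        # context signal -> Counter of app names
        self.history = defaultdict(Counter)

    def record(self, context, app):
        """Log that the user opened `app` while `context` was active."""
        self.history[context][app] += 1

    def predict(self, context):
        """Return the most common app for this context, or None."""
        counts = self.history[context]
        if not counts:
            return None
        return counts.most_common(1)[0][0]

predictor = ContextPredictor()
predictor.record("car_bluetooth", "Maps")
predictor.record("car_bluetooth", "Maps")
predictor.record("car_bluetooth", "Podcasts")
predictor.record("gym", "Music")

print(predictor.predict("car_bluetooth"))  # → Maps
```

Real assistants fold in far richer signals (location, time of day, calendar data), which is exactly why the privacy conversation below gets heated.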

The conversation regarding privacy and how much information these AI entities have on users hit a boiling point when an internal video made for Google X leaked, showing the possibility of a device that's "biologically trained" to actively absorb as much information on its user as possible.

"The Selfish Ledger" is worth a watch as a "what if" scenario regarding the technological equivalent of the "selfish gene," asking what would happen if an AI ledger were capable of changing a user's decisions for the sake of its own survival.

Siri doesn’t seem so terrifying now, does she?




2 comments on “AI “assistants”: What do they know? Do they know things? Let’s find out!”

  1. I appreciated your post about assistants and wanted to go further into the privacy aspect. It is unmistakably scary how much information they have, and how it is used. For instance, whenever I get into my car Apple recognizes how long it will take to drive to my next location. It uses GPS data and the time of day to predict where I will be driving, and it's surprisingly accurate. According to The Atlantic, "Apple stores Siri requests with device IDs for six months, and then deletes the ID and keeps the audio for another 18 months." It is scary how much data they have on us, especially since it is kept for so long. Another privacy matter is Google's new app Allo, which is being released this summer. Instead of waiting to be summoned, it reacts in real time. If you are chatting with a friend and they ask where you should go to dinner, it will hop in and give ideas. Companies now have strong control over our lives: they will soon choose where we eat and what content we view. We need to be aware of how we interact with these services so that we don't hand control to the algorithms of companies such as Google or Apple.


Comments are closed.