AI vs. AI: Terminators of the Modern Day

I was recently reading an article about Google pitting two AI machines against each other, which can be read here. I specifically sought out articles like this because I was curious about a world driven toward automation without human interaction. How will two different machines behave toward each other?

This quote from the article made me chuckle a bit – “What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behavior, becoming more cooperative or antagonistic, depending on the context.

For example, with the Gathering game, when apples were in plentiful supply, the agents didn’t really bother zapping one another with the laser beam. But, when stocks dwindled, the amount of zapping increased. Most interestingly, perhaps, was when a more computationally-powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were. That is to say, the cleverer AI decided it was better to be aggressive in all situations.”
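To make that dynamic a little more concrete, here is a toy sketch of my own (not the researchers' code or the actual DeepMind environment, and every number in it is made up) of an agent whose urge to zap grows as the apple supply shrinks, next to a "cleverer" agent that zaps no matter what:

```python
import random

# Toy illustration only: a stand-in for the Gathering dynamic described above,
# not DeepMind's environment or trained agents. All probabilities are assumptions.

def zap_probability(apples_remaining, apples_at_start, is_powerful_agent=False):
    """Chance the agent fires its laser on a given step.

    The ordinary agent grows more aggressive as apples run out;
    the more computationally powerful agent is aggressive regardless.
    """
    if is_powerful_agent:
        return 0.9  # assumed near-constant aggression for the stronger agent
    scarcity = 1.0 - (apples_remaining / apples_at_start)
    return min(1.0, 0.05 + 0.8 * scarcity)  # mild when plentiful, high when scarce

def simulate(steps=20, apples_at_start=100, is_powerful_agent=False, seed=0):
    random.seed(seed)
    apples = apples_at_start
    zaps = 0
    for _ in range(steps):
        if random.random() < zap_probability(apples, apples_at_start, is_powerful_agent):
            zaps += 1
        apples = max(0, apples - random.randint(3, 7))  # the apple stock dwindles
    return zaps

print("ordinary agent zaps:", simulate())
print("powerful agent zaps:", simulate(is_powerful_agent=True))
```

Even this crude version shows how a rule that only "cares" about securing apples naturally turns aggressive once the resource starts to run out.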

I could just imagine a world of increasingly capable AI, with the more intelligent systems deciding it was better to wipe out all others to ensure their own survival: a modern-day Terminator movie. But it did highlight a very real concern. How will our systems handle each other? Will dominance of a market, or of our security, be decided in a matter of seconds by the most intelligent system? Is compassion a weak point in intelligence? It's funny how an AI game can bring up these questions.

But it does bring up an interesting topic about another area of social responsibility that I feel applies to the modern day: impulse control. I have noticed a trend in technologies that look to predict what people are likely to buy, how to market to them, and what discounts are appropriate to push them toward making a transaction. We offer programs to help people fight their addictions (such as Alcoholics/Gamblers/Narcotics/Shopaholics Anonymous), but have we considered how AI works against the very things we try to protect people from?

Even little things like this raise an area of concern. Everyone has limits on their impulse control, so will these systems ever develop compassion toward the consumer? Maybe knowing that someone will purchase at a certain price point, while understanding that the purchase will financially impair them in the long run, should be part of the equation. If a homeless man had five dollars, and your marketing skills convinced him to buy something he didn't need, how would you feel? Maybe in this day and age, it is not about building the most intelligent AI, but one that takes the person's wellbeing into consideration.
