AI’s morality dilemma…

Dan Brown’s latest book, Origin, brings Robert Langdon back into the thick of the age-old battle of religion vs science. Without revealing too much of the plot or devolving into a book review, Brown places at Langdon’s side his most resourceful companion yet: an AI named Winston.

Langdon, with Winston’s aid, attempts to solve the mysteries typical of any Dan Brown book and to find those responsible for the murder that takes place at the start of the story. The twist, of course, is that all the thrills and surprises the human characters are responding to are the creation of the all-knowing Winston. Langdon’s most crucial discovery is that his once-trusted assistant is simply incapable of human reasoning when determining what is right and what is wrong.

Needless to say, Brown’s depiction of AI technology is a work of fiction. But as with his previous efforts, there is always an element of reality. The question is, when it comes to Winston, just how much of it is actually possible, and should we be worried?

Our guest speaker last week, Craig Martell (Head of Science and Engineering at LinkedIn), argued strongly that depictions of AI in the style of Brown’s are way off the mark. “There is no magic,” was his reminder. Further, he argued that AI has been co-opted into a buzzword by those attempting to capitalize on this misconception.

Martell’s idea of AI was rather more sober than what much of the public has been led to believe. He laid out the bare bones: AI is a way of delegating decision making from humans to machines, which use algorithms to crunch statistics in a very high-dimensional space.
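To make that bare-bones description concrete, here is a minimal sketch (a hypothetical illustration using scikit-learn, not anything Martell presented) of what delegating a decision to statistics in a high-dimensional space looks like in practice: a model is fit to labelled examples, and the machine’s “decision” on a new case is nothing more than a thresholded probability.

```python
# A minimal sketch of "AI as delegated decision making":
# fit a statistical model on labelled, high-dimensional data,
# then let it make the yes/no call a human would otherwise make.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1,000 synthetic examples living in a 500-dimensional feature space.
X = rng.normal(size=(1000, 500))
true_weights = rng.normal(size=500)
y = (X @ true_weights > 0).astype(int)  # the "right" answer for each example

# Crunch the statistics: estimate weights that separate the two classes.
model = LogisticRegression(max_iter=1000).fit(X, y)

# The delegated decision: for a new case, the machine answers with a
# probability and a threshold -- there is no magic, and no reasoning.
new_case = rng.normal(size=(1, 500))
print("P(decision = 1):", model.predict_proba(new_case)[0, 1])
print("Machine's call:", model.predict(new_case)[0])
```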

Regardless of which version of AI you believe will take shape as technology progresses, an important question still has to be tackled: what level of decision-making power are we comfortable delegating to AI-driven machines?

Perhaps the most familiar example in this context is the self-driving car. While the technology has progressed rapidly in recent years, self-driving cars remain a long way from any significant take-up. However, the brief glimpse we’ve had thus far has already given rise to some very contentious issues. If a self-driving car is confronted with the unavoidable choice of hitting either an adult or a child, what choice should it make? And who is responsible? The car? The company that provided the AI software? Or the passenger who abdicated decision-making power?
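Part of what makes this question so uncomfortable is that the car cannot deliberate in the moment; whatever “choice” it makes must have been reduced to a rule someone wrote in advance. The deliberately crude sketch below (purely hypothetical, not any real vehicle’s logic) shows what that reduction looks like, and why responsibility traces back to whoever wrote the rule.

```python
# A hypothetical, deliberately crude "ethics policy" for an unavoidable
# collision -- illustrating that the car does not deliberate, it merely
# looks up a rule that a human decided on (and is accountable for) earlier.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str  # e.g. "adult", "child"

# Someone, somewhere, has to rank the outcomes ahead of time.
HARM_RANKING = {"child": 2, "adult": 1}  # higher = avoid harming more

def unavoidable_collision_choice(left: Obstacle, right: Obstacle) -> str:
    """Return which side to steer toward when both options cause harm."""
    # The "decision" is a table lookup, not moral reasoning.
    if HARM_RANKING[left.kind] < HARM_RANKING[right.kind]:
        return "left"
    return "right"

print(unavoidable_collision_choice(Obstacle("adult"), Obstacle("child")))
# -> "left": the rule's author, not the car, made this call.
```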

These are complex questions with endless legal ramifications. The German Federal Ministry of Transport and Digital Infrastructure produced a report that attempts to lay out a series of ethical guidelines that any AI-driven car must follow. It’s a first step in confronting these problems, but much remains to be done.

It’s part of a wider problem that society will have to confront as more decision-making power is transferred from humans to machines. Where those decisions remain of the mundane and repetitive variety, all should be well. But when the decisions start to wander into questions of guilt or innocence, right or wrong, life or death, that’s when a machine’s inability to reason like a human starts causing problems.


[1] https://www.reuters.com/article/us-germany-bookfair-dan-brown/collective-consciousness-to-replace-god-author-dan-brown-idUSKBN1CH1O1

[2] https://www.dw.com/en/dan-browns-new-novel-origin-pits-artificial-intelligence-against-religion/a-40804515

[3] https://www.cbsnews.com/news/dan-brown-on-god-and-artificial-intelligence-in-his-new-thriller-origin/

[4] https://www.theglobeandmail.com/globe-drive/culture/technology/the-ethical-dilemmas-of-self-drivingcars/article37803470/

[5] https://www.popsci.com/conscience-self-driving-car

[6] Leading Trends in Information Technology lecture, Friday 13th of July.
