Artificially Intelligent Human Bias

The assertion made by Craig Martell[1] that Artificial Intelligence is a business model, a buzzword, rather than a field of substantial content, is not a new one. For him, the truth lies in the collection of data and in the statistically modeled world. This has been articulated many times by various entities – and it makes sense – after all, “[people] are irrational, inconsistent, weak-willed, computationally limited, heterogenous and sometimes downright evil”[2], and to make a robot a ‘person’ would involve adopting various human concepts and emotions organically, a process akin to magic. So far, we have been getting better at mimicking humans rather than trying to recreate them – take a look at Google Duplex, an automated phone system aimed at replicating human speech by introducing the intonation and pauses that come naturally in the way we speak.


As Martell rightly noted, the buzzword AI calls to mind a notion of automated intelligence, or mental capacity, like a human’s. The word itself can be a misnomer, and what lies behind it is an indispensable dependency on human input – what Solon terms “pseudo-AI”.[3] Solon’s article takes a good look at the ways in which companies that market themselves as AI have capitalised on human labour and input, such that humans are the ones ‘[pretending] to be AI pretending to be human’.[4]


This raises another question: how do we create a good, efficient, artificially intelligent entity? Arguably, through the collection and feeding of data – so much data that patterns can be extracted and fed into machines, enabling them to make predictions as accurately as possible.


This is problematic because, as we know, statistics are inherently biased and sometimes misleading or irrelevant, often requiring a closer look by the human eye. The use, or rather abuse, of statistics has fuelled post-factual politics, which has also prompted discussion of the proliferation of fake news and its possible remedies.[5] This also raises issues of potential prejudice, which may be one of the very real threats that AI brings to society: while prejudice and bias in the real world can be combated with knowledge, cognizance, and an intention to equalize things, feeding them into AI risks automating the problem, leaving the bias powerful and unchecked. Many stories of a ‘racist’ or ‘sexist’ AI have surfaced – one of the most troubling in the area of predictive policing, with its potential to lead to prejudicial flagging and a lack of impartiality when it comes to recognizing threats.[6] In some sense, automation will repeat our past mistakes under the façade of newfangled inventive technology.
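The worry that automation bakes bias in can be made concrete with a toy sketch. The snippet below is hypothetical – the data and function are invented for illustration, not drawn from any real predictive-policing system. A naive allocator that follows historical records simply reproduces whatever skew those records contain:

```python
from collections import Counter

# Hypothetical historical records: neighbourhood "A" is over-represented
# simply because it was patrolled more heavily in the past, not because
# underlying crime rates actually differ.
historical_records = ["A"] * 80 + ["B"] * 20

def allocate_patrols(records, total_patrols=10):
    """Naive 'predictive' allocation: distribute patrols in proportion
    to past records -- which reproduces the sampling bias unchecked."""
    counts = Counter(records)
    total = sum(counts.values())
    return {area: round(total_patrols * n / total)
            for area, n in counts.items()}

print(allocate_patrols(historical_records))  # → {'A': 8, 'B': 2}
```

The feedback loop is the troubling part: sending more patrols to “A” generates yet more records there, amplifying the original skew in the next round of training data.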


This is exacerbated because the way AI works has been termed a ‘black box’[7] – in general, AI systems lack transparency in their decision-making, and even if we could peek at the code behind them, it is highly doubtful that most people could understand it. Is transparency into the machine’s decision-making an answer, then? Some people think so.[8] Others are trying to find ways to identify and eliminate, or at least reduce, such biases, in a movement towards fairness and equality.[9]
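One simple form such a bias audit can take is a parity check on a model’s outputs. The sketch below is an illustrative toy – the function name and data are invented for this example, and real auditing toolkits compute this and many richer fairness metrics – but it shows the basic idea of comparing positive-prediction rates across groups:

```python
def positive_rate(predictions, groups, group):
    """Share of positive (1) predictions within one group."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups.
    A gap near 0 suggests the model treats groups alike on this one
    (deliberately narrow) criterion; a large gap flags a disparity
    worth a closer human look."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: a model that flags group "x" far more often.
preds  = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A metric like this does not explain *why* the black box behaves as it does, but it makes the disparity visible, which is the first step the fairness movement is arguing for.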


In closing, in our quest for artificial intelligence, whatever the term may mean, we need to consider how human we want our systems to be – should they come programmed with our human weaknesses? Or should we root for less human-like systems that can bring us a fairer world?


[1] Head of Science and Engineering at LinkedIn


One comment on “Artificially Intelligent Human Bias”

  1. I totally agree that AI/ML systems can be biased in several situations. The intelligence of AI/ML systems is restricted by their training data.
    We should also remember that AI/ML systems are not scalable across different demographics and sectors unless they are adequately trained and tested for different markets. So we definitely need transparency and diversity in the training process of AI/ML systems.


Comments are closed.