Is Artificial Intelligence Fair?
We live in the data realm. From every picture clicked to every card swipe, your data is being stored and fed into an algorithm that will make a decision about your daily life.
Artificial intelligence is used to make decisions about people in fields like advertising, lending, and the justice system, so it is imperative to understand how and why artificial intelligence, by default, is not fair.
You must be asking yourself this question! A decision taken by an algorithm is essentially a math equation, and math equations don't discriminate on human attributes like caste, color, or gender. But these equations are only fair because a team of data scientists asserts them to be so.
The Nuts and Bolts: "Our AI is only as good as the data we put into it."
In layman's terms, artificial intelligence involves a machine learning (ML) algorithm that takes historical data about a problem and builds either a decision path or a classifier, which is then used on new data instances of a similar problem. The more data, the better the model learns, and hopefully the more accurate the outcomes.
When an ML algorithm is trying to find a decision path, it is essentially looking for a TREND in the historical data. If the existing data carries a trace of bias against attributes like caste, color, or gender, the ML algorithm will most likely incorporate this bias into its model. Eventually, when that model is used, it can pave a path of unfortunate decisions for people of a particular caste, color, or gender.
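To make this concrete, here is a minimal sketch of the idea. The "model" below is deliberately naive: it just learns the historical approval rate per group from an invented loan dataset (all names and numbers are made up for illustration, not from any real system). Because the historical data favored one group, the learned rule reproduces that bias exactly.

```python
# Hypothetical sketch: a naive model that "learns the trend" in biased
# historical loan data simply turns the historical bias into a rule.
# The dataset and group labels are synthetic, for illustration only.

historical = [
    # (group, approved)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Learn the historical approval rate per group."""
    stats = {}
    for group, approved in records:
        ok, total = stats.get(group, (0, 0))
        stats[group] = (ok + approved, total + 1)
    # Approve new applicants iff their group was approved >50% of the time.
    return {g: ok / total > 0.5 for g, (ok, total) in stats.items()}

model = train(historical)
print(model)  # {'A': True, 'B': False}: the old bias is now a decision rule
```

No malicious intent is needed anywhere in this pipeline; the bias arrives entirely through the training data.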
So what if we remove these biased attributes from our algorithm?
One of the most important capabilities of ML is inferring missing attributes from the ones that are present. When we remove the protected attributes, we might think our algorithm is unbiased, but we are missing an important aspect of data dependency: correlation. Even with those attributes removed, the trends in correlated attributes will exert a correlated effect on new data points, and there is no prescribed way to tell at what point a correlation becomes unacceptable.
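This proxy effect can be sketched in a few lines. In the synthetic dataset below (again, every value is invented for illustration), a made-up "zip code" feature is perfectly correlated with the protected group. The model never sees the group attribute, yet its decisions by zip code match the biased decisions by group exactly.

```python
# Hypothetical sketch of proxy bias: we drop the protected attribute,
# but a correlated feature (a made-up zip code) stands in for it.
# All data is synthetic, for illustration only.

historical = [
    # (zip_code, group, approved): group correlates perfectly with zip_code
    ("10001", "A", True), ("10001", "A", True),
    ("10001", "A", True), ("10001", "A", False),
    ("60601", "B", False), ("60601", "B", False),
    ("60601", "B", True), ("60601", "B", False),
]

def train_without_group(records):
    """Learn approval rates per zip code, never looking at `group`."""
    stats = {}
    for zip_code, _group, approved in records:
        ok, total = stats.get(zip_code, (0, 0))
        stats[zip_code] = (ok + approved, total + 1)
    return {z: ok / total > 0.5 for z, (ok, total) in stats.items()}

model = train_without_group(historical)
# Decisions by zip code mirror the biased decisions by group:
print(model)  # {'10001': True, '60601': False}
```

Removing the column removed the label, not the bias: the correlated feature carries the same signal.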
Let us consider this with a few examples.
If asked: "Which do you think is more likely: dying from a shark attack or dying from a fireworks accident?"
You would search your memory for every fatal shark attack and fireworks accident you can recall, and make a decision based on the most memorable experiences. You would quickly jump to a conclusion and say shark attacks; they are probably more televised than fireworks accidents, but in reality the risk of dying from a fireworks accident is higher than that of a shark attack.
Example 2, taken from Khan Academy: human biases in contextual terms.
IBM's solution:
At the most recent AIES conference, IBM proposed a three-level rating system to determine the fairness of an AI system. From these independent evaluations, the user can determine the trustworthiness of each system based on its level of bias:
- It has no bias
- It has a data bias from its input
- It has the potential to reduce bias, whether or not the data is fair
Research is crucial to the advancement of fair, trustworthy AI. As we introduce new standards and ethical frameworks for AI, we need to take a step back and think about the quality of our input datasets and about how we perceive and work with AI. Because we increasingly rely on apps and services that use AI, improving the field in this way will benefit everyone: "We need to be assertive that AI is transparent, interpretable, unbiased, and trustworthy." The AI giants believe that AI itself holds the keys to mitigating bias in AI systems, which offers a high-level view of a fundamental flaw: the existing biases we hold as humans.