Is Artificial Intelligence Fair?
We live in the realm of data. From every picture clicked to every card swipe, your data is being stored and fed into an algorithm that will make a decision about your daily life.
Artificial Intelligence is used to make decisions about people in fields like advertising, lending, and the justice system, so it is imperative to understand how and why Artificial Intelligence is, by default, not fair.
You must be asking yourself this question! A decision taken by an algorithm is essentially a math equation, and math equations don't discriminate on human attributes like caste, color, or gender. But these equations are only "fair" because a team of data scientists asserts them to be so.
The Nuts and Bolts: "Our AI is only as good as the data we put into it."
In layman's terms, Artificial Intelligence involves a machine learning (ML) algorithm, a field dating back to the late 1950s, that takes historical data about a problem and builds either a decision path or a classifier, which is then used on new instances of a similar problem. The more the data, the better the model learns, and hopefully the more accurate the outcomes.
When an ML algorithm tries to find a decision path, it is essentially looking for a TREND in the historical data. If the existing data carries a trace of bias against attributes like caste, color, or gender, the ML algorithm will most likely incorporate this bias into its learned model. Eventually, when this model is used, it may pave a path of unfortunate decisions for people of a particular caste, color, or gender.
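To make this concrete, here is a minimal sketch of a trend-learning "model" trained on synthetic, deliberately biased loan data. The dataset, feature names, and the frequency-based learner are all hypothetical stand-ins for a real ML pipeline; they exist only to show how a trend in historical decisions gets baked into future predictions.

```python
from collections import defaultdict

# Hypothetical historical loan decisions in which group B was
# systematically denied regardless of income (synthetic data).
history = [
    {"group": "A", "income": "high", "approved": 1},
    {"group": "A", "income": "low",  "approved": 1},
    {"group": "B", "income": "high", "approved": 0},
    {"group": "B", "income": "low",  "approved": 0},
] * 100

def train(rows, features):
    """Learn per-feature-value approval rates: a toy stand-in
    for the trend-finding a real ML algorithm performs."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [approvals, total]
    for row in rows:
        for f in features:
            key = (f, row[f])
            counts[key][0] += row["approved"]
            counts[key][1] += 1
    return {k: approvals / total for k, (approvals, total) in counts.items()}

def predict(model, row, features):
    # Average the learned approval rates for this applicant's feature values.
    rates = [model.get((f, row[f]), 0.5) for f in features]
    return 1 if sum(rates) / len(rates) >= 0.5 else 0

model = train(history, ["group", "income"])
# A high-income applicant from group B is still rejected: the model
# reproduced the historical bias instead of judging on income alone.
print(predict(model, {"group": "B", "income": "high"}, ["group", "income"]))  # 0
```

Nothing in the code mentions fairness or intent; the rejection falls out purely of the trend in the training rows.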
So What if We Remove These Biased Attributes from Our Algorithm?
One of the most important capabilities of ML is inferring missing attributes from the ones that are present. When we remove these attributes, we might think our algorithm is unbiased, but we're missing an important aspect of data dependency: correlation. Even with the attributes removed, the trends in correlated attributes will show a correlated effect on new data points. And there is no prescribed way to tell at which point a correlation is acceptable or not.
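This proxy effect can be demonstrated in a few lines. In the hypothetical data below, a zip code happens to be perfectly correlated with the protected group attribute, so dropping the protected column changes nothing: the bias is fully recoverable from the proxy. All names and values here are illustrative.

```python
# Synthetic data where "zip" is a perfect proxy for the protected "group".
history = [
    {"zip": "11111", "group": "A", "approved": 1},
    {"zip": "22222", "group": "B", "approved": 0},
] * 100

# Drop the protected attribute entirely...
scrubbed = [{"zip": r["zip"], "approved": r["approved"]} for r in history]

def rate(rows, key, value):
    """Approval rate among rows where rows[key] == value."""
    sub = [r for r in rows if r[key] == value]
    return sum(r["approved"] for r in sub) / len(sub)

# ...yet the approval rate by zip code mirrors the rate by group exactly,
# because zip encodes the same information the removed column did.
print(rate(scrubbed, "zip", "11111"))  # 1.0 -- same as group A's rate
print(rate(scrubbed, "zip", "22222"))  # 0.0 -- same as group B's rate
```

Real data is rarely this cleanly correlated, which is exactly why there is no bright line for when a correlation becomes an unacceptable proxy.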
Let us consider this with a few examples.
Example 1:
If asked, "Which do you think is more likely: dying from a shark attack, or dying from a fireworks accident?"
You would search your heuristic memory for every fatal shark attack and fireworks accident you can recall, and decide based on the most memorable ones. You'd quickly jump to a conclusion and say shark attacks, since they're probably more televised than fireworks accidents, but in reality the risk of dying in a fireworks accident is greater than that of a shark attack.
Example 2 (taken from Khan Academy): Human biases in contextual terms
IBM's Solution:
At the most recent AIES conference, IBM proposed a three-level rating system to determine the fairness of an AI system. From these independent evaluations, a user can determine the trustworthiness of each system based on its level of bias:
- It has no bias
- It has a data bias from its input
- It has the potential to reduce bias whether the data is fair or not
Takeaways:
Research is crucial to the advancement of fair, trustworthy AI. As we introduce new standards and ethical frameworks for AI, we need to take a step back and think about the quality of our input datasets and how we perceive and work with AI. Because we increasingly rely on apps and services that use AI, improving the field in a way that benefits everyone means insisting that "AI is transparent, interpretable, unbiased, and trustworthy." The AI giants believe that AI itself holds the keys to mitigating bias in AI systems, which offers a high-level view of a fundamental flaw: the existing biases we hold as humans.
Sources:
1. https://www.ibm.com/blogs/policy/bias-in-ai/
2. https://medium.com/@hannawallach/big-data-machine-learning-and-the-social-sciences-927a8e20460d
4 comments on "Is Artificial Intelligence Fair?"
Thanks for the post @svvamsi! It's very informative and timely, as I'm researching this topic for a potential final project!
As you mentioned, AI/ML essentially becomes a math problem, so I'm very interested to learn how multiple dimensions of bias could affect how well we can reduce discrimination, especially when some of the biases are intertwined. It's indeed great progress by IBM; thanks again for sharing!
There's a misquote of the 3rd level of IBM's rating system in your post: it should be "It has the potential to 'introduce' bias whether the data is fair or not," not 'reduce'. Sorry for the nitpicking, I was a little stuck there... lol
Hello Hailun,
Thank you for the feedback on the post, and I hope it was thought-provoking.
Just to clarify IBM's solution no. 3: the point is about whether, when we create an AI algorithm, it has the potential to recognize and reduce bias in the input data, so that it learns without bias. I hope this answers your question.
Thanks,
Vamsi.
Thanks for posting. This is a very interesting topic. I totally agree with you that AI has bias. There are several places where bias can enter: training data bias, algorithm bias, adjustment bias, validation bias, etc. AI learns bias from human bias, since human beings have bias too. What's even scarier, though, is that AI sometimes also discriminates. This reminds me of a couple of interesting research papers I have read, in which researchers tried to use human-involved "bias" to attach discrimination. From this perspective, AI has bias here and there; however, if we could leverage the bias smartly, we may prevent the worst.
Here are the papers I mentioned, if anyone wants to dig in.
1. https://techcrunch.com/2016/10/07/google-aims-to-prevent-discriminatory-ai-with-equality-of-opportunity-method/
2. http://research.google.com/bigpicture/attacking-discrimination-in-ml/
3. https://arxiv.org/abs/1610.02413
typo, “attach discrimination” -> “attack discrimination”.