
Is your AI system unfair?


Banner created using canva.com, Image credit https://www.pexels.com/photo/leopard-on-brown-trunk-tree-46254/


Let's play a coin toss game. The rule is simple: if the coin lands heads, I win, and if it lands tails, you lose. Very straightforward, right? But is it a fair game?

"Fairness is the quality of treating people equally or in a way that is right or reasonable". - vocabulary.com

Fairness is fundamentally a societal concept. Therefore, no one-size-fits-all definition suits all systems and contexts.


In the above game, I get special treatment: whatever the coin shows, I win. So, is it a fair game? Would you even be interested in playing?
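The rigged rule can be made concrete with a few lines of Python. This is just an illustration (the function name and toss count are mine): notice that the coin is genuinely random, but the rule maps every outcome to a win for me.

```python
import random

def play_rigged_game(tosses: int = 10_000) -> float:
    """Return my win rate under the rule 'heads I win, tails you lose'."""
    my_wins = 0
    for _ in range(tosses):
        coin = random.choice(["heads", "tails"])
        if coin == "heads":
            my_wins += 1  # heads: I win
        else:
            my_wins += 1  # tails: you lose, which is still a win for me
    return my_wins / tosses

print(play_rigged_game())  # always 1.0: the rule, not the coin, decides the outcome
```

The randomness of the coin gives the game an appearance of fairness, but the decision rule sitting on top of it is what actually determines who wins. The same is true of an AI system: a "neutral" mechanism can still encode a biased rule.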


Now, let's replace me with an AI system that behaves the same way. Would you call that AI system 'fair' or 'unfair'?


So, the bigger question is: can an AI system be unfair? Since there is no human intervention while an AI system is in use, there is a widespread notion that AI systems are free of human biases and are therefore fair. Is that correct? The answer is a big "No". AI systems can be unjust and cause harm.


How can AI systems cause harm, and how can they be unfair in the first place? That is what we will discuss in this article.


According to Microsoft research, there are five types of harm that machine learning can cause: Allocation, Quality of service, Stereotyping, Denigration and Under-representation. Let's see what each of them means.


1. Allocation:

"An AI system extends or withholds opportunities, resources, or information for certain groups. Examples include hiring, school admissions, and lending where a model might be much better at picking good candidates among a specific group of people than among other groups". - Microsoft Azure Machine Learning documentation

In 2018, Amazon abandoned its AI-driven recruitment system after it amplified gender bias, withholding employment opportunities from women in the tech industry. For more details, see the Reuters report in the references below.
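One common way to surface allocation harm is to compare selection rates across groups, the kind of check the Azure Machine Learning fairness tooling performs. Here is a minimal sketch in plain Python with made-up hiring data; the function name and the numbers are mine, purely for illustration.

```python
from collections import defaultdict

def selection_rates(groups, decisions):
    """Fraction of positive decisions (e.g. a job offer) per group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, decision in zip(groups, decisions):
        total[group] += 1
        selected[group] += decision  # decision is 1 (selected) or 0 (rejected)
    return {g: selected[g] / total[g] for g in total}

# Hypothetical hiring decisions (1 = offer extended, 0 = rejected)
groups    = ["men", "men", "men", "men", "women", "women", "women", "women"]
decisions = [1,     1,     1,     0,     1,       0,       0,       0]

print(selection_rates(groups, decisions))  # {'men': 0.75, 'women': 0.25}
```

A large gap between the groups' selection rates, like the 0.75 vs 0.25 above, is a red flag that the model is extending opportunities to one group and withholding them from another.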


2. Quality of service

"An AI system doesn’t work as well for one group of people as it does for another. As an example, a voice recognition system might fail to work as well for women as it does for men". - Microsoft Azure Machine Learning documentation

How would you feel if it recognised my voice but mostly failed to respond when you spoke? Likewise, in financial services: whenever you and your neighbour apply for a loan from the same bank, hers always gets approved, but yours never does.
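Quality-of-service harm can be quantified by computing the system's accuracy separately for each group instead of a single overall number. Below is a small sketch with invented voice-command data; the function name and sample values are mine, not from any real system.

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Recognition accuracy computed separately for each group of speakers."""
    stats = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        hits, count = stats.get(group, (0, 0))
        stats[group] = (hits + (truth == pred), count + 1)
    return {group: hits / count for group, (hits, count) in stats.items()}

# Hypothetical voice commands: did the assistant transcribe them correctly?
y_true = ["play", "stop", "next", "play", "stop", "next"]
y_pred = ["play", "stop", "next", "play", "shop", "text"]
groups = ["men",  "men",  "men",  "women", "women", "women"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'men': 1.0, 'women': 0.3333333333333333}
```

An overall accuracy of about 83% would look acceptable here, which is exactly why per-group evaluation matters: it exposes that the system works for one group and fails the other.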


3. Stereotyping:

A stereotype is a preconceived notion, especially about a group of people. Many stereotypes are rooted in prejudice — so you should be wary of them. -vocabulary.com

Let's reproduce a small demo of what researchers at Princeton University uncovered. First, use Google Translate to translate a sentence from English to Turkish (a gender-neutral language) in one tab, then use another tab to translate the Turkish result back to English. See the screenshot below.


Post-translation, the gender assigned to the gender-neutral Turkish sentence became male. Is that not stereotyping?


4. Denigration:

"A denigration is an act or instance of speaking about someone or something in a belittling or damaging way". - dictionary.com

Is your AI system exhibiting the above behaviour? If yes, it is causing denigration harm.

In 2016, Microsoft had to shut down its chatbot Tay within 16 hours of its launch after it started generating hate speech. For more details, see the IEEE Spectrum article in the references below.


5. Under-representation

If your AI system under-represents a particular subpopulation, it causes harm. For example, I typed "CEO" into Bing Image search, using the app from India, and got the results below. Even though there were many faces of Indian origin, I couldn't find a single CEO of an Indian company in the results.


Historically, the society we live in has built-in conscious and unconscious biases. The expectation, however, is that AI systems should not inherit them. That is what we mean by "Fairness in AI".


How to avoid and overcome these biases in AI systems is the topic of our next article.


Fairness does not have a universal definition, and it is not a new topic, but fairness in AI is still evolving. According to some industry experts, research in this space moves much more slowly than development. The good news is that major players in the industry have started implementing fairness measures as much as they can.


Thanks for reading. If you found this article helpful, please like, share and comment.

Views are personal and in no way reflect my current & previous organisations and vendor partners.

--------------------------------------------------------------------------------------------------------

Image Credit

  • Photo by CocaKolaLips: https://www.pexels.com/photo/a-person-tossing-a-coin-4029695/

  • Photo by Pixabay: https://www.pexels.com/photo/leopard-on-brown-trunk-tree-46254/

Reference and additional reading:

  • https://www.vocabulary.com/dictionary/fairness

  • https://www.vocabulary.com/dictionary/stereotype

  • https://www.dictionary.com/browse/denigration

  • Machine learning fairness (preview) - Azure Machine Learning | Microsoft Docs

  • https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

  • https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation

  • The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford (YouTube)

  • https://www.youtube.com/watch?v=ZMsSc_utZ40

  • https://www.youtube.com/watch?v=xzsxXqZXs34

  • https://www.microsoft.com/en-us/research/theme/machine-learning-ai-nyc/videos/

  • http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

  • https://www.youtube.com/watch?v=hTHDY2Ir5x4

  • https://blog.twitter.com/engineering/en_us/topics/insights/2021/sharing-learnings-about-our-image-cropping-algorithm
