Politics of AI
Before we talk about politics in AI, let us first understand what AI is. There are numerous definitions on the internet, but I liked two of them, as mentioned below.
'AI is the creation of software that imitates human behaviours and capabilities.' - Microsoft Azure's documentation
As per Gartner Glossary, 'Artificial intelligence applies advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions.'
Again, we all know politics, but defining it is tricky. Still, we need a working definition to appreciate this article.
'Politics is the set of activities that are associated with making decisions in groups or other forms of power relations among individuals, such as the distribution of resources or status.' - Wikipedia
So if you have corrupt politicians, they will not make the right decisions, causing an improper distribution of resources or status. Sounds logical?
Now apply the same logic to an AI system, since it also makes decisions. Can we say a biased AI system will cause an improper distribution of resources or status, and thus societal harm? How is bias introduced into an AI system? Let us understand!
Who is developing and using it?
AI needs extraordinary computing power, and not every organisation can afford it. Only the powerful and wealthy can. Hence, AI systems are trained by the rich and powerful. Suppose they are building an AI system for candidate selection in recruitment. Do they know the impact of denying a job by mistake to a poor candidate from a weaker section of society?
Secondly, engineers dominate the field of AI. There are fewer philosophers, legal experts, and social scientists in this space. Yet, these engineers are building AI systems that classify whether convicts should be released on bail. Do they know how the criminal justice system works? Are they just working on the datasets they have, or have they spoken to any convicts to understand their psychology? Remember, we discussed the inclusion of all stakeholders in the previous article.
So, we will introduce unconscious biases if we don't include all the stakeholders. Makes sense?
What do you essentially use to develop an AI system?
In mid-1973, two electrical engineering professors at the University of Southern California, a graduate student, and the lab manager hurriedly searched their lab for a good image to scan for a conference paper.
They weren't happy with the stock of typical test images; they wanted something glossy, with a human face. Just then, somebody happened to walk in with a men's lifestyle magazine. The engineers tore away the top third of that magazine's centrefold and used it for the conference paper. That image became a standard test image and one of the most used images in computer history.
That men's magazine was Playboy, and the model was Lenna. Lenna was featured in many research papers, including IEEE publications. Critics point this out as an example of sexism and male domination in the STEM industry. Will this male-dominated AI industry create an AI system that is fair to all genders?
We don't want AI to learn from past biased data. So, how will you train your model then? Food for thought!
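To make the "learning from biased data" problem concrete, here is a minimal sketch in Python. The dataset, group labels, and hire rates are entirely made up for illustration; the "model" is just a naive estimate of past hire rates per group, not any real recruitment system.

```python
# A sketch of how historical bias leaks into a model.
# All data below is hypothetical, invented purely to illustrate the point.

from collections import defaultdict

# Hypothetical past hiring records: (group, qualified, hired).
# Group B candidates were rarely hired, even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Train": estimate P(hired | group) straight from the biased records.
counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total seen]
for group, _qualified, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict_hire(group):
    hired, total = counts[group]
    return hired / total >= 0.5  # naive threshold on the past hire rate

# Two equally qualified candidates get different outcomes,
# purely because the model mirrors past decisions.
print(predict_hire("A"))  # True  (A was hired 4/4 times historically)
print(predict_hire("B"))  # False (B was hired only 1/4 times)
```

The model never sees the candidates' actual qualifications; it simply reproduces the pattern in its training data, which is exactly the trap the paragraph above describes.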
I have provided only a few examples; many more questions need answering in this space. The good news is that organisations have started setting up AI governance bodies and recognising the importance of "Responsible AI", which should address this politics and power play.
Thanks for reading. I hope this article has given you an understanding of how AI can influence society. If you find this article interesting, please like, share and comment.
Views are personal.