Banner created using canva.com
A discussion on fairness is incomplete without the Trolley Problem. For those who don't know what it is, watch the video below.
As you can see, many people who supported pulling the lever to save five people at the cost of losing one did not support pushing the man off the bridge, even though the outcome is the same: saving five at the expense of one.
Imagine another situation: you are a doctor working at a hospital specialising in organ transplantation. There are six patients, and five of them need transplants: a heart, a kidney, lungs, and a liver. These five patients have a rare blood group, and they will die if you don't treat them in the next 48 hours.
The 6th patient, who came in for some other treatment, will also die if you don't treat him within the next 48 hours. However, you find that the 6th patient's heart, kidneys, lungs, and liver function perfectly, and you could use his organs to treat the first five patients.
Now, what will you do? There are two options in front of you. Remember, you are a doctor, and everyone in your community knows you.
Option 1: Treat the 6th patient. In this case, the other five patients will die, but he will survive.
Option 2: Use the 6th patient's organs to perform transplants on the other five patients. Five patients survive, but one dies.
If your answer to the Trolley Problem was to pull the lever, consistency suggests you should choose Option 2. But will you choose it as a doctor? Even though the numbers in the Trolley Problem and the hospital problem tally mathematically, most people don't select Option 2, because the consequence of the action is directly taking a life. Hence, as a medical practitioner, you wouldn't do it, even though your intent is to save five lives.
So why do our decisions contain these inconsistencies? Do they vary from place to place, situation to situation, and with our role? For example, in the Trolley Problem, pulling the lever and pushing the man off the bridge have the same consequence, yet many people chose redirecting the trolley. Why? Is it the harshness of physically pushing a man that daunts us?
Now you understand why it is challenging to have a universal definition of fairness. So let's come back to our original discussion on fairness in AI. The word 'Intelligence' in 'Artificial Intelligence' is misleading: AI systems aren't intelligent; they are trained.
So the question is: how do you want to train your driverless car, as an AI system, on the same trolley problem? In the event of brake failure, will it kill five people, or take a diversion and kill one? How are you planning to train it?
Who is responsible for these decisions? Can you escape the responsibility by saying, 'I am just an engineer'? When the system fails miserably and causes severe harm, will you say, 'It was the wrong algorithm'? Or, whenever there is bias, will you fall back on the standard response: 'Sorry, the data was insufficient, but we will improve by including more data points'?
In my view, we should be clear on the following two points:
First, fairness in AI is complex to define, but everyone, from the creator to the user, should be involved in decision-making. It should be treated not as an engineering problem but as a societal problem.
Second, if something goes wrong post-implementation, no one should get away with citing trivial excuses such as blaming the algorithm or the data. Instead, everyone should take responsibility: users, policymakers, creators, and engineers.
Thanks for reading. If you find this article interesting, please like, share and comment.
Views are personal and in no way reflect those of my current or previous organisations and vendor partners.
References and Additional Reading: