No Free Lunch Theorem
In the 1870s, saloons (bars) in the United States started offering free lunches to customers who purchased at least one drink. However, the food on the lunch menu was intentionally very salty, so those who ate it ended up buying a lot of beer. Eventually, customers paid much more than a meal and a drink would have cost.
Over time, this phrase became a punchline in newspapers, articles and even novels, conveying that it is impossible to get something valuable without paying for it somehow.
Why are we discussing this here? Well, let us go back in history. In the 1700s, the Scottish philosopher David Hume argued that inductive reasoning and causality cannot be justified rationally. He called this the "problem of induction".
Essentially, he believed it is not rationally justified to draw conclusions from past observations, because doing so assumes the future will resemble the past. Making a decision based on past observations is nothing but believing that what you have learned will suit all future situations; the decision depends on what you have experienced rather than on what is necessarily correct. Hence, Hume held that it cannot be rationally justified.
This idea became an inspiration for machine learning. More than two hundred years later, the American mathematician and computer scientist David Wolpert published a paper in 1996 concluding that you must make assumptions to distinguish the performance of learning algorithms. As a result, no single algorithm works best on all problems. This conclusion is called the "No Free Lunch theorem" for machine learning.
In 2002, he published another paper titled "The Supervised Learning No-Free-Lunch Theorems", in which he asserted that a model is a simplification of reality, that the simplification rests on assumptions (the model's bias), and that those assumptions fail in certain situations. So, basically, no machine learning model is free of bias. This limitation is the same as paying the price of the drinks to compensate for the free lunch. Moreover, different learning algorithms carry different biases.
Hence, an algorithm that performs very well on one problem will perform poorly on other problems where the same assumptions do not hold.
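To make this concrete, here is a minimal, self-contained sketch of that intuition (the two toy "learners", the datasets and all names are illustrative assumptions of mine, not anything from Wolpert's papers): each learner embeds a different inductive bias, and each wins on the dataset that matches its bias while losing on the one that does not.

```python
# Illustrative sketch of the NFLT intuition: two toy learners, two datasets.
# The threshold learner assumes labels flip once along the number line;
# the parity learner assumes labels depend on whether round(x) is even.

def fit_threshold(train):
    """Assumes the label is 1 when x exceeds some threshold (a 'monotone' bias)."""
    best_t, best_acc = 0.0, -1.0
    for t in [x for x, _ in train]:
        acc = sum((x > t) == bool(y) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x: int(x > best_t)

def fit_parity(train):
    """Assumes the label depends on the parity of round(x) (a 'periodic' bias)."""
    even_ones = sum(y for x, y in train if round(x) % 2 == 0)
    even_total = sum(1 for x, _ in train if round(x) % 2 == 0)
    even_is_one = even_ones * 2 >= even_total  # which parity maps to label 1?
    return lambda x: int((round(x) % 2 == 0) == even_is_one)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# Dataset A: labels follow a threshold rule. Dataset B: labels follow parity.
data_a = [(x / 10, int(x / 10 > 2.5)) for x in range(50)]
data_b = [(float(x), int(x % 2 == 0)) for x in range(50)]

for name, data in [("threshold data", data_a), ("parity data", data_b)]:
    th = accuracy(fit_threshold(data), data)
    pa = accuracy(fit_parity(data), data)
    print(f"{name}: threshold-learner={th:.2f}  parity-learner={pa:.2f}")
```

Each learner scores perfectly where its assumption holds and roughly at chance where it does not, which is exactly the trade-off the theorem describes.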
Now, let's say your objective is to maximise the model's performance, or in other words its predictive accuracy; then the model cannot also be fair. Makes sense?
In our previous edition, we discussed the impossibility theorem: no algorithm can satisfy all three fairness metrics at once, i.e. 1) predictive parity, 2) false positive rate balance and 3) false negative rate balance. This phenomenon, too, is down to the "No Free Lunch theorem" (NFLT).
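For readers who missed that edition, here is a hedged sketch of what those three metrics measure (the numbers and group names are made-up illustrations, not real data): each is a per-group rate derived from the confusion matrix, and the impossibility result says they cannot all be equalised across groups in general.

```python
# Illustrative computation of the three fairness metrics, per group.

def rates(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        # Predictive parity: equal PPV = TP / (TP + FP) across groups.
        "ppv": tp / (tp + fp) if tp + fp else 0.0,
        # False positive rate balance: equal FPR = FP / (FP + TN) across groups.
        "fpr": fp / (fp + tn) if fp + tn else 0.0,
        # False negative rate balance: equal FNR = FN / (FN + TP) across groups.
        "fnr": fn / (fn + tp) if fn + tp else 0.0,
    }

# Toy labels and predictions for two demographic groups (illustrative only).
group_a = rates([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
group_b = rates([1, 0, 0, 1, 0, 0], [1, 1, 0, 1, 0, 0])

for name in ("ppv", "fpr", "fnr"):
    print(f"{name}: group A={group_a[name]:.2f}  group B={group_b[name]:.2f}")
```

In this toy run the two groups happen to match on PPV but differ on FPR and FNR, which is the kind of gap the impossibility theorem says you generally cannot close on all three fronts simultaneously.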
I am sure you can now see the link between the NFLT and fairness in machine learning, or machine learning in general.
Though some mitigation algorithms appear to challenge the NFLT in the fairness setting, we still need to make trade-offs between accuracy and fairness, or even among the fairness metrics themselves.
I hope you enjoyed reading this. If you find it interesting, please like, comment and share.
Views are personal.
Banner created using canva.com
Photo by Saveurs Secretes: https://www.pexels.com/de-de/foto/kostlich-indisches-essen-aufsicht-essensfotografie-5410401/
By Allan Ramsay - https://www.nationalgalleries.org/art-and-artists/60610/david-hume-1711-1776-historian-and-philosopher-1754, Public Domain, https://commons.wikimedia.org/w/index.php?curid=1367760
Additional Reading and References: