
Game theory to explainability


In game theory, a game is a set of circumstances in which two or more players, or decision-makers, contribute to an outcome. A strategy is the game plan a player follows, while the payoff is the gain a player receives for reaching the desired outcome.


In a cooperative or coalitional game, several players on the same side contribute to the desired outcome. So how should the payoff be distributed among them? The answer is the Shapley value. But what is it?

The Shapley value is a solution concept for fairly distributing gains and costs among several actors working in a coalition. It applies primarily when the actors' contributions are unequal, even though they cooperate with each other to obtain the payoff.

But why is it called the Shapley value? The concept is named in honour of Lloyd Shapley, who introduced it in 1951 and won the Nobel Memorial Prize in Economic Sciences in 2012.

So what does the Shapley value look like in practice? Say a coalition of four players won the game. How would you distribute the payoff if each player contributed to the success in a different proportion?




It is not simply the difference between the payoff with and without each member, because in a coalition game each player interacts with all the others. So how do you calculate an individual contribution? You take every possible coalition that could form without the player, compute the player's marginal contribution to each of them, and average those contributions. That average is the player's individual contribution, and you repeat the same calculation for every other player. The sketch below works this out for a small toy game.
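To make this concrete, here is a minimal Python sketch that computes exact Shapley values for a made-up three-player coalition game. The player names and payoff numbers are purely illustrative assumptions, not taken from any real example.

from itertools import combinations
from math import factorial

# Hypothetical payoffs v(S) for every coalition S; the numbers are invented.
payoff = {
    frozenset(): 0,
    frozenset({"A"}): 10,
    frozenset({"B"}): 20,
    frozenset({"C"}): 30,
    frozenset({"A", "B"}): 40,
    frozenset({"A", "C"}): 50,
    frozenset({"B", "C"}): 60,
    frozenset({"A", "B", "C"}): 90,
}
players = ["A", "B", "C"]
n = len(players)

def shapley_value(player):
    # Average the player's marginal contribution over every coalition that
    # does not already contain the player, weighted by how often that
    # coalition appears across all possible orderings of the players.
    others = [p for p in players if p != player]
    value = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            s = frozenset(subset)
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            value += weight * (payoff[s | {player}] - payoff[s])
    return value

for p in players:
    print(p, shapley_value(p))   # A: 20.0, B: 30.0, C: 40.0

Notice that the three values add up to the grand-coalition payoff of 90; this "efficiency" property is what makes the split fair.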


Isn't translating this concept to model explainability relatively straightforward? More than sixty years later, in 2017, Scott Lundberg and Su-In Lee published a paper introducing SHAP (SHapley Additive exPlanations).



Once the black-box model is built, they derived a simplified local input for each original input and fit a simplified model on it, which they called an explanation model. The idea is that if the simplified input is roughly equal to the original input, the explanation model's output should be approximately the same as the original black-box model's output. So essentially, they created a simplified model on top of the original model to explain it, and this explanation model uses Shapley values as its feature coefficients. A minimal sketch of this additive form follows.
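As a rough illustration of this idea, here is a tiny sketch of the additive explanation model described in the SHAP paper: the prediction is approximated by a base value plus the Shapley value of every feature that is "switched on" in the simplified input. The numbers below are placeholders, not real attributions.

def explanation_model(z_simplified, phi, phi_0):
    # g(z') = phi_0 + sum_i phi_i * z'_i, where z'_i is 1 if feature i is
    # present in the simplified input and 0 otherwise.
    return phi_0 + sum(p * z for p, z in zip(phi, z_simplified))

phi_0 = 0.3                # base value, e.g. the average model output
phi = [0.15, -0.05, 0.20]  # illustrative per-feature Shapley values
print(explanation_model([1, 1, 1], phi, phi_0))  # 0.60, approx. the black-box output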

So basically, they did not try to open up the internals of the complex black-box model in order to explain it. Instead, they computed Shapley values from the simplified inputs and derived the feature importance from the explanation model.
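In practice, the open-source shap package wraps all of this up for you. The following is a rough sketch, assuming shap and scikit-learn are installed; exact function names and behaviour can vary between library versions, so treat it as an outline rather than a recipe.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train the "black box" model we want to explain.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Build the explanation model and compute per-feature attributions.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

# A global view of which features matter most and in which direction.
shap.plots.beeswarm(shap_values)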

This was path-breaking work in the field of model explainability. However, adversarial attacks can still fool these explanation models, and we will discuss them in the following newsletter. Like AI fairness, this field of study is still evolving. I hope this article has given you a high-level sense of how one model explainability algorithm works. Thanks for reading.


Views are personal.


Image Credit:

Banner created using canva.com

Photo by PNW Production: https://www.pexels.com/photo/friends-sharing-snacks-to-each-other-7624927/

References and additional reading:

  1. https://arxiv.org/abs/1705.07874

  2. https://christophm.github.io/interpretable-ml-book/shapley.html

  3. https://www.youtube.com/watch?v=VB9uV-x0gtg&t=296s

  4. https://www.youtube.com/watch?v=u7Om2joZWYs

  5. https://www.investopedia.com/terms/s/shapley-value.asp

  6. https://en.wikipedia.org/wiki/Shapley_value

  7. https://www.nobelprize.org/prizes/economic-sciences/2012/shapley/facts/

  8. https://www.ubs.com/microsites/nobel-perspectives/en/laureates/lloyd-shapley.html

