Sujith Chandrasekaran

Why should I trust your model?


Nine out of ten machine learning models developed never make it to production, and one of the primary reasons is that they never get the senior executives' approval. Do you know why? Because the model is a black box to the senior executives, and the data scientists could not explain why it predicts what it predicts. If you don't understand something, you disapprove of it. That is only natural.


But is the senior executive the only stakeholder who needs to understand the model? No. There are at least two more: the end users and the regulators.


Even if the senior executives approve your model, the end users will want to understand it, because they have to act on its predictions.


For instance, suppose you have developed a model for diagnosing flu, and it is deployed in a clinic. Whenever it flags a patient as having flu, the physician will want to know why. Say that sneezing, fatigue and headache all contribute positively to a flu prediction. So when Mr Jack walks into the doctor's office with nothing but a sneeze, the model indicates "no flu". Dr John, however, starts flu treatment anyway: there is a pandemic going on, and he knows the model did not consider the "prevalence of a pandemic" as one of its input features. Because he understood why the model predicted "no flu", he was able to make that judgment call and start the medicine. Makes sense?


Providing a list of features that went into building the model does not, by itself, make it transparent. Not all variables have a linear relationship with the outcome, and not all variables carry equal importance.


For instance, suppose the 'number of months of credit history' turns out to be one of the variables in a risk-assessment model for loan approval. It is natural to assume a simple monotonic relationship with the target: the longer your credit history, the lower your risk profile.


So suppose two customers have identical profiles except that one has a longer credit history than the other, and both approach the bank for a personal loan. What do you expect the model to predict? All else being equal, the applicant with the longer credit history should get the lower risk profile. But, to your surprise, the applicant with more months of credit history gets the higher risk profile, exactly the opposite of what you expected. Now you start to doubt the model. As a banker, would you keep using it for risk profiling?
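To make the scenario concrete, here is a minimal sketch in Python. The data is entirely synthetic and the feature names are hypothetical; the point is only that a tree-based model trained on data with this shape can assign the higher risk score to the applicant with the longer credit history.

```python
# Minimal sketch with synthetic data and hypothetical feature names.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
months_history = rng.uniform(0, 600, n)   # months of credit history
income = rng.normal(60, 15, n)            # other (made-up) profile features
utilization = rng.uniform(0, 1, n)

# Synthetic "ground truth": risk falls with credit history up to ~400 months,
# then rises again (very long histories proxy for near-retirement age).
risk_logit = (0.5
              - 0.008 * np.minimum(months_history, 400)
              + 0.02 * np.maximum(months_history - 400, 0)
              + 2.0 * utilization
              - 0.02 * income)
p_default = 1 / (1 + np.exp(-risk_logit))
y = rng.binomial(1, p_default)            # 1 = high risk / defaulted

X = np.column_stack([months_history, income, utilization])
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Two applicants identical except for the length of their credit history.
applicant_short = np.array([[120, 60, 0.3]])   # 10 years of history
applicant_long = np.array([[540, 60, 0.3]])    # 45 years of history
print("predicted risk, 120 months:", model.predict_proba(applicant_short)[0, 1])
print("predicted risk, 540 months:", model.predict_proba(applicant_long)[0, 1])
```

On data like this, the second print statement comes out higher, which is exactly the counterintuitive behaviour described above.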


A deeper study of the variable reveals that the association between the number of months of credit history and the risk profile is non-linear, as shown below. On the x-axis we have the number of months of credit history, and on the y-axis we have the SHAP value, a number that quantifies how much that variable pushes the predicted risk up or down for each applicant.



Image credit: Scott Lundberg, Microsoft Research - Explainable Machine Learning with Shapley Values


As you can see, applicants with a credit history beyond a certain length (about 400 months in the chart above) come out as riskier. These applicants are nearing retirement age, so the variable is acting as a proxy for 'age', a protected attribute that was deliberately left out of the model.
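If you use the shap library, a dependence plot like the one above takes only a few lines. The sketch below continues the synthetic example from earlier; the model, the data and the feature names are assumptions for illustration, not the actual model behind the chart.

```python
# Continues the synthetic 'model' and 'X' from the sketch above.
import pandas as pd
import shap

# Wrap the features in a DataFrame so the plot gets readable axis labels.
X_df = pd.DataFrame(X, columns=["months_credit_history", "income", "utilization"])

# TreeExplainer computes SHAP values for tree ensembles such as the
# GradientBoostingClassifier trained above (values are in log-odds space).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_df)

# One dot per applicant: x = months of credit history, y = that feature's
# contribution (SHAP value) to the predicted risk for that applicant.
shap.dependence_plot("months_credit_history", shap_values, X_df)
```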

So essentially, the so-called simple model is not simple, and the complex model is not complex, until you understand which variables in your data affect the outcome and in what way. Transparency is therefore an essential prerequisite before you deploy a model for broader use.

I hope this gives a high-level understanding of machine learning model interpretability and explainability. We will discuss model interpretability in detail in the next newsletter. Views are personal.


Image credit:

  • Photo by Anna Tarazevich: https://www.pexels.com/de-de/foto/person-die-babys-hand-halt-5080678/

  • https://www.slideshare.net/0xdata/scott-lundberg-microsoft-research-explainable-machine-learning-with-shapley-values-h2o-world-nyc-2019/1

References and additional reading:

  • https://www.youtube.com/watch?v=ngOBhhINWb8&t=943s

  • https://arxiv.org/pdf/1602.04938.pdf



