
Explainable AI - an overview


In our previous edition, we discussed the definition of Explainable AI. In this edition, we will discuss how to interpret models, the taxonomy of explainable AI, and why explainability is essential even for data scientists before they deploy models in production.

"XAI is the ability of machines to explain the rationale to characterise the strength and weakness of a particular decision-making process and convey the sense of how they behave in the future." - Source: Accenture

Before we discuss how to interpret models, we need to understand the taxonomy of explainable AI.


At a very high level, there are two categories of interpretable machine learning models.


Intrinsic: The machine learning model is explainable because the underlying algorithm used to build it is simple. Examples: linear regression and decision trees.
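As a minimal sketch of this category (using synthetic data and hypothetical feature names, so the numbers are purely illustrative), both example models can be read directly from their fitted parameters:

```python
# Minimal sketch: intrinsically interpretable models on synthetic data.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)
feature_names = ["age", "income", "credit_history_months", "num_accounts"]  # hypothetical names

# Linear regression: each coefficient says how much the prediction moves per unit of the feature.
linear = LinearRegression().fit(X, y)
for name, coef in zip(feature_names, linear.coef_):
    print(f"{name}: {coef:+.2f}")

# Shallow decision tree: the learned if/else rules are directly human readable.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```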


Post-hoc: The machine learning model is a black box because of the complexity of the underlying algorithm, for example a deep neural network. Although you know what you gave as input, you cannot see how each variable contributes to the decision output. For these models, you have to perform additional work on top of the trained model to understand what it is doing, using specific, proven model interpretability techniques.
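As a hedged sketch of the post-hoc idea (again on synthetic data), you can train a black-box neural network and then run a separate technique, here scikit-learn's permutation importance, on top of the fitted model:

```python
# Minimal sketch: a post-hoc interpretability step applied on top of a black-box model.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The network itself does not tell us how each input contributed to a decision.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Post-hoc step: shuffle one feature at a time and measure how much the test score degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.4f}")
```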


Based on the scope of the explanation, we can categorise interpretability techniques as follows:


Local: Explain individual predictions. This is crucial from a compliance standpoint: under the "Right to explanation", for instance, a loan applicant should know why the bank approved or rejected the loan.


Global: Explain the model as a whole: which features the model uses overall and how important each feature is at the model level.
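As a rough sketch of the two scopes (a linear model on synthetic data with hypothetical feature names, chosen so the arithmetic stays transparent), a global view ranks features across the whole model, while a local view decomposes a single applicant's prediction:

```python
# Minimal sketch: global vs local explanations with a transparent linear model (synthetic data).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=4, noise=5.0, random_state=0)
feature_names = ["age", "savings_accounts", "credit_history_months", "occupation_code"]  # hypothetical

model = LinearRegression().fit(X, y)

# Global: how much each feature matters across the whole dataset (|coefficient| x feature spread).
global_importance = np.abs(model.coef_) * X.std(axis=0)
for name, score in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"global  {name}: {score:.2f}")

# Local: why this single applicant got this particular prediction.
applicant = X[0]
contributions = model.coef_ * (applicant - X.mean(axis=0))
print(f"\nprediction for applicant: {model.predict(applicant.reshape(1, -1))[0]:.2f}")
for name, contrib in zip(feature_names, contributions):
    print(f"local   {name}: {contrib:+.2f}")
```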


Based on how they work, we can categorise the interpretability techniques as follows:


Model-specific: Interpretability techniques that rely on the internals of a particular model's learning process. These techniques work only for that type of model, for instance, Integrated Gradients for neural networks.
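As a toy sketch of the idea behind Integrated Gradients (a NumPy implementation for a hand-built one-layer logistic model, so the weights and input are purely illustrative, not a production recipe), the technique leans on the model's own gradients and therefore does not carry over to models that have none:

```python
# Toy sketch: Integrated Gradients for a tiny logistic "network" (illustrative weights only).
import numpy as np

w = np.array([1.5, -2.0, 0.5])  # hypothetical learned weights
b = 0.1

def model(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid output

def gradient(x):
    p = model(x)
    return p * (1.0 - p) * w  # d(model)/dx for this specific model

def integrated_gradients(x, baseline, steps=50):
    # Average the gradient along a straight path from the baseline to the input,
    # then scale by the input-baseline difference (Riemann approximation of the integral).
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([gradient(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([0.8, 0.3, -1.2])  # hypothetical input
baseline = np.zeros_like(x)
attributions = integrated_gradients(x, baseline)
print("prediction:", model(x))
print("attributions:", attributions)
print("sum of attributions ~ f(x) - f(baseline):", attributions.sum(), model(x) - model(baseline))
```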


Model-agnostic: These techniques do not rely on model internals. They work by changing the input features and observing how those changes influence the output. As a result, these techniques are portable across all models.
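A minimal sketch of that perturbation loop (synthetic data, arbitrary perturbation size): because it only needs the model's prediction function, the same code would work unchanged for a tree ensemble, a neural network, or any other classifier with predict_proba:

```python
# Minimal sketch: a model-agnostic "what happens if I change this feature?" probe (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)  # any model with predict_proba works

instance = X[0].copy()
base_score = model.predict_proba(instance.reshape(1, -1))[0, 1]

# Perturb each feature by one standard deviation and watch how the output moves.
for i in range(X.shape[1]):
    perturbed = instance.copy()
    perturbed[i] += X[:, i].std()
    new_score = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
    print(f"feature_{i}: probability changes by {new_score - base_score:+.4f}")
```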


Now that we have covered the taxonomy, let us see how we can understand a model, in other words, how to explain one.


You explain the model using the data points used in the model-building process. These data points are called features or variables. In simple terms, you should be able to say which features contribute to the model's prediction and how much they contribute. For example, suppose 'age', 'number of savings accounts', 'number of months of credit history', and 'occupation' are the features that contribute to a loan approval model. Among these, the applicant's age might account for 50% of the model's overall feature importance, followed by credit history, and so on.
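As a hedged sketch of that loan example (synthetic stand-in data and a random forest, so the 50% figure above will not reproduce exactly), model feature importances can be normalised and reported as percentage shares:

```python
# Minimal sketch: reporting feature importances as percentages for a hypothetical loan model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["age", "num_savings_accounts", "credit_history_months", "occupation_code"]
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)  # stand-in for real loan data

model = RandomForestClassifier(random_state=0).fit(X, y)

# Normalise the importances so they can be read as "share of influence" percentages.
importances = model.feature_importances_
shares = 100.0 * importances / importances.sum()
for name, share in sorted(zip(feature_names, shares), key=lambda t: -t[1]):
    print(f"{name}: {share:.1f}% of the model's overall importance")
```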


Lastly, why is explainability essential even for data scientists?


  1. Understand what features are most important for the predictions.

  2. Debug unexpected behaviour from the model.

  3. Uncover potential sources of unfairness.

  4. Build trust in your model's decisions by generating local explanations.

  5. Validate if the model satisfies the regulatory requirements.

  6. Monitor the impact of model decisions on humans.

  7. Explain predictions to support the decision-making process.

  8. Refine modelling and data collection process.

  9. Present the model's predictions to stakeholders.


I hope this article has given you a high-level overview of Explainable AI. Thanks for reading. If you find this article interesting, please like, share and comment.

Views are personal.


Image Credit:

  • Banner created using canva.com

  • Photo by ThisIsEngineering: https://www.pexels.com/de-de/foto/ingenieurin-halt-prasentation-3862615/


