On the one hand, we want an ML algorithm to predict accurately; on the other hand, we want data scientists and ML engineers to explain its behaviour to us in simple language. Are these two opposing poles, or is such an explanation straightforward to give?

As you can see in Table 1, simple models are easy to understand but tend to be less accurate, whereas a complex model is usually more accurate but harder to understand.

A typical complexity-explainability trade-off is shown in Figure 1.

In simple terms, most of us studied y = mx + c in 10th grade. It is a linear equation and simple to explain. Moreover, if there is more than one x, say x1, x2, x3, ..., xn, and each has a linear relationship with the outcome, it is still easy to explain. The model equation looks like this: y = M1X1 + M2X2 + M3X3 + ... + MnXn, where each M is a coefficient, each X is a feature, and y is the predicted outcome.
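As a quick illustration (a minimal sketch using scikit-learn and synthetic data, so the coefficient values below are assumptions of the example, not from the article), fitting a linear model lets us read each M directly and explain it as "how much y changes per unit of X":

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # three features: x1, x2, x3
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]  # a known linear relation

model = LinearRegression().fit(X, y)
print(model.coef_)  # approximately [2.0, -1.0, 0.5], the M1..M3 in the equation above
```

Because the relationship really is linear, the fitted coefficients match the true ones, and each number has a plain-language meaning on its own.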

But how would you explain a non-linear equation like the one in Figure 2? One way is to build an approximation model on top of the complex model and explain the complex model through this simpler approximation. There are many methods for model interpretability. Regardless of the method, what makes an explanation meaningful, and what is explainable AI in the first place? Let us discuss this now.
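One common form of this approximation idea is a global surrogate model. The sketch below (using scikit-learn and synthetic data; the model choices are illustrative assumptions) trains a shallow decision tree to mimic a random forest's predictions, so the forest can be explained through the tree's readable if/else rules:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The complex, hard-to-explain model
complex_model = RandomForestClassifier(random_state=0).fit(X, y)
black_box_preds = complex_model.predict(X)

# The surrogate learns the complex model's outputs, not the raw labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_preds)

# Fidelity: how often the surrogate agrees with the complex model
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable decision rules
```

The fidelity score matters: an approximation model is only a trustworthy explanation to the extent that it actually agrees with the complex model it stands in for.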

### What is Explainable AI?

"Explainable AI is a research field on ML interpretability techniques whose aim is to understand machine learning model predictions and explain them in human-understandable terms to build trust with stakeholders."

- TensorFlow, Google Cloud

### What makes the explanation meaningful:

### Complete:

The interpretability method must be able to provide evidence and an explanation for every model output.

### Accurate:

The explanation must reflect the model's prediction as accurately as possible.

### Meaningful:

All stakeholders involved should be able to understand the explanation.

### Consistent:

The explanation must be stable for the same model in order to build stakeholder trust. For example, if an input feature is positively correlated with the output, the explanation cannot report it as negatively correlated for another set of inputs to the same model.
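A rough way to check this property (a minimal sketch on synthetic data, assuming a linear model for simplicity; real consistency checks would use the actual interpretability method's attributions) is to fit the same model on two disjoint subsets of inputs and verify that each feature's direction of effect agrees:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] \
    + rng.normal(scale=0.1, size=200)              # small noise

# For a linear model, the sign of a coefficient is the direction of
# the feature's effect; fit on two disjoint halves of the data.
coef_a = LinearRegression().fit(X[:100], y[:100]).coef_
coef_b = LinearRegression().fit(X[100:], y[100:]).coef_

consistent = bool(np.all(np.sign(coef_a) == np.sign(coef_b)))
print("direction of effect agrees across subsets:", consistent)
```

If a feature flipped sign between the two halves, the explanation would be telling stakeholders contradictory stories about the same model, which is exactly what the consistency requirement rules out.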

As you can see, explaining explainability is itself a challenging task. No wonder there is a separate research field on model interpretability as part of Responsible AI. I hope this article has given you an overview of Explainable AI and that you enjoyed reading it. Views are personal.

### Image Credit:

Photo by Andrea Piacquadio: https://www.pexels.com/de-de/foto/strenge-lehrerin-mit-buch-das-auf-gekritzelte-tafel-zeigt-3771074/
