Building Trust in ML: The Importance of Model Transparency
Explaining what your model does is just as important as building it.
We might be tempted to think that as long as the model's results are fine, the business is fine too.
However, we must build trust with everyone affected by the model, especially its users.
That is the aim of machine learning interpretability: making the inner workings of a model transparent so that its predictions can be understood and trusted.
There are several techniques for machine learning interpretability, including the following (see the code sketch after the list):
👉Feature Importance
👉Partial Dependence Plots
👉SHAP Values
👉LIME
👉Permutation Importance
👉Attention Mechanisms
👉Model Visualization
👉Counterfactual Analysis
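To make this concrete, here is a minimal sketch of two techniques from the list, permutation importance and partial dependence plots, using scikit-learn. The dataset and model are synthetic placeholders chosen purely for illustration, not part of any particular workflow; swap in your own data and estimator.

```python
# Minimal sketch: permutation importance + a partial dependence plot
# with scikit-learn. The data and model are synthetic placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data, purely for illustration.
X, y = make_classification(n_samples=1_000, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the test score drops. A large drop means the model leans on that
# feature heavily.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=42
)
for i, (mean, std) in enumerate(
    zip(result.importances_mean, result.importances_std)
):
    print(f"feature {i}: {mean:.3f} +/- {std:.3f}")

# Partial dependence: how the prediction moves, on average, as a single
# feature changes while the others are held at their observed values.
PartialDependenceDisplay.from_estimator(model, X_test, features=[0, 1])
plt.show()
```

Both techniques are model-agnostic, which makes them a good starting point before reaching for heavier tools like SHAP or LIME.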
These are only a few examples; there are certainly more techniques than the ones listed above.
But what matters most is not the specific technique; it's the transparency we gain into our model.
That is all for today! Please comment if there is anything else you would like to know from the Machine Learning and Python domain!