Model Interpretability with SHAP – F1 predictor

In applied machine learning, there is usually a trade-off between model accuracy and interpretability. Some models, such as linear regression or decision trees, are considered interpretable, whereas others, such as tree ensembles or neural networks, are used as black-box algorithms.

Gabriel Cypriano

1755 days ago

The examples contradict the notion that you have to either select an explainable model or a more complex – and therefore more accurate – one. The explanations given using an accurate and complex Gradient Boosting Machine model were as clear as a linear model could get and made total intuitive sense. This also highlights the strengths of SHAP in providing accurate and consistent explanations.
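To make the interpretability idea concrete, here is a minimal from-scratch sketch of the Shapley values that SHAP is built on: each feature's contribution is its weighted average marginal effect over all coalitions of the other features. This is an illustrative implementation, not the SHAP library's optimized code; the function name, the toy linear model, and the convention of filling "absent" features with a baseline value are all assumptions made for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the prediction f(x).

    'Absent' features are replaced by their baseline value, a common
    convention in SHAP-style explanations. Exponential in the number
    of features, so only suitable for tiny illustrative models.
    """
    n = len(x)
    features = list(range(n))
    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight for a coalition of this size.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in features]
                without_i = [x[j] if j in S else baseline[j] for j in features]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy "model": a linear function, so the Shapley values are known
# analytically (coefficient times the feature's shift from baseline).
model = lambda v: 3.0 * v[0] + 2.0 * v[1]
x, baseline = [1.0, 2.0], [0.0, 0.0]
print(shapley_values(model, x, baseline))  # [3.0, 4.0]
```

Note the key property the comment above alludes to: the values are consistent, in that they always sum to the difference between the model's prediction for x and its prediction at the baseline, whether the underlying model is a linear regression or a complex tree ensemble.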