Explainability
"My model is working perfectly! But why?"
This is what explainability is all about: the ability to tell why your model made the prediction it did, or, in other words, what impact each feature had on the final prediction.
There are many reasons why you might need explainability for your models. Some examples:
Trust: Models can be viewed as a black box that generates predictions; the ability to explain these predictions increases trust in the model.
Debugging: Being able to explain predictions based on different inputs is a powerful debugging tool for identifying errors.
Bias and Fairness: The ability to see the effect of each feature can aid in identifying unintentional biases that may affect the model's fairness.
For further reading on the subject, check out our blog about explainability.
Aporia lets you explain each prediction by visualizing the impact of each feature on the final prediction. This can be done by clicking on the Explain button near each prediction in the "Data Points" page of your model.
You can also interactively change any feature value, click Re-Explain and see the impact on a theoretical prediction.
Make sure your feature schema in the model version is ordered
When creating your model version, you'll need to make sure that the order of the features is identical to the order expected by your model artifact.
Instead of passing a regular `dict` as the features schema, you'll need to pass an `OrderedDict`. For example:
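The sketch below shows an ordered feature schema. The `aporia.init` and `aporia.create_model_version` arguments, as well as the model and feature names, are illustrative assumptions; check the SDK reference for the exact signatures.

```python
from collections import OrderedDict

import aporia

aporia.init(token="<your-token>", environment="production")  # illustrative init arguments

# The keys must appear in exactly the same order as the features
# expected by your model artifact.
features = OrderedDict([
    ("age", "numeric"),
    ("annual_income", "numeric"),
    ("credit_score", "numeric"),
])

predictions = OrderedDict([
    ("approved", "boolean"),
])

# Illustrative call - consult the Aporia SDK docs for the exact signature.
apr_model = aporia.create_model_version(
    model_id="credit-risk",
    model_version="v1",
    model_type="binary",
    features=features,
    predictions=predictions,
)
```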
Log Training + Serving data
Training data is required for Explainability. Please check out Data Sources - Overview for more information.
Upload Model Artifact in ONNX format
ONNX is an open format for Machine Learning models. Models from all popular ML libraries (XGBoost, Sklearn, Tensorflow, Pytorch, etc.) can be converted to ONNX.
To upload your model artifact, you'll need to execute:
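Here is a minimal sketch of uploading the artifact, reusing the `apr_model` object from the example above. The `upload_model_artifact` method name and its parameters are assumptions; check the Aporia SDK docs for the exact call.

```python
# Load the ONNX model artifact from disk.
with open("model.onnx", "rb") as f:
    model_artifact = f.read()

# Assumed upload call - verify the method name and parameters
# against the Aporia SDK documentation for your SDK version.
apr_model.upload_model_artifact(
    model_artifact=model_artifact,
    artifact_type="onnx",
)
```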
Here are quick snippets and references that may help you with converting your model.
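For example, a scikit-learn model can be converted with the `skl2onnx` package (`pip install skl2onnx`); the small random forest below is just placeholder training data for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a small example model (placeholder data).
X = np.random.rand(100, 3).astype(np.float32)
y = (X[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Declare the input signature: a float tensor with 3 features.
initial_types = [("input", FloatTensorType([None, 3]))]
onnx_model = convert_sklearn(model, initial_types=initial_types)

# Save the converted model to disk, ready to be uploaded as an artifact.
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```

Other libraries have similar converters, e.g. `onnxmltools` for XGBoost and the built-in ONNX exporters in PyTorch and TensorFlow.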