Explainability

"My model is working perfectly! But why?"

This is what explainability is all about: the ability to tell why your model predicted what it predicted or, in other words, what impact each feature had on the final prediction.

Why Explainability?

There are many reasons why you might need explainability for your models. Some examples:

  • Trust: Models are often viewed as black boxes that generate predictions; the ability to explain these predictions increases trust in the model.

  • Debugging: Being able to explain predictions based on different inputs is a powerful debugging tool for identifying errors.

  • Bias and Fairness: The ability to see the effect of each feature can aid in identifying unintentional biases that may affect the model's fairness.

Integrating Explainability in Aporia

Aporia lets you explain each prediction by visualizing the impact of each feature on the final prediction. To do this, click the Explain button next to any prediction on the "Data Points" page of your model.

You can also interactively change any feature value and click Re-Explain to see the impact on a hypothetical prediction.

Make sure your feature schema in the model version is ordered

When creating your model version, you'll need to make sure that the order of the features is identical to the order of features expected by your model artifact.

Instead of passing a plain dict as the features schema, you'll need to pass an OrderedDict. For example:

from collections import OrderedDict

import aporia

# Build the feature schema in order - you can use X_train.columns for this of course :)
features = OrderedDict()
features["sepal_length"] = "numeric"
features["sepal_width"] = "numeric"
features["petal_length"] = "numeric"
features["petal_width"] = "numeric"

apr_model = aporia.create_model_version(
    model_id="<MODEL_ID>",
    model_version="v1",
    model_type="multiclass",
    features=features,
    predictions={
        "variety": "categorical"
    }
)
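
Note that although plain dicts preserve insertion order on Python 3.7+, an OrderedDict makes the ordering requirement explicit.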

Log Training + Serving data

Training data is required for Explainability. Please check out Data Sources - Overview for more information.
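
As a rough sketch, logging might look like the following; the method names log_training_set and log_prediction follow the Aporia SDK's logging API, but treat the exact signatures as assumptions and verify them against the SDK Reference:

import pandas as pd

# A minimal sketch - verify exact signatures against the SDK Reference
# Log the training set used to fit the model (required for explainability)
apr_model.log_training_set(
    features=X_train,  # pd.DataFrame with columns in schema order
    labels=pd.DataFrame({"variety": y_train}),
)

# Log serving-time predictions as they are made
apr_model.log_prediction(
    id="prediction-123",  # hypothetical prediction ID
    features={
        "sepal_length": 5.1,
        "sepal_width": 3.5,
        "petal_length": 1.4,
        "petal_width": 0.2,
    },
    predictions={"variety": "setosa"},
)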

Upload Model Artifact in ONNX format

ONNX is an open format for Machine Learning models. Models from all popular ML libraries (XGBoost, Sklearn, Tensorflow, Pytorch, etc.) can be converted to ONNX.

To upload your model artifact, you'll need to execute:

apr_model.upload_model_artifact(
    artifact_type="onnx",
    model_artifact=onnx_model.SerializeToString()
)
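
Before uploading, it can help to sanity-check the converted model. Here is a minimal sketch using the onnx package's checker, assuming onnx_model is a ModelProto produced by one of the converters below:

import onnx

# Raises an exception if the converted model is structurally invalid
onnx.checker.check_model(onnx_model)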

Here are quick snippets and references that may help you with converting your model.

XGBoost

import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType

initial_types = [('features', FloatTensorType([None, X_train.shape[1]]))]

onnx_model = onnxmltools.convert_xgboost(xgb_model, initial_types=initial_types, target_opset=9)

LightGBM

import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType

initial_types = [('features', FloatTensorType([None, X_train.shape[1]]))]

onnx_model = onnxmltools.convert_lightgbm(lgb_model, initial_types=initial_types, target_opset=9)

Catboost

import onnx

# CatBoost supports native ONNX export via save_model
catboost_model.save_model("model.onnx", format="onnx")

onnx_model = onnx.load("model.onnx")

Scikit Learn

import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType

initial_types = [('features', FloatTensorType([None, X_train.shape[1]]))]

onnx_model = onnxmltools.convert_sklearn(skl_model, initial_types=initial_types, target_opset=9)

Keras

import onnxmltools

onnx_model = onnxmltools.convert_keras(keras_model, target_opset=9)

Tensorflow

import onnxmltools

# convert_tensorflow expects a TensorFlow model/frozen graph, not a Keras model
onnx_model = onnxmltools.convert_tensorflow(tf_model, target_opset=9)

Pytorch

# Please see https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
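
As a minimal sketch, exporting a PyTorch model typically uses torch.onnx.export; the variable pytorch_model and the input shape below are placeholders for your own model (see the tutorial above for details):

import torch

# Tracing requires an example input with the shape your model expects
dummy_input = torch.randn(1, 4)  # placeholder: batch of 1, 4 features

torch.onnx.export(
    pytorch_model,  # your trained torch.nn.Module
    dummy_input,
    "model.onnx",
    opset_version=9,
)

# Read the exported bytes back; they can be passed as model_artifact
# to apr_model.upload_model_artifact as shown above
with open("model.onnx", "rb") as f:
    onnx_model_bytes = f.read()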

For further reading on the subject, check out our blog about explainability.
