Aporia Documentation

SHAP values


This guide explains how to visualize SHAP values in Aporia to gain better explainability for your model's predictions and increase trust.

Ingest your SHAP values

To ingest your SHAP values into Aporia, add a column for each feature using the naming convention <feature_name>_shap.

For example, the SHAP column corresponding to a feature named featureX would be featureX_shap.
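The sketch below is a minimal, hypothetical example of producing such columns outside Aporia, assuming a scikit-learn model and the open-source shap package; the model, data, and feature names are illustrative, not part of Aporia's API.

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy training data with two features (illustrative only).
X = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "income": [40_000, 52_000, 81_000, 63_000],
})
y = [0.2, 0.4, 0.9, 0.7]

model = RandomForestRegressor(random_state=0).fit(X, y)

# Compute SHAP values for the rows you are about to store.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_rows, n_features)

# Add one extra column per feature, named <feature_name>_shap.
shap_df = pd.DataFrame(
    shap_values,
    columns=[f"{name}_shap" for name in X.columns],
    index=X.index,
)
dataset = pd.concat([X, shap_df], axis=1)

print(list(dataset.columns))
# ['age', 'income', 'age_shap', 'income_shap']
```

Storing this dataset alongside your predictions makes the <feature_name>_shap columns available to the training/serving SQL query described in the notes below.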

Please note:

  1. The SHAP columns should not be mapped to the version schema, but you must include them in your SQL query when integrating your training/serving dataset.

  2. _shap must be lowercase, and <feature_name> must match the case of the feature name in Aporia. If you use Snowflake, note that when a column is read directly from a table using SELECT *, the column name's original case is preserved. Otherwise, you can force Snowflake to preserve case by wrapping identifiers in double quotes. For example, SELECT 1 AS a, 2 AS "b" returns a table with two columns: A and b (see the query sketch after this list).
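As a hypothetical illustration of both notes, the sketch below builds a Snowflake serving-dataset query that keeps the *_shap columns in the same casing as the feature names in Aporia; the table and column names are assumptions, not Aporia requirements.

```python
# Feature names exactly as they appear in Aporia (illustrative).
feature_names = ["age", "income"]

# Double-quote the *_shap columns so Snowflake preserves their lowercase
# names; unquoted identifiers would come back upper-cased (e.g. AGE_SHAP).
shap_columns = ", ".join(f'"{name}_shap"' for name in feature_names)

query = f"""
SELECT id, prediction, age, income, {shap_columns}
FROM predictions_serving
"""
print(query)
# SELECT id, prediction, age, income, "age_shap", "income_shap"
# FROM predictions_serving
```

This is the kind of query you would use when integrating your training/serving dataset: the _shap columns are included in the query but are not mapped to the version schema.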

Explain your predictions

You can explore SHAP values via the Data Points cell as part of an Investigation Case.

When you click Explain, you can view all the available SHAP values as well as a textual business explanation that you can share with stakeholders.

Click on Explain to view the SHAP values of the chosen prediction
Copy the business explanation to share with stakeholders