
Overview

By now, you probably understand why monitoring your model is essential to keeping it healthy and up-to-date in production.

In the following section, you will learn how to set up relevant monitors for your model and customize them to your needs.

If this is your first time creating a monitor in Aporia, feel free to quickly go over the following basic monitoring concepts.

Monitor types

In general, monitors can be divided into four areas of interest:

  • Integrity - credible data is the foundation of a successful model. Monitoring for new values, the share of missing values, and whether all values fall within a reasonable range helps you ensure data quality and detect problems early (see the sketch after this list).

  • Performance - depending on your use case and KPIs, you can use different performance metrics to assess how well your model is doing and decide when it's best to retrain it.

  • Drift - drift in features or predictions can lead to model performance degradation. Monitoring both helps you spot such trends early and take action before they affect your business.

  • Activity - it's great to know that, after all your hard work, your model is out there making real-world decisions. Monitoring activity helps you share that with others and spot surprising changes in prediction volume that need further investigation.
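
To make these categories more concrete, here is a minimal, illustrative sketch of the integrity checks described above, written with plain pandas. The column names, sample data, and range limits are assumptions for illustration only and are not part of Aporia's API.

```python
# Illustrative integrity checks: missing values, new values, and value range.
# Column names, data, and thresholds are hypothetical.
import pandas as pd

training = pd.DataFrame({"age": [25, 32, 47], "country": ["US", "DE", "US"]})
serving = pd.DataFrame({"age": [29, None, 130], "country": ["US", "FR", "DE"]})

# Missing values: ratio of nulls per feature in the serving data.
missing_ratio = serving.isna().mean()

# New values: categories seen in serving but never seen in training.
new_countries = set(serving["country"].dropna()) - set(training["country"])

# Value range: rows whose values fall outside a reasonable, predefined range.
out_of_range = serving[(serving["age"] < 0) | (serving["age"] > 120)]

print(missing_ratio, new_countries, out_of_range, sep="\n")
```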

Comparison methods

Aporia provides you with several comparison methods:

  • Absolute values - thresholds or boundaries are defined by specific predefined values. The inspection data is a serving data segment of your choice.

  • Change in percentage - thresholds or boundaries are defined by a percentage change compared to the baseline. Both the baseline and the inspection data come from the same serving data segment (see the sketch after this list).

  • Anomaly detection - detects anomalous patterns compared to the baseline. Both the baseline and the inspection data come from the same serving data segment.

  • Compared to segment - thresholds or boundaries are defined by a percentage change compared to the baseline. The inspection data and baseline data can come from different serving data segments.

  • Compared to training - thresholds or boundaries are defined by a percentage change compared to the baseline. The baseline includes all reported training data, filtered by the same data segment as the inspection data.
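
As an illustration of the "change in percentage" method, here is a minimal sketch that compares a metric between a baseline window and an inspection window of the same segment. The metric values and the 20% threshold are hypothetical; in practice the baseline, the inspection window, and the threshold are whatever you configure in the monitor.

```python
# Hypothetical "change in percentage" comparison between two windows.
def percentage_change(baseline: float, inspection: float) -> float:
    """Relative change of the inspection value versus the baseline value."""
    return (inspection - baseline) / baseline * 100

baseline_missing_ratio = 0.02    # e.g. missing-value ratio over the last month
inspection_missing_ratio = 0.05  # e.g. missing-value ratio over the last week

change = percentage_change(baseline_missing_ratio, inspection_missing_ratio)
if abs(change) > 20:             # alert threshold defined on the monitor
    print(f"Alert: metric changed by {change:.0f}% compared to the baseline")
```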

It's time to create your own monitor! 🎬
