New Values

Why Monitor New Values?

Monitoring new values in categorical fields helps you locate and examine changes in the model's input.

For example, setting this monitor on a feature named state will help you discover a new region for which the model is asked to make predictions.

Comparison methods

For this monitor, the following comparison methods are available:

  • Change in percentage
  • Compared to segment
  • Compared to training

Customizing your monitor

Configuration may vary slightly depending on the comparison method you choose.

STEP 1: Choose the fields you would like to monitor

You may select as many fields as you want 😊

Note that the monitor will run on each chosen field separately.

The monitor supports both categorical and array field types (for categorical arrays).

For arrays, the monitor considers all the categories across the arrays together, not per dimension. Please note that by default, the monitor runs only over the first 500 categories seen within the arrays and ignores the rest. If your use case requires monitoring arrays with more unique values, please reach out so we can update your specific configuration.

For categorical fields, the monitor supports up to 256 unique values.
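To illustrate how array fields are handled, here is a minimal sketch of pooling categories from an array-valued feature into a single set. This is an illustration only, not Aporia's implementation; the function name and data shapes are hypothetical, and only the 500-category cap comes from the note above.

```python
# Illustrative sketch only -- not Aporia's implementation.
# Categories from an array-valued field are pooled into a single set,
# rather than being tracked per array dimension. Only the first 500
# distinct categories seen are considered (per the note above).

MAX_ARRAY_CATEGORIES = 500

def pooled_categories(array_values):
    """Collect distinct categories across all arrays, ignoring position."""
    seen = set()
    for arr in array_values:           # e.g. [["US", "CA"], ["US", "MX"], ...]
        for category in arr:
            if len(seen) >= MAX_ARRAY_CATEGORIES:
                return seen            # categories beyond the cap are ignored
            seen.add(category)
    return seen
```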

STEP 2: Choose the inspection period and baseline

For the fields chosen in the previous step, the monitor raises an alert if the number of new values in the inspection period, compared to the baseline values, exceeds your threshold.
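Conceptually, the check can be thought of as a set difference between the values seen in the inspection period and those in the baseline, counted against your threshold. The sketch below is illustrative only; the exact threshold semantics depend on the comparison method you chose, and the function and variable names are hypothetical.

```python
# Illustrative sketch only -- not Aporia's exact logic.
# An alert is raised when the number of values that appear in the
# inspection period but not in the baseline exceeds the threshold.

def should_alert(inspection_values, baseline_values, threshold):
    new_values = set(inspection_values) - set(baseline_values)
    return len(new_values) > threshold, new_values

# Example: the model starts receiving predictions for a new state.
baseline = {"NY", "CA", "TX"}
inspection = {"NY", "CA", "TX", "WA"}
alert, new = should_alert(inspection, baseline, threshold=0)
print(alert, new)  # True {'WA'}
```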

STEP 3: Calibrate thresholds

This step is essential to ensure you receive the right number of alerts for your needs. You can always readjust the thresholds later if needed.
