Value Range

Why Monitor Value Range?

Monitoring changes in the value range of numeric fields helps to locate and examine anomalies in the model's input.

For example, setting the monitor on a feature named hour_sin with the expected range -1 <= x <= 1 will surface cases where the model receives values outside that range, pointing to issues in the input pipeline.
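As a rough sketch of the idea (this is not Aporia's API, and the data below is made up), a value-range check simply flags any rows where the monitored feature falls outside its expected bounds:

```python
import pandas as pd

# Hypothetical batch of model inputs; hour_sin should always be in [-1, 1].
predictions = pd.DataFrame({"hour_sin": [0.42, -0.97, 1.73, 0.0]})

lower, upper = -1.0, 1.0
out_of_range = predictions[
    (predictions["hour_sin"] < lower) | (predictions["hour_sin"] > upper)
]

# The row with 1.73 is flagged -- a hint that the feature pipeline is broken.
print(out_of_range)
```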

Comparison methods

For this monitor, the following comparison methods are available:

  • Change in percentage
  • Absolute value
  • Compared to segment
  • Compared to training

Customizing your monitor

Configuration may vary slightly depending on the comparison method you choose.

STEP 1: Choose the fields you would like to monitor

You may select as many fields as you want (from features/raw inputs) 😊

Note that the monitor will run on each selected field separately.

STEP 2: Choose the inspection period and baseline

For the fields you chose in the previous step, the monitor will raise an alert if the value range observed during the inspection period deviates from the baseline's value range by more than your threshold boundaries.
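As a sketch of the underlying comparison (assumed semantics and toy numbers, not the platform's actual computation), the monitor looks at how far the inspection period's range extends beyond the baseline's range and alerts when that deviation crosses your threshold:

```python
# Toy values for a single monitored field.
baseline_values = [-0.98, -0.20, 0.40, 0.99]
inspection_values = [-0.95, 0.10, 1.35, 0.70]

baseline_min, baseline_max = min(baseline_values), max(baseline_values)
inspection_min, inspection_max = min(inspection_values), max(inspection_values)

# How far the inspection range extends beyond the baseline range on either side.
overflow = max(baseline_min - inspection_min, inspection_max - baseline_max, 0.0)

threshold = 0.2  # set when calibrating the monitor (absolute-value style)
if overflow > threshold:
    print(f"ALERT: value range exceeded the baseline's range by {overflow:.2f}")
```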

STEP 3: Calibrate thresholds

This step is important to ensure you receive the right volume of alerts for your needs. You can always readjust the thresholds later if needed.

Monitoring arrays

This type of monitor can also track the range of all values within an array or embedding field. The monitor validates that every element of the selected array/embedding fits the condition set in the monitor.

The monitor ignores any non-numeric values but makes a best-effort attempt to cast strings to numbers where possible. For example, the value "check" within an array is ignored, while the value "60" is handled as the number 60.
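A minimal sketch of that best-effort behavior (assumed semantics, not the platform's actual code): non-numeric elements are skipped, numeric-looking strings are cast, and the range check then runs on whatever remains:

```python
def numeric_elements(array):
    """Yield elements that are numbers or strings castable to numbers."""
    for value in array:
        if isinstance(value, (int, float)):
            yield float(value)
        elif isinstance(value, str):
            try:
                yield float(value)  # "60" becomes 60.0
            except ValueError:
                pass                # "check" is ignored

values = list(numeric_elements(["check", "60", 0.5, -2]))
print(values)                    # [60.0, 0.5, -2.0]
print(min(values), max(values))  # this range is checked against the monitor's condition
```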
