Querying metrics

To query metrics from Aporia, initialize a new client and call the query_metrics API:

import os
from datetime import datetime, timedelta

from aporia import (
    Aporia,
    MetricDataset,
    MetricParameters,
    TimeRange,
)
from aporia.sdk.datasets import DatasetType

aporia_token = os.environ["APORIA_TOKEN"]
aporia_account = os.environ["APORIA_ACCOUNT"]
aporia_workspace = os.environ["APORIA_WORKSPACE"]

aporia_client = Aporia(
    base_url="https://platform.aporia.com",  # or "https://platform-eu.aporia.com"
    token=aporia_token,
    account_name=aporia_account,
    workspace_name=aporia_workspace,
)

# Serving data from the last 7 days.
last_week_dataset = MetricDataset(
    dataset_type=DatasetType.SERVING,
    time_range=TimeRange(
        start=datetime.now() - timedelta(days=7),
        end=datetime.now(),
    ),
)

res = aporia_client.query_metrics(
    model_id="<YOUR_MODEL_ID>",  # ID of the model to query
    metrics=[
        MetricParameters(
            dataset=last_week_dataset,
            name="count",
        ),
    ],
)

print(f"The model had {res[0]} predictions last week")

Parameters

The query_metrics API has the following parameters:

model_id (str)
Model ID to query metrics for.

metrics (List[MetricParameters])
List of metrics to query.

A single call can request values for multiple metrics, which the API computes concurrently.
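Batching makes it easy, for example, to compare prediction volume across two time windows in one call. Below is a minimal sketch reusing the client from the snippet above; it assumes results are returned in the same order as the requested metrics, as the count example above suggests:

now = datetime.now()

this_week = MetricDataset(
    dataset_type=DatasetType.SERVING,
    time_range=TimeRange(start=now - timedelta(days=7), end=now),
)
prior_week = MetricDataset(
    dataset_type=DatasetType.SERVING,
    time_range=TimeRange(start=now - timedelta(days=14), end=now - timedelta(days=7)),
)

res = aporia_client.query_metrics(
    model_id="<YOUR_MODEL_ID>",
    metrics=[
        MetricParameters(dataset=this_week, name="count"),
        MetricParameters(dataset=prior_week, name="count"),
    ],
)

print(f"Predictions this week: {res[0]}, prior week: {res[1]}")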

MetricParameters

The MetricParameters object has the following fields:

name (str, required)
Metric name (see Supported functions).

dataset (MetricDataset, required)
Specifies what data to query (training / serving), which segment, and what timeframe.

column (str)
Name of the column to calculate the metric for. Required for every metric except count. For performance metrics, this should be the name of the prediction column, not the actual.

k (int)
K value for ranking metrics such as nDCG. Required only for ndcg_at_k, map_at_k, mrr_at_k, accuracy_at_k, precision_at_k, and recall_at_k.

threshold (float)
Threshold to use when calculating binary performance metrics. Required only when the prediction is numeric, the actual is boolean, and the metric is a binary performance metric such as accuracy, recall, precision, or f1_score.

custom_metric_id (str)
Custom metric ID. Required only when querying a custom metric.

baseline (MetricDataset)
Specifies what data to use as the baseline. Required only for statistical distances such as js_distance, ks_distance, psi, and hellinger_distance.
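To illustrate the conditional fields, here is a sketch that queries a drift metric against a training baseline, plus a binary performance metric with a threshold. The column names age and churn_probability are hypothetical placeholders, and last_week_dataset is the serving dataset defined in the first snippet:

# Drift metrics need a baseline; training datasets take no time_range.
training_baseline = MetricDataset(dataset_type=DatasetType.TRAINING)

res = aporia_client.query_metrics(
    model_id="<YOUR_MODEL_ID>",
    metrics=[
        # Statistical distance between last week's serving data and training data.
        MetricParameters(
            dataset=last_week_dataset,
            name="js_distance",
            column="age",  # hypothetical feature column
            baseline=training_baseline,
        ),
        # Numeric prediction vs. boolean actual, so a threshold is required.
        MetricParameters(
            dataset=last_week_dataset,
            name="accuracy",
            column="churn_probability",  # hypothetical prediction column
            threshold=0.5,
        ),
    ],
)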

MetricDataset

The MetricDataset object contains the following fields:

dataset_type (DatasetType, required)
Either DatasetType.SERVING or DatasetType.TRAINING.

time_range (TimeRange)
Time range to query (contains start and end fields). Do not pass this field for training data.

model_version (str)
Model version to filter by. Optional.

segment (MetricSegment)
Used to query metrics in a specific data segment. Contains id and value fields.
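As a final sketch, a MetricDataset can be scoped to a single model version and data segment. The MetricSegment import path below is an assumption (the document imports the other classes from the top-level aporia package), and the version name and segment id/value are placeholders:

from aporia import MetricSegment  # assumed import path

us_serving_v2 = MetricDataset(
    dataset_type=DatasetType.SERVING,
    time_range=TimeRange(
        start=datetime.now() - timedelta(days=7),
        end=datetime.now(),
    ),
    model_version="v2",  # placeholder version name
    segment=MetricSegment(id="<SEGMENT_ID>", value="US"),  # placeholder segment
)

Any MetricParameters can then point at this dataset to compute its metric over that slice only.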
