Custom metrics

This guide shows you how to add custom metrics to your model from code, using the Python SDK.

For more information on the syntax of custom metrics, check the Custom Metric Syntax documentation.

Defining Custom Metrics

To define new custom metrics, create aporia.CustomMetric objects:

import aporia

# Fraction of predictions that resulted in a purchase.
conversion_rate = aporia.CustomMetric(
    "Conversion Rate",
    code="""
        count(filter="purchased=TRUE") / count()
    """,
)

# Price of items purchased from the top-4 ranked candidates,
# relative to the total number of predictions.
model_revenue = aporia.CustomMetric(
    "Model Revenue",
    code="""
        sum(
            column="candidate_item_price",
            filter="purchased=TRUE AND ranking_index<=4"
        ) / count() * 2
    """,
)
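
To make these expressions concrete, here is a hedged sketch of the equivalent computation in plain pandas over a toy prediction log. The DataFrame and its values are illustrative only; the column names purchased, candidate_item_price, and ranking_index mirror the fields referenced above, and the sketch assumes the expression language applies ordinary left-to-right precedence to / and *.

import pandas as pd

# Toy prediction log; the columns mirror the fields used in the
# custom metric expressions above (illustrative data, not SDK output).
predictions = pd.DataFrame({
    "purchased": [True, False, True, False],
    "candidate_item_price": [10.0, 25.0, 7.5, 12.0],
    "ranking_index": [1, 2, 6, 3],
})

# Conversion Rate: count(filter="purchased=TRUE") / count()
conversion_rate = predictions["purchased"].sum() / len(predictions)  # 0.5

# Model Revenue: sum(column="candidate_item_price",
#                    filter="purchased=TRUE AND ranking_index<=4") / count() * 2
top4_purchases = predictions[predictions["purchased"] & (predictions["ranking_index"] <= 4)]
model_revenue = top4_purchases["candidate_item_price"].sum() / len(predictions) * 2  # 5.0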

In this example, we defined two custom metrics: conversion rate and model revenue. To add them to your model, pass them to the model object:

# model_version, platform_segment and country_segments are defined in the
# previous guides (Adding new models, Data Segments).
model = aporia.Model(
    "My Model",
    type=aporia.ModelType.RANKING,
    versions=[model_version],
    segments=[platform_segment, country_segments],
    custom_metrics=[conversion_rate, model_revenue],
)
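
Because custom metrics are plain Python objects, you can also build them programmatically. The sketch below generates one conversion-rate metric per top-k cutoff with an ordinary list comprehension; the metric names and the ranking_index filter are modelled on the examples above, and building metrics in a loop is a regular Python pattern rather than a dedicated SDK feature.

# One conversion-rate metric per top-k cutoff, built in a loop.
top_k_conversion = [
    aporia.CustomMetric(
        f"Conversion Rate (top {k})",
        code=f"""
            count(filter="purchased=TRUE AND ranking_index<={k}") / count()
        """,
    )
    for k in (1, 3, 5)
]

# Pass the whole list to the model, exactly like the metrics above:
# custom_metrics=[conversion_rate, model_revenue, *top_k_conversion]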