Aporia Documentation
  • πŸ“– Aporia Docs
  • πŸ€— Introduction
    • Quickstart
    • Support
  • πŸ’‘ Core Concepts
    • Why Monitor ML Models?
    • Understanding Data Drift
    • Analyzing Performance
    • Tracking Data Segments
    • Models & Versions
  • πŸš€ Deployment
    • AWS
    • Google Cloud
    • Azure
    • Databricks
    • Offline / On-Prem
    • Platform Architecture
  • 🏠 Storing your Predictions
    • Overview
    • Real-time Models (Postgres)
    • Real-time Models (Kafka)
    • Batch Models
    • Kubeflow / KServe
  • 🧠 Model Types
    • Regression
    • Binary Classification
    • Multiclass Classification
    • Multi-Label Classification
    • Ranking
  • 🌈 Explainability
    • SHAP values
  • πŸ“œ NLP
    • Intro to NLP Monitoring
    • Example: Text Classification
    • Example: Token Classification
    • Example: Question Answering
  • πŸͺ Data Sources
    • Overview
    • Amazon S3
    • Athena
    • BigQuery
    • Databricks
    • Glue Data Catalog
    • Google Cloud Storage
    • PostgreSQL
    • Redshift
    • Snowflake
    • Microsoft SQL Server
    • Oracle
  • ⚑ Monitors & Alerts
    • Overview
    • Data Drift
    • Metric Change
    • Missing Values
    • Model Activity
    • Model Staleness
    • Performance Degradation
    • Prediction Drift
    • Value Range
    • Custom Metric
    • New Values
    • Alerts Consolidation
  • 🎨 Dashboards
    • Overview
  • πŸ€– ML Monitoring as Code
    • Getting started
    • Adding new models
    • Data Segments
    • Custom metrics
    • Querying metrics
    • Monitors
    • Dashboards
  • πŸ“‘ Integrations
    • Slack
    • Webhook
    • Teams
    • Single Sign On (SAML)
    • Cisco
  • πŸ” Administration
    • Role Based Access Control (RBAC)
  • πŸ”‘ API Reference
    • REST API
    • API Extended Reference
    • Custom Segment Syntax
    • Custom Metric Syntax
    • Code-Based Metrics
    • Metrics Glossary
  • ⏩ Release Notes
    • Release Notes 2024
    • Release Notes 2023

Aporia Docs

Aporia is an ML observability platform that empowers ML teams to monitor and improve their models in production.


Data Science and ML teams rely on Aporia to visualize their models in production, and to detect and resolve data drift, model performance degradation, and data integrity issues.

Aporia offers quick, simple deployment and can monitor billions of predictions at low cloud cost. We understand that use cases vary and that each model is unique, which is why customization is at our core: users can tailor their dashboards, monitors, metrics, and data segments to their needs.

Monitor your models in 3 easy steps

Learn

Learn about data drift, measuring model performance in production across various data segments, and other ML monitoring concepts.
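
To make "data drift" concrete, here is a minimal sketch of the underlying idea: compare a feature's training distribution against its production distribution with a two-sample Kolmogorov-Smirnov test. This is a generic illustration with made-up data and thresholds, not Aporia's drift-detection algorithm:

```python
# Minimal illustration of data drift on one numeric feature. Generic sketch,
# not Aporia's drift algorithm; feature name and threshold are made up.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_ages = rng.normal(loc=35, scale=8, size=10_000)    # training baseline
production_ages = rng.normal(loc=42, scale=8, size=10_000)  # shifted in production

statistic, p_value = ks_2samp(training_ages, production_ages)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Drift detected on 'age': KS statistic = {statistic:.3f}")
```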

Connect

Connect to an existing database where you already store the predictions of your models.
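
For example, a real-time model might append one row per inference to a table that Aporia can later query. The sketch below shows this pattern for Postgres; the table and column names are hypothetical, so see the Storing your Predictions guides for the actual setup:

```python
# Hypothetical example of logging predictions to Postgres so a monitoring
# tool can query them later; the schema and names are illustrative only.
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=ml host=localhost user=ml_user")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS model_predictions (
            prediction_id TEXT PRIMARY KEY,
            model_version TEXT NOT NULL,
            occurred_at   TIMESTAMPTZ NOT NULL DEFAULT now(),
            features      JSONB NOT NULL,    -- model inputs
            prediction    DOUBLE PRECISION,  -- model output
            actual        DOUBLE PRECISION   -- label, backfilled once known
        )
    """)
    cur.execute(
        "INSERT INTO model_predictions (prediction_id, model_version, features, prediction) "
        "VALUES (%s, %s, %s, %s)",
        ("abc-123", "v7", Json({"age": 42, "plan": "pro"}), 0.87),
    )
conn.close()
```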

Monitor

Build a dashboard to visualize your model in production, and create alerts that notify you when something goes wrong.
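
Conceptually, an alert is a metric check wired to a notification channel. The sketch below illustrates that idea with a plain Slack incoming webhook and an invented missing-values metric; in Aporia itself, monitors and notification channels are configured in the platform (see Monitors & Alerts and Integrations) rather than hand-coded like this:

```python
# Conceptual sketch of an alert: evaluate a metric, notify a channel when it
# crosses a threshold. Placeholder URL; not how Aporia is configured.
import json
import urllib.request

def send_alert(webhook_url: str, message: str) -> None:
    """Post a message to a Slack incoming webhook."""
    payload = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        webhook_url, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)

# e.g., computed from the stored predictions for some data segment
missing_values_ratio = 0.12

if missing_values_ratio > 0.05:  # illustrative threshold
    send_alert(
        "https://hooks.slack.com/services/T000/B000/XXXX",  # placeholder URL
        f"churn-model: missing-values ratio reached {missing_values_ratio:.0%}",
    )
```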