Aporia Documentation
  • Welcome to Aporia!
  • 🤗Introduction
    • Quickstart
    • Support
  • 💡Core Concepts
    • Why Monitor ML Models?
    • Understanding Data Drift
    • Analyzing Performance
    • Tracking Data Segments
    • Models & Versions
    • Explainability
  • 🏠Storing your Predictions
    • Overview
    • Real-time Models (Postgres)
    • Real-time Models (Kafka)
    • Batch Models
    • Kubeflow / KServe
    • Logging to Aporia directly
  • 🚀Model Types
    • Regression
    • Binary Classification
    • Multiclass Classification
    • Multi-Label Classification
    • Ranking
  • 📜NLP
    • Intro to NLP Monitoring
    • Example: Text Classification
    • Example: Token Classification
    • Example: Question Answering
  • 🍪Data Sources
    • Overview
    • Amazon S3
    • Athena
    • BigQuery
    • Delta Lake
    • Glue Data Catalog
    • PostgreSQL
    • Redshift
    • Snowflake
  • ⚡Monitors
    • Overview
    • Data Drift
    • Metric Change
    • Missing Values
    • Model Activity
    • Model Staleness
    • New Values
    • Performance Degradation
    • Prediction Drift
    • Value Range
    • Custom Metric
  • 📡Integrations
    • Slack
    • JIRA
    • New Relic
    • Single Sign On (SAML)
    • Webhook
    • Bodywork
  • 🔑API Reference
    • Custom Metric Definition Language
    • REST API
    • SDK Reference
    • Metrics Glossary
Welcome to Aporia!

Aporia is an ML observability platform that empowers ML teams to monitor and improve their models in production.

Data Science and ML teams rely on Aporia to visualize their models in production, as well as detect and resolve data drift, model performance degradation, and data integrity issues.

Aporia offers quick, simple deployment and can monitor billions of predictions at low cloud cost. We understand that use cases vary and every model is unique; that's why customization is at our core, letting users tailor their dashboards, monitors, metrics, and data segments to their needs.

Monitor your models in 3 easy steps

Learn

Learn about data drift, measuring model performance in production across various data segments, and other ML monitoring concepts.
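To make "data drift" concrete, here is a minimal sketch of one common drift metric, the Population Stability Index (PSI), which compares the distribution of a feature at training time against production. This is illustrative only; it is not how Aporia computes drift internally, and the function name and bucketing scheme are our own.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (e.g. training data) and a
    production sample. Higher values indicate stronger drift.
    (Illustrative sketch -- not Aporia's internal implementation.)"""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(values, b):
        count = sum(1 for v in values
                    if lo + b * width <= v < lo + (b + 1) * width)
        # Clamp to avoid log(0) for empty buckets.
        return max(count / len(values), 1e-6)

    return sum(
        (frac(actual, b) - frac(expected, b))
        * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )

train = [0.1 * i for i in range(100)]
shifted = [0.1 * i + 3.0 for i in range(100)]
print(population_stability_index(train, train))    # identical data -> 0
print(population_stability_index(train, shifted))  # shifted data -> large PSI
```

A common rule of thumb is that PSI below 0.1 means little change, while values above 0.25 signal significant drift worth investigating.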

Connect

Connect to an existing database where you already store the predictions of your models.
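As a rough idea of what such a predictions store looks like, here is a minimal sketch of a predictions table. The table and column names are illustrative assumptions, not a required Aporia schema, and SQLite stands in for a production database like Postgres.

```python
import sqlite3

# Hypothetical predictions table; column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE model_predictions (
        prediction_id  TEXT PRIMARY KEY,
        model_version  TEXT,   -- lets you compare versions over time
        occurred_at    TEXT,   -- prediction timestamp
        feature_age    REAL,   -- raw model inputs...
        feature_income REAL,
        prediction     REAL,   -- model output
        actual         REAL    -- label, backfilled when it arrives
    )
""")
conn.execute(
    "INSERT INTO model_predictions VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("p-1", "v1", "2024-01-01T12:00:00Z", 35.0, 52000.0, 0.87, None),
)
row = conn.execute("SELECT prediction FROM model_predictions").fetchone()
print(row[0])  # 0.87
```

Storing features, predictions, and (later) actual labels side by side is what makes it possible to compute drift and performance metrics directly from your existing database.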

Monitor

Build a dashboard to visualize your model in production, and create alerts that notify you when something goes wrong.