Multiclass Classification

Multiclass classification models predict a single outcome out of more than two possible classes. In Aporia, these models are represented by the multiclass model type.

Examples of multiclass classification problems:

  • Is this product a book, movie, or clothing?

  • Is this movie a romantic comedy, documentary, or thriller?

  • Which category of products is most interesting to this customer?

Frequently, multiclass models output a confidence value or a score for each class.
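For intuition, here is a minimal sketch of where these per-class scores typically come from. It assumes a scikit-learn classifier trained on toy data; the model, data, and class names are illustrative only and are not part of the Aporia SDK.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: 4 numeric features, labels drawn from the three classes.
X_train = np.random.rand(60, 4)
y_train = np.random.choice(["book", "movie", "clothing"], size=60)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new sample: a single predicted class plus a probability per class.
x_new = np.random.rand(1, 4)
pred_class = clf.predict(x_new)[0]
pred_proba = dict(zip(clf.classes_, clf.predict_proba(x_new)[0]))
# pred_proba maps each class name to a score, e.g. {"book": 0.8, "clothing": 0.1, "movie": 0.1}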

Integration

To monitor a multiclass model, create a new model version with a string field representing the predicted class, and optionally a dict field with the probabilities for all classes:

import aporia

# Assumes the Aporia SDK has already been initialized for your account.
apr_model = aporia.create_model_version(
  model_id="<MODEL_ID>",
  model_version="v1",
  model_type="multiclass",
  features={
     ...
  },
  predictions={
    "product_type": "string",  # the predicted class
    "proba": "dict"            # per-class probabilities (optional)
  },
)
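For illustration, a filled-in schema might look like the following. The feature names and field types here are assumptions made for the sake of the example; the exact set of supported field types may differ, so treat them as placeholders.

apr_model = aporia.create_model_version(
  model_id="<MODEL_ID>",
  model_version="v1",
  model_type="multiclass",
  features={
    "price": "numeric",               # hypothetical feature
    "description_length": "numeric",  # hypothetical feature
    "seller_country": "string"        # hypothetical feature
  },
  predictions={
    "product_type": "string",
    "proba": "dict"
  },
)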

Next, connect to a data source or manually log predictions like so:

apr_model.log_prediction(
  id="<PREDICTION_ID>",  # unique ID, used later to match actuals to this prediction
  features={
    ...
  },
  predictions={
    "product_type": "book",  # the predicted class
    "proba": {               # probability for each class
        "book": 0.8,
        "movie": 0.1,
        "clothing": 0.1
    }
  },
)
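In practice, the logged values usually come straight from the model's output at inference time. A minimal sketch, reusing the clf classifier and x_new sample from the earlier sketch; the feature values and the use of a UUID as the prediction ID are illustrative assumptions.

import uuid

pred_class = clf.predict(x_new)[0]
pred_proba = dict(zip(clf.classes_, clf.predict_proba(x_new)[0]))

prediction_id = str(uuid.uuid4())  # keep this ID so you can log actuals for it later

apr_model.log_prediction(
  id=prediction_id,
  features={
    "price": 12.5,               # hypothetical feature values
    "description_length": 340,
    "seller_country": "US"
  },
  predictions={
    "product_type": pred_class,
    "proba": {k: float(v) for k, v in pred_proba.items()},
  },
)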

To log actuals for this prediction:

apr_model.log_actuals(
  id="<PREDICTION_ID>",  # same ID as the logged prediction
  actuals={
    "product_type": "book",  # the true class
    "proba": {               # ground truth expressed as a one-hot distribution
        "book": 1.0,
        "movie": 0.0,
        "clothing": 0.0,
    },
  },
)

If you don't need to monitor probabilities, you may omit the proba field.
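For example, a version schema without probabilities is the same call as above, minus the proba field:

apr_model = aporia.create_model_version(
  model_id="<MODEL_ID>",
  model_version="v1",
  model_type="multiclass",
  features={
     ...
  },
  predictions={
    "product_type": "string"
  },
)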
