Intro to NLP Monitoring

Whether it's text classification, information extraction, or question answering, use Aporia to monitor your Natural Language Processing models in production.



This guide will walk you through the core concepts of NLP model monitoring, including drift detection and model performance. 🚀

Throughout the guide, we will use a simple sentiment analysis model based on 🤗 HuggingFace:

>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis")

This downloads a default pre-trained model and tokenizer for Sentiment Analysis. Now you can use the classifier on your target text:

>>> classifier("I love cookies and Aporia")
[{'label': 'POSITIVE', 'score': 0.9997883439064026}]

Extract Embeddings

To effectively detect drift in NLP models, we use embeddings.

But... what are embeddings?

Textual data is complex, high-dimensional, and free-form. Embeddings represent text as low-dimensional vectors.

Various language models, such as Word2Vec and transformer-based models like BERT, can be used to obtain embeddings for NLP models. In the case of BERT, embeddings are usually vectors of size 768.
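Aporia computes drift for you, but to build intuition for why embeddings make drift detection possible, here is a minimal, hypothetical sketch (not Aporia's actual algorithm) that compares the centroid of a baseline batch of embeddings to a production batch using cosine distance:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity; 0 means the vectors point in the same direction
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(0)

# Hypothetical batches of 768-dim embeddings (the size BERT would produce);
# the production batch is deliberately shifted to simulate drifted text
baseline = rng.normal(loc=0.0, scale=1.0, size=(100, 768))
production = rng.normal(loc=0.5, scale=1.0, size=(100, 768))

drift_score = cosine_distance(baseline.mean(axis=0), production.mean(axis=0))
print(drift_score)  # larger scores suggest the text distribution has shifted
```

In practice, production tools use more robust distribution-distance measures over many embeddings, but the core idea is the same: drift in the raw text shows up as movement in embedding space.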

To get embeddings for our HuggingFace model, we'll need to do two things:

  1. Pass output_hidden_states=True to the model parameters.

  2. When we call pipeline(...), it does a lot of work for us: preprocessing, inference, and post-processing. We need to break this down into individual steps so we can extract the embeddings.

In other words:

classifier = pipeline(
    task="sentiment-analysis",
    model_kwargs={"output_hidden_states": True},
)

# Preprocessing
model_input = classifier.preprocess("I love cookies and Aporia")

# Inference
model_output = classifier.forward(model_input)

# Postprocessing
classifier.postprocess(model_output)
# ==> {'label': 'POSITIVE', 'score': 0.9998340606689453}

And finally, to extract embeddings for this prediction:

import torch

embeddings = torch.mean(model_output.hidden_states[-1], dim=1).squeeze()
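To see what this pooling does, here is a toy sketch with a stand-in tensor (hidden size 6 instead of BERT's 768): hidden_states[-1] has shape (batch, sequence_length, hidden_size), and averaging over dim=1 collapses the token axis into one fixed-size vector per input.

```python
import torch

# Toy stand-in for model_output.hidden_states[-1]:
# a batch of 1 text, 4 tokens, hidden size 6
last_hidden = torch.arange(24, dtype=torch.float32).reshape(1, 4, 6)

# Mean-pool over the token axis (dim=1), then drop the batch axis
embedding = torch.mean(last_hidden, dim=1).squeeze()
print(embedding.shape)  # torch.Size([6]) — one fixed-size vector per text
```

Because the token axis is averaged away, texts of any length map to vectors of the same size, which is what lets you store and compare embeddings across predictions.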

Storing your Predictions

To store your predictions along with their embeddings, you could use, for example, a Parquet file on S3 or a Postgres table that looks like this:

| id | raw_text (text) | embeddings (embedding) | prediction (boolean) | score (numeric) | timestamp (datetime) |
| --- | --- | --- | --- | --- | --- |
| 1 | I love cookies and Aporia | [0.77, 0.87, 0.94, ...] | True | 0.98 | 2021-11-20 13:41:00 |
| 2 | This restaurant was really bad | [0.97, 0.82, 0.13, ...] | False | 0.88 | 2021-11-20 13:45:00 |
| 3 | Hummus is the tastiest thing ever | [0.14, 0.55, 0.66, ...] | True | 0.92 | 2021-11-20 13:49:00 |

  • Note that in the prediction column, True represents Positive sentiment and False represents Negative.
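As an illustration, a table like the one above could be built and written to Parquet with pandas (the column names follow the schema above; the output path is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "raw_text": ["I love cookies and Aporia", "This restaurant was really bad"],
    "embeddings": [[0.77, 0.87, 0.94], [0.97, 0.82, 0.13]],  # truncated vectors
    "prediction": [True, False],
    "score": [0.98, 0.88],
    "timestamp": pd.to_datetime(["2021-11-20 13:41:00", "2021-11-20 13:45:00"]),
})

# Persist somewhere Aporia can read, e.g. (hypothetical path):
# df.to_parquet("s3://my-bucket/predictions.parquet")
```

Each row is one prediction; the embeddings column holds the pooled vector computed earlier, alongside the raw text, model output, and timestamp.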

Schema mapping

There are two unique types in Aporia to help you integrate your NLP model: text and embedding.

The text type should be used for your raw_text column. Note that by default, every string column in the UI is automatically marked as categorical, but you can change it to text for NLP use cases.

The embedding type, as the name suggests, should be used for your embeddings column. Note that by default, every array column in the UI is automatically marked as array, but you can change it to embedding for NLP use cases.

Next steps

  • Create a custom dashboard for your model in Aporia - Drag & drop widgets to show different performance metrics, top drifted features, etc.

  • Visualize NLP drift using Aporia's Embeddings Projector - Use the Embedding Projector widget in the Investigation Room to view drift between different datasets in production, using UMAP for dimensionality reduction.

  • Set up monitors to get notified for ML issues - Including data integrity issues, model performance degradation, and model drift.

To integrate this type of model, follow our Quickstart. To store your predictions in a data store, including the embeddings themselves, see the Storing your Predictions section. Check out the Data Sources section for more information about how to connect different data sources.
