Example: Token Classification
Token classification is a natural language understanding task in which a label is assigned to individual tokens in a text.
Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging are two popular token classification subtasks. NER models could be trained to recognize specific entities in a text, such as dates, individuals, and locations, while PoS tagging would identify which words in a text are verbs, nouns, and punctuation marks.
This guide will walk you through an example of NER model monitoring using spaCy. Let's start by creating a dummy model:
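A minimal sketch of such a dummy model, assuming a pretrained spaCy pipeline is acceptable (the choice of en_core_web_md is arbitrary; any pipeline with an NER component and word vectors would work):

```python
import spacy

# Use a pretrained spaCy pipeline as our "dummy" NER model.
# en_core_web_md ships with word vectors, so entity spans expose a .vector attribute.
nlp = spacy.load("en_core_web_md")
```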
And let's assume this is what our prediction function looks like (maybe it's part of an HTTP server, for example):
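One possible sketch, assuming each entity is returned as a plain dictionary (the function name and return format are illustrative, not prescribed):

```python
def predict(raw_text: str) -> list[dict]:
    """Run NER on the raw text and return one record per detected entity."""
    doc = nlp(raw_text)
    return [
        {
            "text": ent.text,                  # raw text of the entity
            "embedding": ent.vector.tolist(),  # the entity span's embedding vector
            "prediction": ent.label_,          # predicted entity label, e.g. "ORG" or "DATE"
        }
        for ent in doc.ents
    ]
```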
Each entity will include the text, the embedding, and the prediction, as follows:
- text (raw input) - `entity.text`
- embedding - `entity.vector`
- prediction - `entity.label_`
Storing your Predictions
The next step would be to store your predictions in a data store, including the embeddings themselves. For more information on storing your predictions, please check out the Storing Your Predictions section.
For example, you could use a Parquet file on S3 or a Postgres table that looks like this:
| id | raw_text (text) | embeddings (embedding) | prediction (categorical) | timestamp (datetime) |
|---|---|---|---|---|
| 1 | I love cookies and Aporia | … | … | 2021-11-20 13:41:00 |
| 2 | This restaurant was really bad | … | … | 2021-11-20 13:45:00 |
| 3 | Hummus is a type of food | … | … | 2021-11-20 13:49:00 |
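As a rough illustration of the Parquet option, predictions could be written with pandas along these lines (the helper name, file path, and id handling are assumptions; an id column would typically be assigned by the data store):

```python
from datetime import datetime

import pandas as pd

def store_predictions(raw_text: str, entities: list[dict], path: str = "predictions.parquet"):
    """Write one row per detected entity, including its embedding vector."""
    rows = [
        {
            "raw_text": raw_text,
            "embeddings": entity["embedding"],
            "prediction": entity["prediction"],
            "timestamp": datetime.utcnow(),
        }
        for entity in entities
    ]
    # The path could just as well be an s3:// location; a Postgres insert would follow the same shape.
    pd.DataFrame(rows).to_parquet(path, index=False)
```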
To integrate this type of model, follow our Quickstart.
Check out the data sources section for more information about connecting to different data sources.
Schema mapping
This type of model is a multiclass model, with a `text` raw input and an `embedding` feature.
There are two unique types in Aporia to help you integrate your NLP model: `text` and `embedding`.
The `text` type should be used with your `raw_text` column. Note that by default, in the UI every string column will be automatically marked as `categorical`, but you'll have the option to change it to `text` for NLP use cases.
The `embedding` type, as the name suggests, should be used with your embedding column. Note that by default, in the UI every array column will be automatically marked as `array`, but you'll have the option to change it to `embedding` for NLP use cases.
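Purely as an illustration (this is not Aporia's SDK or configuration syntax), the column-to-type mapping for the table above would conceptually look like this:

```python
# Conceptual mapping only - not Aporia's actual configuration format.
schema = {
    "raw_text": "text",         # string column: default "categorical", changed to "text"
    "embeddings": "embedding",  # array column: default "array", changed to "embedding"
    "prediction": "categorical",
    "timestamp": "datetime",
}
```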
Next steps
Create a custom dashboard for your model in Aporia - Drag & drop widgets to show different performance metrics, top drifted features, etc.
Visualize NLP drift using Aporia's Embeddings Projector - Use the Embedding Projector widget within the investigation room, to view drift between different datasets in production, using UMAP for dimension reduction.
Set up monitors to get notified about ML issues - including data integrity issues, model performance degradation, and model drift. For example:
Make sure the distribution of the different entity labels doesn't drift across time
Make sure the distribution of the embedding vector doesn't drift across time