Quickstart
Last updated
With just a few lines of code, any machine learning model can be integrated with Aporia and monitored in production.
In this guide, we will use Aporia's Python API to create a model in Aporia and log its predictions.
To get started, install the Aporia Python library:
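The library is available from PyPI (assuming the package name is aporia, matching the import name used in this guide):

```shell
pip install aporia
```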
Next, import and initialize the Aporia library:
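A minimal initialization sketch follows. The token and environment values are placeholders, and the exact parameter names should be verified against the SDK reference:

```python
import aporia

# Initialize the SDK once, at application startup.
# The token comes from your Aporia dashboard; "environment" is a free-form
# label such as "production" or "staging" (parameter names assumed here).
aporia.init(
    token="<YOUR_APORIA_TOKEN>",
    environment="production",
)
```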
To create a new model to be monitored in Aporia, you can call the aporia.create_model(...) API:
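A minimal call might look like the following sketch; the model ID and display name are illustrative:

```python
import aporia

# Create a model with the ID "fraud-detection" and the
# display name "Fraud Detection".
aporia.create_model("fraud-detection", "Fraud Detection")
```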
This API will not recreate the model if the model ID already exists. You can also specify a color, an icon, tags, and a model owner:
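For example, a sketch with optional metadata (the keyword names below are assumptions; consult the SDK reference for the exact signature):

```python
import aporia

# Optional model metadata; keyword names here are illustrative.
aporia.create_model(
    "fraud-detection",
    "Fraud Detection",
    color="blue",                  # display color in the Aporia UI
    icon="fraud",                  # display icon
    tags={"team": "risk"},         # arbitrary key/value tags
    owner="jane.doe@example.com",  # model owner
)
```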
Each model in Aporia contains different Model Versions. When you (re)train your model, you should create a new model version in Aporia.
The model version parameter can be any string: the model file's hash, a git commit hash, an experiment/run ID from MLflow, or anything else.
The model type can be regression, binary, multiclass, multi-label, or ranking. Please refer to the relevant documentation on each model type for more info.
Each field in the model version's schema must have one of the following types:

- numeric: valid examples are 1, 2.87, 0.53, 300.13
- boolean: valid examples are True, False
- categorical: a categorical field with integer values
- string: a categorical field with string values
- datetime: contains either Python datetime objects or ISO-8601 timestamp strings
- text: freeform text
- dict: dictionaries; at the moment keys are strings and values are numeric
- tensor: useful for unstructured data; a shape must be specified, e.g. {"type": "tensor", "dimensions": [768]}
Next, we will log some predictions to the newly created model version. These predictions will be kept in an Aporia-managed database.
In production, we strongly recommend storing your model's predictions in your own database that you have complete control over; many of our customers already do this for retraining, auditing, and other purposes.
Aporia can then connect to your data directly and use it for model observability, eliminating the need for data duplication. However, this quickstart assumes you have no such database and would simply like to log model inferences:
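A single prediction could then be logged as in the sketch below, assuming apr_model is the handle obtained when the model version was created; the method and parameter names are assumptions to be checked against the SDK reference:

```python
import uuid

# Log one prediction to the model version created earlier.
apr_model.log_prediction(
    id=str(uuid.uuid4()),
    features={
        "amount": 518.3,
        "is_new_customer": True,
        "merchant_category": "electronics",
    },
    predictions={
        "will_default": True,
        "proba": 0.87,
    },
)
```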
You must specify an ID for each prediction. This ID can later be used to log the prediction's actual value. If you don't care about this, just pass str(uuid.uuid4()) as the prediction ID.
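For instance, generating a throwaway ID with the standard library:

```python
import uuid

# A random, unique prediction ID in the standard 36-character UUID format.
prediction_id = str(uuid.uuid4())
```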
Both of these APIs are entirely asynchronous. This was done to avoid blocking your application, which may handle a large number of predictions per second.
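Because logging happens in the background, you may want to flush any queued predictions before your process exits. A sketch, assuming the SDK exposes a flush call for this purpose:

```python
import aporia

# Block until all queued predictions have been sent
# (function name assumed; verify against the SDK reference).
aporia.flush()
```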
You can now access Aporia and see your model, as well as create dashboards and monitors for it!