Overview
Monitoring your Machine Learning models begins with storing their inputs and outputs in production.
Oftentimes, this data is used not just for model monitoring, but also for retraining, auditing, and other purposes; therefore, it is crucial that you have complete control over it.
Aporia monitors your models by connecting directly to your data, in your format. This section discusses the fundamentals of storing model predictions.
If you are not storing your predictions today, other integration options are available, although storing your predictions in your own database is highly recommended.
Depending on your existing enterprise data lake infrastructure, performance requirements, and cloud cost constraints, your predictions can be stored in a variety of data stores.
Here are some common options:
Parquet files on S3 / GCS / ABS
If you choose this option, using a metastore is recommended.
When storing your predictions, it's highly recommended to adopt a standardized directory structure (or SQL table structure) across all of your organization's models.
With a standardized structure, all of your models can be onboarded to the monitoring system automatically.
Here is a very basic example:
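One such standardized layout might look like the following sketch. The bucket, model names, and partitioning scheme are illustrative assumptions, not a requirement:

```
s3://my-data-lake/models/<model_name>/<model_version>/
├── training/
│   └── training.parquet
├── test/
│   └── test.parquet
└── serving/
    ├── date=2024-01-01/
    │   └── predictions.parquet
    └── date=2024-01-02/
        └── predictions.parquet
```

Partitioning the serving data by date keeps prediction files small and makes it cheap to scan only the time range the monitoring system needs.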
Even though this section focuses on the storage of predictions, you should also consider saving the training and test sets of your models. They can serve as a monitoring baseline.
Recommendations:
- One row per prediction.
- One column per feature, prediction, or raw input.
- Use a prefix for column names to identify their group (e.g. `features.`, `raw_inputs.`, `predictions.`, `actuals.`, etc.).
- For serving, add ID and prediction timestamp columns.
Example:
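As a minimal sketch of the recommendations above, here is what a single serving row could look like in pandas before being written to Parquet. The model, feature names, and S3 path are hypothetical:

```python
import pandas as pd
from datetime import datetime, timezone

# One row per prediction, one column per feature / raw input / prediction,
# with prefixed column names plus ID and timestamp columns for serving.
row = {
    "id": "a1b2c3",
    "timestamp": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "raw_inputs.text": "great product!",
    "features.text_length": 14,
    "features.num_exclamations": 1,
    "predictions.sentiment": 0.97,
    "actuals.sentiment": 1.0,  # typically joined in later, once ground truth arrives
}

df = pd.DataFrame([row])

# In production, append such rows to the model's serving directory, e.g.:
# df.to_parquet("s3://my-data-lake/models/sentiment/v1/serving/date=2024-01-01/predictions.parquet")
print(df.columns.tolist())
```

Because every model uses the same prefixes and ID/timestamp columns, the monitoring system can infer each column's role from its name alone.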