If you are using KServe for model serving, you can store the predictions of your models using InferenceDB.
InferenceDB is an open-source, cloud-native tool that connects to KServe and streams predictions to a data lake, based on Kafka.
WARNING: InferenceDB is still experimental!
InferenceDB is an open-source project developed by Aporia. It is still experimental, and not yet ready for production!
This guide will explain how to deploy a simple scikit-learn model using KServe, and log its inferences to a Parquet file in S3.
This guide assumes the following prerequisites:
- A Kubernetes cluster with the KServe quickstart environment installed
- Kafka with Schema Registry, Kafka Connect, and the S3 connector plugin
To get started as quickly as possible, see the quickstart guide, which shows how to set up a full environment in minutes.
First, we will need a Kafka broker to collect all KServe inference requests and responses:
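As a minimal sketch, assuming Helm is available, a development broker can be installed with the Bitnami Kafka chart. Note that this covers only the broker itself; Schema Registry, Kafka Connect, and the S3 connector plugin from the prerequisites above still need to be set up separately.

```bash
# Add the Bitnami chart repository and install a development Kafka broker
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install kafka bitnami/kafka --namespace kafka --create-namespace
```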
Next, we will serve a simple sklearn model using KServe:
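For example, the following InferenceService deploys the standard KServe scikit-learn iris example. The `logger.url` below is a placeholder, not a fixed value: it must point at whatever endpoint forwards events to your Kafka topic (see the note on the `logger` section at the end of this guide).

```bash
kubectl apply -f - <<EOF
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
  namespace: default
spec:
  predictor:
    logger:
      mode: all
      # Placeholder: replace with the endpoint that feeds your Kafka topic
      url: http://broker-ingress.knative-eventing.svc.cluster.local/default/default
    sklearn:
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
EOF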
Finally, we can log the predictions of our new model using InferenceDB:
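Here is a sketch of an InferenceDB `InferenceLogger` resource, based on the examples in the InferenceDB repository; the topic name and the S3 bucket URL are placeholders you should adjust to your environment.

```bash
kubectl apply -f - <<EOF
apiVersion: inferencedb.aporia.com/v1alpha1
kind: InferenceLogger
metadata:
  name: sklearn-iris
  namespace: default
spec:
  # Kafka topic that receives the KServe logger events (placeholder name)
  topic: sklearn-iris
  events:
    type: kserve
    config: {}
  destination:
    type: confluent-s3
    config:
      # Placeholder bucket and prefix: replace with your own S3 location
      url: s3://my-bucket/inferencedb
      format: parquet
EOF
```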
To test the model, first port-forward the Istio ingress gateway so we can access it from our local machine:
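Assuming KServe was installed with the default Istio ingress gateway:

```bash
# Forward the Istio ingress gateway to localhost:8080
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
```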
Prepare a payload in a file called `iris-input.json`:
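For example, using the standard iris payload from the KServe docs:

```bash
cat <<EOF > iris-input.json
{
  "instances": [
    [6.8, 2.8, 4.8, 1.4],
    [6.0, 3.4, 4.5, 1.6]
  ]
}
EOF
```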
And finally, you can send some inference requests:
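Assuming the port-forward above is running and the InferenceService is named `sklearn-iris`:

```bash
# Resolve the hostname KServe assigned to the InferenceService
SERVICE_HOSTNAME=$(kubectl get inferenceservice sklearn-iris -o jsonpath='{.status.url}' | cut -d "/" -f 3)

# Send a prediction request through the port-forwarded gateway
curl -v -H "Host: ${SERVICE_HOSTNAME}" \
  "http://localhost:8080/v1/models/sklearn-iris:predict" \
  -d @./iris-input.json
```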
If everything was configured correctly, these predictions should have been logged to a Parquet file in S3.
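One way to check, assuming the AWS CLI is configured and using the placeholder bucket from the `InferenceLogger` above:

```bash
# List the Parquet files written by InferenceDB (placeholder bucket/prefix)
aws s3 ls s3://my-bucket/inferencedb/ --recursive
```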
Note the `logger` section in the InferenceService above; you can read more about it in the KServe logger documentation.