For high-throughput, real-time models (e.g., models behind an HTTP endpoint such as POST /predict that serve billions of predictions per day), you can stream predictions to Kafka or another message broker, and then have a separate process store them in persistent storage.
Using a message broker such as Kafka lets you record the predictions of real-time models without adding noticeable latency to the endpoint.
Don't have billions of predictions?
If you are not dealing with billions of predictions per day, you should consider a simpler solution.
Please see the guide on .
Step 1: Deploy Kafka
You can deploy Kafka in various ways:
If you are using Kubernetes, you can deploy the or the .
Deploy a managed Kafka service from your cloud provider, e.g. .
Use a managed service such as .
Step 2: Write predictions to Kafka
Writing messages to a Kafka topic takes only a few lines of Python (and is similarly easy in other languages). Here are examples for Flask and FastAPI, two frameworks commonly used to serve ML models.
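The sketches below assume the kafka-python client, a broker reachable at localhost:9092, JSON-encoded messages, and a hypothetical model_predict function standing in for your real model; adjust these to your setup. The topic name my-model matches the one used in the Spark example further down.

Flask:

```python
import json

from flask import Flask, jsonify, request
from kafka import KafkaProducer

app = Flask(__name__)

# One producer per process; serializes dicts to JSON bytes.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumption: point this at your cluster
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def model_predict(features):
    # Hypothetical stand-in: replace with your real model, e.g. model.predict(...)
    return sum(features)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    prediction = model_predict(features)
    # Write the prediction to the "my-model" topic without blocking the response.
    producer.send("my-model", {"features": features, "prediction": prediction})
    return jsonify({"prediction": prediction})
```

FastAPI:

```python
import json

from fastapi import FastAPI
from kafka import KafkaProducer
from pydantic import BaseModel

app = FastAPI()

# One producer per process; serializes dicts to JSON bytes.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumption: point this at your cluster
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

class PredictionRequest(BaseModel):
    features: list[float]

def model_predict(features: list[float]) -> float:
    # Hypothetical stand-in: replace with your real model, e.g. model.predict(...)
    return sum(features)

@app.post("/predict")
def predict(body: PredictionRequest):
    prediction = model_predict(body.features)
    # Write the prediction to the "my-model" topic without blocking the response.
    producer.send("my-model", {"features": body.features, "prediction": prediction})
    return {"prediction": prediction}
```

Note that producer.send is asynchronous: messages are buffered and sent by a background thread, so the prediction endpoint does not wait for the broker. Call producer.flush() on shutdown if you need to guarantee that buffered messages are delivered.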
Spark Streaming is an extension of the core Spark API that lets you process real-time data from sources such as Kafka and push the results out to file systems and databases.
In this example, we will read messages from the my-model topic and store them in a Delta Lake table:
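Here is a minimal sketch using Spark Structured Streaming with the Kafka source and the Delta Lake sink; the bootstrap servers, message schema, checkpoint location, and table path are assumptions to adapt to your environment:

```python
# Requires the spark-sql-kafka and Delta Lake packages on the classpath
# (e.g. passed via --packages when submitting the job).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import ArrayType, DoubleType, StructField, StructType

spark = SparkSession.builder.appName("prediction-logger").getOrCreate()

# Schema of the JSON messages written by the prediction service (assumption).
schema = StructType([
    StructField("features", ArrayType(DoubleType())),
    StructField("prediction", DoubleType()),
])

# Read the my-model topic as a streaming DataFrame.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # adjust to your cluster
    .option("subscribe", "my-model")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
predictions = (
    raw.selectExpr("CAST(value AS STRING) AS value")
    .select(from_json(col("value"), schema).alias("data"))
    .select("data.*")
)

# Append each micro-batch to a Delta Lake table.
query = (
    predictions.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/my-model")  # assumed path
    .start("/tmp/delta/my-model-predictions")                   # assumed path
)

query.awaitTermination()
```

The checkpoint location lets the stream restart from where it left off, so predictions are not lost or duplicated in the Delta table if the job is restarted.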