Redshift
This guide describes how to connect Aporia to a Redshift data source in order to monitor a new ML model in production.
We assume that your model's inputs, outputs, and (optionally) delayed actuals can be queried with Redshift SQL. This data source can also connect to your model's training/test set, to be used as a baseline for model monitoring.
To provide access to Redshift, create an IAM role with the necessary API permissions.
First, create a JSON file on your computer with the following content:
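As a sketch, a policy granting temporary database credentials through redshift:GetClusterCredentials might look like the following (the exact actions your setup requires may differ; confirm them against your security requirements):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "redshift:GetClusterCredentials",
      "Resource": [
        "arn:aws:redshift:<REGION>:<ACCOUNT_ID>:dbuser:<REDSHIFT_CLUSTER_NAME>/<DBUSER_NAME>",
        "arn:aws:redshift:<REGION>:<ACCOUNT_ID>:dbname:<REDSHIFT_CLUSTER_NAME>/*"
      ]
    }
  ]
}
```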
Make sure to replace the following placeholders:
- <REGION>: The Redshift AWS region, or * for the default region.
- <ACCOUNT_ID>: The Redshift AWS account ID.
- <REDSHIFT_CLUSTER_NAME>: The Redshift cluster name.
- <DBUSER_NAME>: The name of the Redshift user.
Next, create a new AWS user with programmatic access only, and grant it the role you've just created. Then create security credentials (an access key and secret key) for that user; you will use them in the next section.
IAM Authentication
For authentication without security credentials (access key and secret key), please contact your Aporia account manager.
To create a new model to be monitored in Aporia, call the aporia.create_model(...) API:
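A minimal sketch using the Python SDK (the token, host, model ID, and display name below are placeholders, and the exact create_model signature may differ; check the SDK reference):

```python
import aporia

# Initialize the SDK first (token and host values are placeholders)
aporia.init(token="<APORIA_TOKEN>", host="<APORIA_HOST>")

# Create a model entity to monitor; the ID and name here are examples
apr_model = aporia.create_model("fraud-detection", "Fraud Detection Model")
```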
Each model in Aporia contains different Model Versions. When you (re)train your model, you should create a new model version in Aporia.
Each raw input, feature, or prediction is mapped by default to the column of the same name in the Redshift query. For example, if you create a feature named amount or a prediction named proba, the Redshift data source will expect a column named amount or proba, respectively, in the query results.
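To illustrate the name-based mapping, a small sketch in plain Python (the schema shape and type strings are illustrative, not the SDK's exact format):

```python
# Version schema as plain dicts: each key is expected to match a
# same-named column in the Redshift query results.
features = {"amount": "numeric"}      # expects a column named "amount"
predictions = {"proba": "numeric"}    # expects a column named "proba"

# Columns the Redshift data source will look for in the query:
expected_columns = sorted(list(features) + list(predictions))
print(expected_columns)  # -> ['amount', 'proba']
```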
Next, create an instance of RedshiftDataSource and pass it to apr_model.connect_serving(...) or apr_model.connect_training(...):
Note that as part of the connect_serving API, you are required to specify two additional columns:
- id_column: A unique ID representing this prediction.
- timestamp_column: A column representing when the prediction occurred.
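Putting the pieces together, a sketch of connecting a serving data source (the RedshiftDataSource import path and constructor arguments, the connection URL, the query, and the column names are all assumptions; substitute your own cluster details):

```python
from aporia import RedshiftDataSource  # import path is an assumption

# Connection details and query are placeholders
data_source = RedshiftDataSource(
    url="redshift://<REDSHIFT_CLUSTER_NAME>.<REGION>.redshift.amazonaws.com:5439/<DATABASE>",
    query="SELECT * FROM model_predictions",
)

apr_model.connect_serving(
    data=data_source,
    id_column="prediction_id",       # unique ID for each prediction
    timestamp_column="occurred_at",  # when the prediction occurred
)
```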
For more information, see the documentation on:
- Advanced feature / prediction <-> column mapping
- How to integrate delayed actuals
- How to integrate training / test sets