Redshift


This guide describes how to connect Aporia to a Redshift data source in order to monitor your ML models in production.

We will assume that your model's inputs, outputs, and (optionally) delayed actuals can be queried with Redshift SQL. This data source can also be used to connect to your model's training set, which can serve as a baseline for model monitoring.
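
For illustration, such a query could be issued through the Redshift Data API. The sketch below is an assumption for illustration only: the cluster, database, user, and table/column names are placeholders, not values Aporia requires.

    # Sketch: cluster, database, user, and table/column names are placeholders.
    import time
    import boto3

    client = boto3.client("redshift-data", region_name="us-east-1")

    # Submit a query for last week's predictions and their delayed actuals.
    response = client.execute_statement(
        ClusterIdentifier="my-redshift-cluster",
        Database="analytics",
        DbUser="aporia_reader",
        Sql="""
            SELECT prediction_id, predicted_at, prediction, actual
            FROM model_predictions
            WHERE predicted_at > DATEADD(day, -7, GETDATE())
        """,
    )

    # The Data API is asynchronous: poll until the statement completes.
    statement_id = response["Id"]
    while True:
        status = client.describe_statement(Id=statement_id)["Status"]
        if status in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)

    if status == "FINISHED":
        rows = client.get_statement_result(Id=statement_id)["Records"]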

Create an S3 output bucket

Create an S3 bucket to which query results will be written. It is recommended that the bucket be in the same region as the Redshift cluster.
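
If you prefer to script this step, the bucket can be created with boto3; a minimal sketch, where the bucket name and region are placeholders:

    # Sketch: bucket name and region are placeholders.
    import boto3

    region = "us-east-1"  # ideally the same region as your Redshift cluster
    s3 = boto3.client("s3", region_name=region)

    # us-east-1 rejects an explicit LocationConstraint, so omit it there.
    if region == "us-east-1":
        s3.create_bucket(Bucket="my-aporia-query-results")
    else:
        s3.create_bucket(
            Bucket="my-aporia-query-results",
            CreateBucketConfiguration={"LocationConstraint": region},
        )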

Update the Aporia IAM role for Redshift access

In order to provide access to Redshift, you'll need to update your Aporia IAM role with the necessary API permissions.

Step 1: Obtain your Aporia IAM role

Use the same role used for the Aporia deployment. If someone else on your team deployed Aporia, reach out to them for the role ARN (it should have the following format: arn:aws:iam::<account>:role/<role-name-with-path>).
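
If you have access to the AWS account, you can also look up the ARN with boto3; a sketch, where the role name is a placeholder:

    # Sketch: the role name is a placeholder for your deployment's role.
    import boto3

    iam = boto3.client("iam")
    role = iam.get_role(RoleName="aporia-deployment-role")
    print(role["Role"]["Arn"])  # arn:aws:iam::<account>:role/<role-name-with-path>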

Step 2: Create an access policy

  1. In the list of roles, click the role you obtained.

  2. On the Permissions tab, click Add permissions, then click Create inline policy.

  3. In the policy editor, click the JSON tab.

  4. Copy the following access policy. Fill in your region and account ID, and restrict access to specific databases and tables if necessary.

    Make sure to replace the following placeholders:

    • <region>: The Redshift AWS region, or * for the default region.

    • <account-id>: The Redshift AWS account ID.

    • <redshift-cluster>: The Redshift cluster ID.

    • <database-name>: One database name within your Redshift cluster.

    • <results-bucket>: The S3 bucket created above for query results.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "s3:ListBucket",
                      "s3:GetBucketLocation"
                  ],
                  "Resource": [
                      "arn:aws:s3:::<results-bucket>"
                  ]
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "s3:GetObject",
                      "s3:PutObject"
                  ],
                  "Resource": [
                      "arn:aws:s3:::<results-bucket>/*"
                  ]
              },
              {
                  "Effect": "Allow",
                  "Action": "redshift:GetClusterCredentials",
                  "Resource": "arn:aws:redshift:<region>:<account-id>:dbuser:<redshift-cluster>/<database-name>"
              },
              {
                  "Effect": "Allow",
                  "Action": "redshift:DescribeClusters",
                  "Resource": "*"
              }
          ]
      }
  5. Click Review Policy.

  6. In the Name field, enter a policy name.

  7. Click Create policy.

Aporia now has the permissions it needs to connect to the Redshift database and the S3 bucket specified in the policy.
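
As an alternative to the console steps above, the same inline policy can be attached with boto3; a sketch, assuming the policy JSON is saved as redshift-access-policy.json and with placeholder role and policy names:

    # Sketch: role name, policy name, and file path are placeholders.
    import boto3

    with open("redshift-access-policy.json") as f:
        policy_document = f.read()

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName="aporia-deployment-role",
        PolicyName="aporia-redshift-access",
        PolicyDocument=policy_document,
    )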

Create a Redshift data source in Aporia

  1. Go to the Aporia platform and log in to your account.

  2. Go to the Integrations page and click the Data Connectors tab.

  3. Scroll to the Connect New Data Source section.

  4. Click Connect on the Redshift card and follow the instructions.

Bravo! Now you can use the data source you've created across all your models in Aporia.
