Aporia Documentation

PostgreSQL

This guide describes how to connect Aporia to a PostgreSQL data source in order to monitor your ML model in production.

We will assume that your model's inputs, outputs, and (optionally) delayed actuals can be queried with SQL. This data source can also be used to connect to your model's training set, which serves as a baseline for model monitoring.
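For instance, inference data is often stored in a single table where each row is one prediction. A query like the following could then retrieve a recent window of predictions for monitoring (the table and column names here are purely illustrative, not part of Aporia's requirements):

```sql
-- Hypothetical predictions table: one row per model prediction
SELECT prediction_id,
       prediction_timestamp,
       feature_a,
       feature_b,
       prediction,
       actual          -- delayed actual, may be NULL until it arrives
FROM my_model_predictions
WHERE prediction_timestamp >= NOW() - INTERVAL '7 days';
```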

Create a read-only user for PostgreSQL access

To give Aporia access to PostgreSQL, create a dedicated read-only user.

Use the SQL snippet below to create the user for Aporia. Before running the snippet, populate the following placeholders:

  • <aporia_password>: A strong password to be used by the user.

  • <your_database>: PostgreSQL database with your ML training / inference data.

  • <your_schema>: PostgreSQL schema with your ML training / inference data.

CREATE USER aporia WITH PASSWORD '<aporia_password>';

-- Grant access to the database and schema
GRANT CONNECT ON DATABASE <your_database> TO aporia;
GRANT USAGE ON SCHEMA <your_schema> TO aporia;

-- Grant read access to the relevant tables
GRANT SELECT ON table1 TO aporia;
GRANT SELECT ON table2 TO aporia;
GRANT SELECT ON table3 TO aporia;
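If Aporia should be able to read every table in the schema, including tables created later, PostgreSQL also supports schema-wide grants. A sketch, using the same placeholders as above (adjust to your own access policy):

```sql
-- Grant read access to all existing tables in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA <your_schema> TO aporia;

-- Make tables created in this schema in the future readable as well
ALTER DEFAULT PRIVILEGES IN SCHEMA <your_schema>
    GRANT SELECT ON TABLES TO aporia;
```

Note that ALTER DEFAULT PRIVILEGES only affects objects created by the user who runs it; run it as the role that creates your tables.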

Create a PostgreSQL data source in Aporia

  1. Go to the Aporia platform and log in to your account.

  2. Go to the Integrations page and click on the Data Connectors tab.

  3. Scroll to the Connect New Data Source section.

  4. Click Connect on the PostgreSQL card and follow the instructions. Note that the provided URL should be in the following format: jdbc:postgresql://<SERVER_HOSTNAME>.




👏 Bravo! Now you can use the data source you've created across all your models in Aporia.
