Aporia Documentation

Oracle

This guide describes how to connect Aporia to an Oracle data source so you can monitor your ML model in production.

We assume that your model's inputs, outputs, and (optionally) delayed actuals can be queried with SQL. The same data source can also point at your model's training set, which can serve as a baseline for model monitoring.

Create a read-only user for Oracle access

To give Aporia access to Oracle, create a dedicated read-only user.

Use the SQL snippet below to create the user for Aporia. Before running it, populate the following placeholders:

  • <username>: The user name to create.

  • <aporia_password>: Strong password to be used by the user.

  • <schema_name.table>: The tables to which the new user should be granted SELECT access.

-- Create the user
CREATE USER <username> IDENTIFIED BY <aporia_password>;

-- Allow the user to log in (the CONNECT role includes CREATE SESSION)
GRANT CONNECT TO <username>;

-- Grant read access to the relevant tables
GRANT SELECT ON schema_name.table1 TO <username>;
GRANT SELECT ON schema_name.table2 TO <username>;
GRANT SELECT ON schema_name.table3 TO <username>;
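If your model's data spans many tables, writing each GRANT by hand is error-prone. As an informal aid (not part of Aporia; the user and table names below are hypothetical), a short script can render the statements for you:

```python
# Convenience sketch: generate the grant statements above for a list of
# tables so none is missed. Run the printed SQL as a privileged Oracle user.
def grant_statements(username: str, tables: list[str]) -> list[str]:
    stmts = [f"GRANT CONNECT TO {username};"]
    stmts += [f"GRANT SELECT ON {table} TO {username};" for table in tables]
    return stmts

# Hypothetical user and tables for illustration
for stmt in grant_statements("aporia_ro", ["sales.predictions", "sales.actuals"]):
    print(stmt)
```

This only covers the GRANT statements; the CREATE USER step above still needs to be run separately with a strong password.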

Create an Oracle data source in Aporia

  1. Go to the Integrations page and click the Data Connectors tab

  2. Scroll to the Connect New Data Source section

  3. Click Connect on the Oracle card and follow the instructions

    1. Note that the provided URL should be in the following format: jdbc:oracle:thin:@hostname:port_number:instance_name
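The JDBC URL format above is easy to get subtly wrong (a `/` instead of a `:`, a missing port). As an informal sanity check (not part of Aporia), a short script can verify that a URL matches the SID-style pattern before you paste it into the form:

```python
import re

# Matches the SID-style format: jdbc:oracle:thin:@hostname:port_number:instance_name
JDBC_ORACLE_SID = re.compile(
    r"^jdbc:oracle:thin:@(?P<host>[^:/]+):(?P<port>\d+):(?P<sid>\w+)$"
)

def parse_oracle_jdbc_url(url: str) -> dict:
    """Return the host/port/sid parts, or raise ValueError if the URL is malformed."""
    m = JDBC_ORACLE_SID.match(url)
    if not m:
        raise ValueError(f"Not a SID-style Oracle JDBC URL: {url!r}")
    parts = m.groupdict()
    parts["port"] = int(parts["port"])
    return parts

# Hypothetical host and SID for illustration
print(parse_oracle_jdbc_url("jdbc:oracle:thin:@db.example.com:1521:ORCL"))
```

Note that this checks only the SID-style form shown above; service-name URLs (`.../service_name`) use a different separator and would be rejected by this sketch.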



Go to the Aporia platform and log in to your account.

Bravo! 👏 You can now use the data source you've created across all your models in Aporia.