Monitors

This guide will show you how to automatically add monitors to your models from code using the Python SDK. For more information on the various types of monitors in Aporia, please refer to the documentation on monitors.

Defining monitors

To define a new monitor, create an aporia.Monitor(...) object.

Step 1: Choose monitor type and detection method

When designing a new monitor, the first decisions you have to make are:

  • What's the monitor type you'd like to create?

    • Examples: Data drift, Missing values, Performance degradation, etc.

    • This corresponds to the monitor type selection step in the UI.

  • What's the detection method you'd like to use?

    • Examples: Anomaly Detection over Time, Change in Percentage, Absolute values, Compared to Training, Compared to Segment, etc.

    • This corresponds to the detection method selection step in the UI.

Start by creating a monitor object with your chosen monitor_type and detection_method, and add it to your model.

See Detection Methods Overview and Supported Monitor Types / Detection Methods for an overview of the different monitor types and their supported detection methods.

import aporia.as_code as aporia

stack = aporia.Stack(...)

data_drift = aporia.Monitor(
    "Data Drift - Last week compared to Training",
    monitor_type=aporia.MonitorType.DATA_DRIFT,
    detection_method=aporia.DetectionMethod.COMPARED_TO_TRAINING,
    ...
)

my_model = aporia.Model(
    "My Model",
    type=aporia.ModelType.RANKING,
    icon=aporia.ModelIcon.RECOMMENDATIONS,
    ...,
    monitors=[data_drift],
)

stack.add(my_model)

Step 2: Choose focal and baseline datasets

The next step is to choose the dataset on which your monitor will be evaluated - this is called the focal dataset. For most detection methods, you'll also need to provide a baseline dataset.

For example, if you want to create a data drift monitor to compare the distribution of a feature from the last week to the training set, then focal will be "last week in serving", and baseline will be "training set".

data_drift = aporia.Monitor(
    "Data Drift - Last week compared to Training",
    monitor_type=aporia.MonitorType.DATA_DRIFT,
    detection_method=aporia.DetectionMethod.COMPARED_TO_TRAINING,
    focal=aporia.FocalConfiguration(
        # Last week in serving
        timePeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.WEEKS)
    ),
    baseline=aporia.BaselineConfiguration(
         # Training dataset
        source=aporia.SourceType.TRAINING
    ),
    ...
)

A baseline is required for any monitor that has a "Compared to" field, and for any detection method other than DetectionMethod.ABSOLUTE.

Here's an example for focal / baseline in an anomaly detection over time monitor:

activity_anomaly_detection = aporia.Monitor(
    "Activity Anomaly Detection",
    monitor_type=aporia.MonitorType.MODEL_ACTIVITY,
    detection_method=aporia.DetectionMethod.ANOMALY,
    focal=aporia.FocalConfiguration(
        # Last day
        timePeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS)
    ),
    baseline=aporia.BaselineConfiguration(
        # Last 2 weeks *before* the last day
        source=aporia.SourceType.SERVING,
        timePeriod=aporia.TimePeriod(count=2, type=aporia.PeriodType.WEEKS),
        skipPeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS)
    ),
    ...
)

Step 3: Configure monitor

Next, it is time to configure some important parameters for the monitor. For example:

activity_anomaly_detection = aporia.Monitor(
    "Activity Anomaly Detection",
    monitor_type=aporia.MonitorType.MODEL_ACTIVITY,
    detection_method=aporia.DetectionMethod.ANOMALY,
    focal=aporia.FocalConfiguration(
        # Last day
        timePeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS)
    ),
    baseline=aporia.BaselineConfiguration(
        # Last 2 weeks *before* the last day
        source=aporia.SourceType.SERVING,
        timePeriod=aporia.TimePeriod(count=2, type=aporia.PeriodType.WEEKS),
        skipPeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS)
    ),
    sensitivity=0.3,
    ...
)

The following describes the required parameters for each monitor type and detection method:

Model Activity / Anomaly Detection

  • sensitivity (0-1)

  • baseline

Model Activity / Change in Percentage

  • percentage (0-100)

  • baseline

Model Activity / Absolute values

  • min (optional)

  • max (optional)

Data Drift / Anomaly Detection

  • thresholds (aporia.ThresholdConfiguration)

  • features (list[str])

  • raw_inputs (list[str])

  • baseline

Data Drift / Compared to Segment

  • thresholds (aporia.ThresholdConfiguration)

  • features (list[str])

  • raw_inputs (list[str])

  • baseline (On segment)

Data Drift / Compared to Training

  • thresholds (aporia.ThresholdConfiguration)

  • features (list[str])

  • raw_inputs (list[str])

  • baseline (On Training)

Prediction Drift / Anomaly Detection

  • thresholds (aporia.ThresholdConfiguration)

  • predictions (list[str])

  • baseline

Prediction Drift / Compared to Segment

  • thresholds (aporia.ThresholdConfiguration)

  • predictions (list[str])

  • baseline (On segment)

Prediction Drift / Compared to Training

  • thresholds (aporia.ThresholdConfiguration)

  • predictions (list[str])

  • baseline (On Training)

Missing Values / Anomaly Detection

  • sensitivity (0-1)

  • min (0-100) (Optional)

  • raw_inputs (list[str])

  • features (list[str])

  • predictions (list[str])

  • baseline

  • testOnlyIncrease (Optional)

Missing Values / Change in Percentage

  • percentage

  • min (0-100) (Optional)

  • raw_inputs (list[str])

  • features (list[str])

  • predictions (list[str])

  • baseline

Missing Values / Absolute values

  • min (0-100) (Optional)

  • max (0-100) (Optional)

  • raw_inputs (list[str])

  • features (list[str])

  • predictions (list[str])

Missing Values / Compared to Segment

  • percentage

  • min

  • raw_inputs (list[str])

  • features (list[str])

  • predictions (list[str])

  • baseline (On segment)

Model Staleness / Absolute

  • staleness_period (aporia.TimePeriod)

New Values / Percentage

  • new_values_count_threshold (Optional)

  • new_values_ratio_threshold (Optional)

  • baseline

New Values / Compared to Segment

  • new_values_count_threshold (Optional)

  • new_values_ratio_threshold (Optional)

  • baseline (on Segment)

New Values / Compared to Training

  • new_values_count_threshold (Optional)

  • new_values_ratio_threshold (Optional)

  • baseline (on Training)

Values Range / Percentage

  • distance

  • baseline

Values Range / Absolute

  • min (Optional)

  • max (Optional)

Values Range / Compared to Segment

  • distance

  • baseline (on Segment)

Values Range / Compared to Training

  • distance

  • baseline (on Training)

Performance Degradation / Anomaly Detection

  • metric

  • sensitivity

  • baseline

  • metric-specific parameters

Performance Degradation / Absolute

  • metric

  • min (Optional)

  • max (Optional)

  • metric-specific parameters

Performance Degradation / Percentage

  • metric

  • percentage

  • baseline

  • metric-specific parameters

Performance Degradation / Compared to Segment

  • metric

  • percentage

  • baseline (Compared to Segment)

  • metric-specific parameters

Performance Degradation / Compared to Training

  • metric

  • percentage

  • baseline (Compared to Training)

  • metric-specific parameters

Metric Change / Anomaly Detection

  • metric

  • sensitivity

  • baseline

  • metric-specific parameters

Metric Change / Absolute

  • metric

  • min (Optional)

  • max (Optional)

  • metric-specific parameters

Metric Change / Percentage

  • metric

  • percentage

  • baseline

  • metric-specific parameters

Metric Change / Compared to Segment

  • metric

  • percentage

  • baseline (Compared to Segment)

  • metric-specific parameters

Metric Change / Compared to Training

  • metric

  • percentage

  • baseline (Compared to Training)

  • metric-specific parameters

Custom Metric / Anomaly Detection

  • custom_metric/custom_metric_id

  • sensitivity

  • baseline

Custom Metric / Absolute

  • custom_metric/custom_metric_id

  • min (Optional)

  • max (Optional)

  • baseline

Custom Metric / Percentage

  • custom_metric/custom_metric_id

  • percentage

  • baseline
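
For example, per the table above, a Model Staleness monitor with the Absolute detection method needs only a staleness_period. A minimal sketch (the monitor name and severity here are illustrative choices, not required values):

```python
import aporia.as_code as aporia

# Sketch: raise an alert when the model hasn't been retrained for 30 days.
# Parameter names follow the table above; severity is an illustrative choice.
model_staleness = aporia.Monitor(
    "Model Staleness - Retrain every 30 days",
    monitor_type=aporia.MonitorType.MODEL_STALENESS,
    detection_method=aporia.DetectionMethod.ABSOLUTE,
    staleness_period=aporia.TimePeriod(count=30, type=aporia.PeriodType.DAYS),
    severity=aporia.Severity.LOW,
)
```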

Metric-specific parameters

  • k: Used for ranking metrics, such as NDCG, MRR, MAP, etc.

  • prediction_threshold: Used for binary confusion matrix metrics, such as accuracy, tp_count, recall, etc. Used with a numeric prediction (0-1) and a boolean actual.

  • prediction_class: The class for which to calculate per-class metrics, such as accuracy-per-class.

  • average_method: Used for precision/recall/f1_score on multiclass predictions. Values are from the aporia.AverageMethod enum (MICRO/MACRO/WEIGHTED).
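
As a sketch of how a metric-specific parameter fits alongside the metric, here is a hypothetical performance degradation monitor for a ranking model that passes k with the metric. The metric identifier used here (aporia.Metric.NDCG) is an assumption; check the SDK reference for the metric names available in your version:

```python
import aporia.as_code as aporia

# Sketch: anomaly detection on NDCG@10 for a ranking model.
# aporia.Metric.NDCG is an assumed identifier; k is the metric-specific
# parameter for ranking metrics described above.
ndcg_degradation = aporia.Monitor(
    "Performance Degradation - NDCG@10",
    monitor_type=aporia.MonitorType.PERFORMANCE_DEGRADATION,
    detection_method=aporia.DetectionMethod.ANOMALY,
    metric=aporia.Metric.NDCG,  # assumed enum value
    k=10,                       # metric-specific parameter for ranking metrics
    sensitivity=0.5,
    focal=aporia.FocalConfiguration(
        # Last day
        timePeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS)
    ),
    baseline=aporia.BaselineConfiguration(
        # Last 2 weeks *before* the last day
        source=aporia.SourceType.SERVING,
        timePeriod=aporia.TimePeriod(count=2, type=aporia.PeriodType.WEEKS),
        skipPeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS),
    ),
    severity=aporia.Severity.HIGH,
)
```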

Step 4: Configure monitor action (e.g. alerts)

Finally, every monitor requires a severity parameter describing the severity of the alerts it generates (low / medium / high).

You can also add an emails parameter to receive alerts by email, or the messaging parameter to integrate with Webhooks, Datadog, Slack, Teams, etc.

activity_anomaly_detection = aporia.Monitor(
    "Activity Anomaly Detection",
    monitor_type=aporia.MonitorType.MODEL_ACTIVITY,
    detection_method=aporia.DetectionMethod.ANOMALY,
    focal=aporia.FocalConfiguration(
        # Last day
        timePeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS)
    ),
    baseline=aporia.BaselineConfiguration(
        # Last 2 weeks *before* the last day
        source=aporia.SourceType.SERVING,
        timePeriod=aporia.TimePeriod(count=2, type=aporia.PeriodType.WEEKS),
        skipPeriod=aporia.TimePeriod(count=1, type=aporia.PeriodType.DAYS)
    ),
    sensitivity=0.3,
    severity=aporia.Severity.MEDIUM,
    emails=[<EMAIL_LIST>],
    messaging={"WEBHOOK": [WEBHOOK_INTEGRATION_ID], "SLACK": [SLACK_INTEGRATION_ID]}
)

Detection Methods Overview

  • Anomaly Detection over Time (DetectionMethod.ANOMALY): trains an anomaly detection model that raises an alert when there is an anomaly in the metric value with respect to a given baseline. Example: missing value ratio of the last week compared to the week before.

  • Change in Percentage (DetectionMethod.PERCENTAGE): detects a change in percentage in the metric value. Example: standard deviation changed by more than 20%.

  • Absolute values (DetectionMethod.ABSOLUTE): raises an alert when the metric value is larger or smaller than a given threshold. Example: accuracy is lower than 0.9.

  • Compared to Segment (DetectionMethod.COMPARED_TO_SEGMENT): detects changes in the metric value between two data segments. Example: data drift between gender=male and gender=female.

  • Compared to Training (DetectionMethod.COMPARED_TO_TRAINING): detects changes in the metric value compared to the training set. Example: prediction drift of the last month in serving compared to training.

Supported Monitor Types / Detection Methods

The following describes the various monitor types and their supported detection methods:

Model Activity (MonitorType.MODEL_ACTIVITY)

  • DetectionMethod.ANOMALY

  • DetectionMethod.PERCENTAGE

  • DetectionMethod.ABSOLUTE

Data Drift (MonitorType.DATA_DRIFT)

  • DetectionMethod.ANOMALY

  • DetectionMethod.COMPARED_TO_SEGMENT

  • DetectionMethod.COMPARED_TO_TRAINING

Prediction Drift (MonitorType.PREDICTION_DRIFT)

  • DetectionMethod.ANOMALY

  • DetectionMethod.COMPARED_TO_SEGMENT

  • DetectionMethod.COMPARED_TO_TRAINING

Missing Values (MonitorType.MISSING_VALUES)

  • DetectionMethod.ANOMALY

  • DetectionMethod.PERCENTAGE

  • DetectionMethod.ABSOLUTE

  • DetectionMethod.COMPARED_TO_SEGMENT

Performance Degradation (MonitorType.PERFORMANCE_DEGRADATION)

  • DetectionMethod.ANOMALY

  • DetectionMethod.PERCENTAGE

  • DetectionMethod.ABSOLUTE

  • DetectionMethod.COMPARED_TO_SEGMENT

  • DetectionMethod.COMPARED_TO_TRAINING

Metric Change (MonitorType.METRIC_CHANGE)

  • DetectionMethod.ANOMALY

  • DetectionMethod.PERCENTAGE

  • DetectionMethod.ABSOLUTE

  • DetectionMethod.COMPARED_TO_SEGMENT

  • DetectionMethod.COMPARED_TO_TRAINING

Custom Metric (MonitorType.CUSTOM_METRIC_MONITOR)

  • DetectionMethod.ANOMALY

  • DetectionMethod.PERCENTAGE

  • DetectionMethod.ABSOLUTE

Model Staleness (MonitorType.MODEL_STALENESS)

  • DetectionMethod.ABSOLUTE

Values Range (MonitorType.VALUE_RANGE)

  • DetectionMethod.PERCENTAGE

  • DetectionMethod.ABSOLUTE

  • DetectionMethod.COMPARED_TO_SEGMENT

  • DetectionMethod.COMPARED_TO_TRAINING

New Values (MonitorType.NEW_VALUES)

  • DetectionMethod.PERCENTAGE

  • DetectionMethod.COMPARED_TO_SEGMENT

  • DetectionMethod.COMPARED_TO_TRAINING
