Performance Degradation
Why Monitor Performance Degradation?
ML model performance often degrades unexpectedly after deployment in real-world domains. It is important to track true model performance metrics on real-world data and react in time, to avoid the consequences of poor model performance; a minimal tracking sketch follows the list of causes below.
Causes of model performance degradation include:
Input data changes (various reasons)
Concept drift
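As a rough illustration of what tracking a true performance metric on production data can look like (not this product's implementation; the function names, the daily windowing, and the alerting rule are all assumptions), the sketch below computes a daily accuracy from labeled production records and flags days where it drops noticeably:

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions; any performance metric could be used instead."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def daily_accuracy(records):
    """records: iterable of (day, y_true, y_pred) tuples collected from production."""
    by_day = {}
    for day, y_true, y_pred in records:
        truths, preds = by_day.setdefault(day, ([], []))
        truths.append(y_true)
        preds.append(y_pred)
    return {day: accuracy(t, p) for day, (t, p) in sorted(by_day.items())}

def flag_degradation(metric_by_day, max_drop=0.05):
    """Flag days where the metric falls more than `max_drop` below the best value seen so far."""
    best, alerts = 0.0, []
    for day, value in metric_by_day.items():
        best = max(best, value)
        if best - value > max_drop:
            alerts.append((day, value))
    return alerts
```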
Comparison methods
For this monitor, more than one comparison method is available, including the anomaly detection method referenced in the threshold calibration step below.
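As a hedged illustration only (these helper functions and parameters are assumptions, not the monitor's API), the sketch below contrasts two common comparison patterns: a fixed-boundaries check and a simple statistical anomaly-detection check against a baseline:

```python
import statistics

def threshold_alert(value, lower=None, upper=None):
    """Fixed boundaries: alert when the metric leaves the [lower, upper] range."""
    return (lower is not None and value < lower) or (upper is not None and value > upper)

def anomaly_alert(value, baseline_values, sensitivity=3.0):
    """Anomaly detection: alert when the metric is more than `sensitivity`
    standard deviations away from the baseline mean."""
    mean = statistics.mean(baseline_values)
    stdev = statistics.pstdev(baseline_values) or 1e-9  # guard against a flat baseline
    return abs(value - mean) / stdev > sensitivity
```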
Customizing your monitor
Configuration may vary slightly depending on the comparison method you choose.
STEP 1: choose the predictions & metrics you would like to monitor
You may select as many prediction fields as you want 😊. The monitor will run on each selected field separately.
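A hypothetical configuration sketch (the field names, metric names, and the `evaluate_field` callback are placeholders, not the product's API) showing how each selected field is evaluated on its own:

```python
monitor_config = {
    "prediction_fields": ["churn_score", "upsell_score"],  # each field is monitored separately
    "metrics": ["accuracy", "f1"],
}

def run_monitor(config, evaluate_field):
    """Run one independent check per (prediction field, metric) pair."""
    results = {}
    for field in config["prediction_fields"]:
        for metric in config["metrics"]:
            results[(field, metric)] = evaluate_field(field, metric)
    return results
```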
STEP 2: choose inspection period and baseline
For the fields you chose in the previous step, the monitor will raise an alert if the comparison between the inspection period and the baseline falls outside your threshold boundaries.
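For intuition only, the sketch below compares the average metric over an inspection window with the average over a baseline window and alerts on a relative drop beyond a chosen boundary (the window representation and the 10% default are arbitrary assumptions):

```python
def window_mean(metric_by_day, start, end):
    """Average metric value over all days in the [start, end] window."""
    values = [v for d, v in metric_by_day.items() if start <= d <= end]
    return sum(values) / len(values)

def compare_to_baseline(metric_by_day, inspection, baseline, max_relative_drop=0.10):
    """Alert when the inspection-period metric falls more than `max_relative_drop`
    below the baseline metric. `inspection` and `baseline` are (start, end) pairs."""
    baseline_value = window_mean(metric_by_day, *baseline)
    inspection_value = window_mean(metric_by_day, *inspection)
    drop = (baseline_value - inspection_value) / baseline_value
    return drop > max_relative_drop, inspection_value, baseline_value
```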
STEP 3: calibrate thresholds
This step is important to make sure you receive the right amount of alerts for your needs. For the anomaly detection method, use the monitor preview to help you decide on the appropriate sensitivity level.
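One way to reason about calibration (a sketch under the assumption that the anomaly check compares each point to a baseline mean in standard deviations, as above) is to replay historical values at several candidate sensitivity levels and see how many alerts each would have produced:

```python
import statistics

def preview_alert_counts(historical_values, baseline_values, sensitivities=(1.0, 2.0, 3.0)):
    """For each candidate sensitivity, count how many historical points would have alerted."""
    mean = statistics.mean(baseline_values)
    stdev = statistics.pstdev(baseline_values) or 1e-9
    return {
        s: sum(abs(v - mean) / stdev > s for v in historical_values)
        for s in sensitivities
    }
```

A level that would have produced a handful of actionable alerts over a typical period is usually a better starting point than one that would have fired every day.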