The performance of ML models often degrades unexpectedly once they are deployed in real-world domains. It is important to track true model performance metrics on real-world data and react in time, to avoid the consequences of poor model performance.
Causes of a model's performance degradation include:
- Input data changes (for various reasons)
- Concept drift
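To make input data changes concrete, here is a minimal sketch (not Aporia's internal implementation) of a common drift score, the Population Stability Index (PSI), which compares the distribution of a feature at serving time against its training-time baseline. The data and the PSI > 0.2 rule of thumb are illustrative assumptions.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index, a common input-drift score.
    As a rough rule of thumb, PSI > 0.2 signals significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def bucket_fractions(values):
        counts = [0] * bins
        for v in values:
            # Clamp out-of-range values into the edge buckets.
            idx = min(int((v - lo) / width), bins - 1) if v >= lo else 0
            counts[idx] += 1
        # Floor at a tiny value to avoid log(0) for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e_pct = bucket_fractions(expected)
    a_pct = bucket_fractions(actual)
    return sum((a - e) * math.log(a / e) for a, e in zip(a_pct, e_pct))

# Illustrative data: a feature whose mean shifted in production.
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]
```

A shift like the one above would go unnoticed without monitoring, since the model still produces predictions; only the quality of those predictions silently drops.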
For this monitor, several comparison methods are available. Configuration may vary slightly depending on the comparison method you choose.
You may select as many prediction fields as you want 😊 The monitor will run on each selected field separately.
Our performance degradation monitor supports a wide variety of metrics that measure the performance of your model's predictions given their corresponding actuals. You can check the full list of metrics supported by Aporia in our glossary.
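To illustrate what "predictions given their corresponding actuals" means in practice, here is a hedged sketch of a few classification metrics computed from two aligned lists. These helper functions are illustrative, not part of Aporia's API.

```python
def accuracy(actuals, predictions):
    """Fraction of predictions that match their actuals."""
    correct = sum(a == p for a, p in zip(actuals, predictions))
    return correct / len(actuals)

def precision_recall(actuals, predictions, positive=1):
    """Precision and recall for the given positive class."""
    tp = sum(1 for a, p in zip(actuals, predictions) if p == positive and a == positive)
    fp = sum(1 for a, p in zip(actuals, predictions) if p == positive and a != positive)
    fn = sum(1 for a, p in zip(actuals, predictions) if p != positive and a == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Note that these metrics can only be computed once actuals arrive, which is why collecting ground-truth labels from production is a prerequisite for this monitor.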
For the fields you selected in the previous step, the monitor will raise an alert if the comparison between the inspection period and the baseline falls outside your threshold boundaries.
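The alert logic above can be sketched as follows. This is an assumed simplification: the function name, the `mode` parameter, and the two comparison styles (absolute difference vs. ratio) are illustrative, not Aporia's actual configuration schema.

```python
def check_alert(baseline_value, inspection_value, lower, upper, mode="difference"):
    """Return True if the comparison between the inspection-period
    metric and the baseline metric falls outside [lower, upper]."""
    if mode == "difference":
        # Absolute change in the metric between periods.
        score = inspection_value - baseline_value
    elif mode == "ratio":
        # Relative change in the metric between periods.
        score = inspection_value / baseline_value
    else:
        raise ValueError(f"unknown mode: {mode}")
    return not (lower <= score <= upper)
```

For example, if baseline accuracy is 0.9 and the inspection period drops to 0.7, a difference threshold of ±0.05 would trigger an alert, while a small fluctuation to 0.92 would not.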
This step is important to make sure you receive the right volume of alerts for your needs. For the anomaly detection method, use the monitor preview to help you decide on an appropriate sensitivity level.
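To give a feel for how a sensitivity level shapes alert volume, here is a minimal z-score sketch. This is a generic illustration of anomaly detection, not Aporia's actual algorithm; the `sensitivity` parameter here is the number of standard deviations a new metric value may deviate from its history before it is flagged, so lowering it produces more alerts.

```python
import statistics

def is_anomaly(history, new_value, sensitivity=3.0):
    """Flag new_value if it deviates from the historical mean by more
    than `sensitivity` standard deviations."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    if std == 0:
        # A constant history: any change at all is anomalous.
        return new_value != mean
    return abs(new_value - mean) / std > sensitivity
```

With a stable accuracy history around 0.90, a sudden drop to 0.70 is flagged, while routine noise such as 0.905 is not; the monitor preview lets you sanity-check this trade-off on your own data before enabling alerts.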