Prediction drift allows you to monitor changes in the distribution of the predicted label or value.
For example, you might see a larger proportion of credit-worthy applications after your product launches in a more affluent area. Your model still performs well, but your business may be unprepared for this scenario.
For this monitor, the following comparison methods are available:
Configuration may slightly vary depending on the baseline you choose.
You may select as many prediction fields as you want 😊
Note that the monitor will run on each selected field separately.
For the prediction fields you chose in the previous step, the monitor compares the inspection period's distribution with the baseline distribution. An alert is raised if the monitor detects drift between these two distributions.
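To build intuition for how such a comparison works, here is a minimal sketch: both samples are binned on a shared grid, normalized into distributions, and compared with a distance metric against a threshold. The function names, the binning scheme, and the default threshold of 0.1 are illustrative assumptions, not Aporia's internal implementation.

```python
import numpy as np

def js_distance(p, q):
    """Jensen-Shannon distance (base-2 logs), bounded in [0, 1]."""
    m = (p + q) / 2
    def kl(a, b):
        mask = a > 0  # skip empty bins; m > 0 wherever a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def detect_drift(baseline, inspection, threshold=0.1, bins=10):
    """Compare two numeric samples; threshold and bin count are
    hypothetical defaults for illustration."""
    combined = np.concatenate([baseline, inspection])
    edges = np.histogram_bin_edges(combined, bins=bins)
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(inspection, bins=edges)
    p = p / p.sum()  # normalize counts into probability distributions
    q = q / q.sum()
    distance = js_distance(p, q)
    return distance, distance > threshold
```

Identical samples yield a distance near 0 (no alert), while a shifted inspection sample pushes the distance toward 1 and trips the threshold.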
Use the monitor preview to help you choose the right threshold and make sure you get the number of alerts that fits your needs.
The threshold for categorical predictions is different from the one for numeric predictions. Make sure to calibrate both if relevant.
For numeric predictions, Aporia detects drift based on the Jensen–Shannon divergence metric. For categorical predictions, drift is detected using the Hellinger distance.
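For reference, the Hellinger distance between two categorical distributions can be computed as follows. This is a sketch of the standard metric, not Aporia's internal code; the inputs are assumed to be per-category counts or probabilities over the same category order.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two categorical distributions,
    bounded in [0, 1]: 0 = identical, 1 = disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize counts into probabilities
    q = q / q.sum()
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2)
```

Because the metric is bounded, a fixed threshold behaves consistently across fields with different category counts, which is one reason bounded distances are a common choice for categorical drift.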
If you need to use other metrics, please contact us.