NannyML estimates model performance using an algorithm called Confidence-Based Performance Estimation (CBPE), researched by NannyML core contributors, so you can detect real-world performance drops before you otherwise would.
It can also track the realised performance of your model once targets are available.
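The core idea behind CBPE is that a well-calibrated classifier's own confidence scores can be used to estimate its expected performance before labels arrive. The toy sketch below illustrates that idea for estimated accuracy; it is a simplification for intuition only, not NannyML's implementation, and the function name is made up for this example.

```python
# Toy sketch of the idea behind Confidence-Based Performance Estimation (CBPE):
# with *calibrated* predicted probabilities, the expected accuracy of a binary
# classifier can be estimated without access to the true labels.
# Illustrative only -- not NannyML's actual implementation or API.

def estimate_accuracy(calibrated_probs, threshold=0.5):
    """Estimate expected accuracy from calibrated positive-class probabilities."""
    expected_correct = 0.0
    for p in calibrated_probs:
        if p >= threshold:
            # Model predicts positive; probability it is correct is p.
            expected_correct += p
        else:
            # Model predicts negative; probability it is correct is 1 - p.
            expected_correct += 1 - p
    return expected_correct / len(calibrated_probs)

# Confident scores imply high expected accuracy; scores near 0.5 imply
# performance close to a coin flip.
print(estimate_accuracy([0.95, 0.05, 0.9, 0.1]))  # 0.925
print(estimate_accuracy([0.55, 0.45, 0.5, 0.5]))  # 0.525
```

The same trick extends to the expected confusion matrix, from which metrics such as ROC AUC can be estimated.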
NannyML uses Data Reconstruction with PCA to detect multivariate data drift. For univariate data drift, it uses statistical tests that measure the observed drift and report a p-value: how likely the observed sample would be if there were no drift.
Model output drift is monitored with the same univariate methodology used for a continuous feature. Together, these help you identify what is changing in your data and your model.
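The PCA-based approach can be sketched as follows: fit PCA on reference data, then reconstruct later chunks from the reduced representation. When the relationships between features break down, reconstruction error rises, even if each feature's marginal distribution looks unchanged. This is a minimal sketch of that idea (names are illustrative, not the NannyML API):

```python
# Toy sketch of multivariate drift detection via PCA reconstruction error,
# the idea behind Data Reconstruction with PCA. Illustrative only.
import numpy as np

def fit_pca(reference, n_components):
    """Fit PCA on reference data; return the mean and principal components."""
    mean = reference.mean(axis=0)
    # SVD of the centred data gives the principal axes as rows of vt.
    _, _, vt = np.linalg.svd(reference - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(data, mean, components):
    """Mean Euclidean distance between rows and their PCA reconstructions."""
    centred = data - mean
    reconstructed = centred @ components.T @ components
    return float(np.linalg.norm(centred - reconstructed, axis=1).mean())

rng = np.random.default_rng(0)
# Reference data: two strongly correlated features.
x = rng.normal(size=(500, 1))
reference = np.hstack([x, x + 0.1 * rng.normal(size=(500, 1))])
mean, comps = fit_pca(reference, n_components=1)

# A chunk from the same distribution reconstructs well...
same = np.hstack([x, x + 0.1 * rng.normal(size=(500, 1))])
# ...while a chunk where the correlation broke down reconstructs poorly,
# flagging multivariate drift although each marginal looks unchanged.
drifted = np.hstack([x, rng.permutation(x)])
print(reconstruction_error(same, mean, comps) <
      reconstruction_error(drifted, mean, comps))  # True
```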
Target drift is monitored by tracking the mean occurrence of positive events per chunk, as well as the chi-squared statistic from a 2-sample chi-squared test comparing each chunk's target values to the reference.
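For a binary target, the 2-sample chi-squared test amounts to comparing the positive-event counts in a chunk against the reference period. A minimal sketch, assuming a binary target and using the closed-form survival function for one degree of freedom (function and parameter names are made up for illustration):

```python
# Toy sketch of target-drift monitoring with a 2-sample chi-squared test on a
# binary target. Illustrative only -- not the NannyML API.
import math

def chi2_target_drift(reference_pos, reference_n, chunk_pos, chunk_n):
    """Chi-squared statistic and p-value (1 degree of freedom) comparing
    positive-event rates in a reference period and a chunk."""
    observed = [
        [reference_pos, reference_n - reference_pos],
        [chunk_pos, chunk_n - chunk_pos],
    ]
    total = reference_n + chunk_n
    row_totals = [reference_n, chunk_n]
    col_totals = [reference_pos + chunk_pos,
                  (reference_n - reference_pos) + (chunk_n - chunk_pos)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed[i][j] - expected) ** 2 / expected
    # Survival function of the chi-squared distribution with 1 dof.
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Positive rate jumps from 10% to 20% between reference and chunk.
stat, p = chi2_target_drift(reference_pos=100, reference_n=1000,
                            chunk_pos=200, chunk_n=1000)
print(round(stat, 2), p < 0.05)  # statistic is about 39.22; drift is significant
```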
ranker = nml.Ranker.by('alert_count')  # rank features by their number of drift alerts
ranked_features = ranker.rank(drift_results, model_metadata, only_drifting=True)
Because NannyML can estimate performance, it allows you to get alerts on data drift that impacts performance. These are tailored to draw your attention to statistically significant events, helping avoid alert fatigue.
You can use our ranker to list changes according to their significance and likely impact, allowing you to prioritise problems. This means you can link drops in performance to the data drift that causes them.
NannyML can be set up in seconds on your own local or cloud environments, ensuring your data stays in your control and model monitoring fully complies with your security policies.
$ pip install nannyml
NannyML integrates with any classification model, regardless of language or format. More problem types will be supported in the future.
NannyML turns the machine learning flow into a cycle, empowering data scientists to do meaningful, informed post-deployment data science: monitoring and improving models in production through iterative deployments.
Check out the open source code on our GitHub, as well as our detailed README, code examples and other guides.
Join the conversation in our Slack, as part of our community of users, contributors and friends.