The Post Deployment Data Science Blog
All things data science and machine learning, post-deployment. Run by NannyML.
Monitoring Custom Metrics without Ground Truth
Setting up custom metrics for your machine learning models can surface insights that standard metrics miss. In this tutorial, we’ll walk through the process step by step, showing you how to create custom metrics tailored to classification and regression models.
Reverse Concept Drift Algorithm: Insights from NannyML Research
This blog explores concept drift and how it impacts machine learning models. We'll discuss the algorithms we developed and the experiments we conducted to detect and measure its impact, and how we arrived at the Reverse Concept Drift Algorithm.
Which Multivariate Drift Detection Method Is Right for You: Comparing DRE and DC
In this blog, we compare two multivariate drift detection methods, Data Reconstruction Error (DRE) and Domain Classifier (DC), to help you determine which one is better suited for your needs.
Common Pitfalls in Monitoring Default Prediction Models and How to Fix Them
Learn common reasons why loan default prediction models degrade in production, and follow a hands-on tutorial to resolve these issues.
Prevent Failure of Product Defect Detection Models: A Post-Deployment Guide
This blog dissects the core challenge of monitoring defect detection models: the censored confusion matrix. I also explore how business value metrics can help you articulate the financial impact of your ML models to stakeholders outside data science.
How to Monitor a Credit Card Fraud Detection ML Model
Learn common reasons why fraud detection models degrade in production, and follow a hands-on tutorial to resolve these issues.
Keep your Model Performance Breezy: Wind Turbine Energy Model Monitoring
Explore how NannyML's tools can help keep wind turbine energy prediction models reliable.
Why Relying on Training Data for ML Monitoring Can Trick You
The most common mistake when choosing a reference dataset is using the training data. This blog highlights the drawbacks of that choice and guides you in selecting the right reference data.
Using Concept Drift as a Model Retraining Trigger
Discover how NannyML’s innovative Reverse Concept Drift (RCD) algorithm optimizes retraining schedules and ensures accurate, timely interventions when concept drift impacts model performance.
Retraining is Not All You Need
Your machine learning (ML) model’s performance will likely decrease over time. In this blog, we explore the steps you can take to remedy the degradation and get your model back on track.
Getting Up To Speed With NannyML’s OSS Library Optimizations (2024)
Discover the latest optimizations to speed up your ML monitoring and maintain top performance with NannyML's improved open-source tools!
A Comprehensive Guide to Univariate Drift Detection Methods
Discover how to tackle univariate drift with our comprehensive guide. Learn about key techniques such as the Jensen-Shannon Distance, the Hellinger Distance, the Kolmogorov-Smirnov Test, and more. Implement them in Python using the NannyML library.
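Before reaching for the library, here is a minimal standalone sketch of two of these measures using scipy on synthetic data (the sample data and binning choices are illustrative assumptions, not NannyML's implementation):

```python
# Sketch: Jensen-Shannon distance and KS test between a reference
# feature and a (shifted) production feature, using scipy directly.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-period values
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted live values

# The JS distance compares distributions, so bin both samples on a
# shared grid first (NannyML handles this binning internally).
edges = np.histogram_bin_edges(np.concatenate([reference, production]), bins=30)
p = np.histogram(reference, bins=edges, density=True)[0]
q = np.histogram(production, bins=edges, density=True)[0]
js = jensenshannon(p, q, base=2)  # 0 = identical, 1 = fully disjoint

# The Kolmogorov-Smirnov test compares the empirical CDFs directly.
ks_stat, p_value = ks_2samp(reference, production)
print(f"JS distance: {js:.3f}, KS statistic: {ks_stat:.3f} (p={p_value:.2g})")
```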
Stress-free Monitoring of Predictive Maintenance Models
Prevent costly machine breakdowns with NannyML’s workflow: Learn to tackle silent model failures, estimate performance with CBPE, and resolve issues promptly.
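For readers new to CBPE (Confidence-based Performance Estimation), here is a toy sketch of its core idea: label-free metric estimates built from calibrated scores. The uniform scores are a made-up example, and this illustrates the principle only, not NannyML's implementation.

```python
# Toy CBPE idea: if scores are calibrated, each prediction carries its
# own probability of being correct, so expected confusion-matrix
# entries (and metrics) can be computed without ground-truth labels.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, size=10_000)  # calibrated P(y=1 | x)
preds = (scores >= 0.5).astype(int)          # thresholded predictions

exp_tp = scores[preds == 1].sum()            # predicted 1, truly 1 (in expectation)
exp_fp = (1 - scores[preds == 1]).sum()      # predicted 1, truly 0
exp_fn = scores[preds == 0].sum()            # predicted 0, truly 1
exp_tn = (1 - scores[preds == 0]).sum()      # predicted 0, truly 0

est_accuracy = (exp_tp + exp_tn) / len(scores)
est_precision = exp_tp / (exp_tp + exp_fp)
print(f"estimated accuracy={est_accuracy:.3f}, precision={est_precision:.3f}")
```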
Effective ML Monitoring: A Hands-on Example
NannyML’s ML monitoring workflow is an easy, repeatable and effective way to ensure your models keep performing well in production.
Population Stability Index (PSI): A Comprehensive Overview
What is the Population Stability Index (PSI)? How can you use it to detect data drift using Python? Is PSI the right method for you? This blog is the perfect read if you want answers to those questions.
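As a preview of what the post covers, here is a minimal NumPy sketch of the PSI formula, PSI = Σ (actual_i − expected_i) · ln(actual_i / expected_i) over bins; the quantile binning and epsilon guard are illustrative choices:

```python
# Sketch of the PSI computation: bin the reference data, compare bin
# frequencies between reference and production, and sum the terms.
import numpy as np

def psi(reference: np.ndarray, production: np.ndarray, n_bins: int = 10) -> float:
    # Quantile bins from the reference data keep expected bins non-empty.
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    expected = np.histogram(reference, bins=edges)[0] / len(reference)
    # Clip production values into the reference range so each lands in a bin.
    clipped = np.clip(production, edges[0], edges[-1])
    actual = np.histogram(clipped, bins=edges)[0] / len(production)
    # Epsilon guard avoids log(0) and division by zero on empty bins.
    eps = 1e-6
    expected, actual = np.clip(expected, eps, None), np.clip(actual, eps, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(1)
print(psi(rng.normal(0, 1, 5_000), rng.normal(0.3, 1, 5_000)))  # shifted sample
```

A common rule of thumb reads PSI below 0.1 as no significant shift, 0.1 to 0.2 as moderate shift, and above 0.2 as significant shift.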
Multivariate Drift Detection: A Comparative Study on Real-world Data
This blog introduces covariate shift and various approaches to detecting it. It then dives deep into multivariate drift detection algorithms, applying them with NannyML on a real-world dataset.
Detect Data Drift Using Domain Classifier in Python
A comprehensive explanation and practical guide to using the Domain Classifier method for detecting multivariate drift.
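The guide walks through NannyML's API; as background, here is a from-scratch sketch of the domain classifier idea using scikit-learn on synthetic data (both illustrative assumptions): train a model to separate reference rows from production rows, and read its cross-validated AUC as a drift score.

```python
# Domain classifier idea: label reference rows 0 and production rows 1,
# train a classifier, and check whether it can tell them apart.
# AUC ≈ 0.5 means the distributions look alike; AUC → 1 means drift.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, size=(2_000, 5))
production = rng.normal(0.0, 1.0, size=(2_000, 5))
production[:, 0] += 0.5  # introduce drift in one feature

X = np.vstack([reference, production])
y = np.concatenate([np.zeros(len(reference)), np.ones(len(production))])

auc = cross_val_score(GradientBoostingClassifier(), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"domain classifier AUC: {auc:.3f}")
```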
Monitoring Strategies for Demand Forecasting Machine Learning Models
Demand forecasting models are among the most challenging to monitor post-deployment.
Harnessing the Power of AWS SageMaker & NannyML PART 1: Training and Deploying an XGBoost Model
A walkthrough on how to train, deploy and continuously monitor ML models using NannyML and AWS SageMaker.
How to Monitor ML Models with NannyML SageMaker Algorithms
A walkthrough on how to deploy NannyML Monitoring Algorithms via the AWS Marketplace and SageMaker.
How to Deploy NannyML in Production: A Step-by-Step Tutorial
Let’s dive into the process of setting up a monitoring system using NannyML with Grafana, PostgreSQL, and Docker.
91% of ML Models Degrade Over Time
A closer look at a paper from MIT, Harvard, and other institutions showing how ML models' performance tends to degrade over time.
Bad Machine Learning Models Can Still Be Well-Calibrated
You don’t need a perfect oracle to get your probabilities right.
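A quick sketch of the point with simulated data (an illustrative assumption): a model whose accuracy is far from perfect can still be perfectly calibrated, because calibration only asks that predicted probabilities match observed frequencies.

```python
# Simulate a "bad but calibrated" model: it predicts probability p,
# and outcomes really do occur with probability p.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(3)
p = rng.uniform(0.05, 0.95, size=50_000)  # predicted probabilities
y = rng.binomial(1, p)                    # outcomes drawn at exactly those rates

print("accuracy:", ((p >= 0.5) == y).mean())  # ~0.72: far from an oracle
frac_pos, mean_pred = calibration_curve(y, p, n_bins=10)
for mp, fp in zip(mean_pred, frac_pos):   # yet predictions match frequencies
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
```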
Detecting Covariate Shift: A Guide to the Multivariate Approach
Good old PCA can alert you when the distribution of your production data changes.
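To close, a compact sketch of that PCA idea with scikit-learn on synthetic data (illustrative assumptions throughout): fit PCA on reference data, then watch the reconstruction error on production data. A correlation change that leaves every marginal distribution untouched still raises the error, which is what makes the multivariate approach useful.

```python
# PCA-based covariate shift detection: compress and rebuild the data;
# if production data no longer fits the learned structure, the
# reconstruction error rises even when each feature's marginal is unchanged.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
cov = np.array([[1.0, 0.8, 0.3],
                [0.8, 1.0, 0.3],
                [0.3, 0.3, 1.0]])
reference = rng.multivariate_normal(np.zeros(3), cov, size=5_000)
production = rng.multivariate_normal(np.zeros(3), cov, size=5_000)
production[:, 1] = rng.permutation(production[:, 1])  # break correlations only

scaler = StandardScaler().fit(reference)
pca = PCA(n_components=2).fit(scaler.transform(reference))

def reconstruction_error(X):
    Z = scaler.transform(X)
    Z_hat = pca.inverse_transform(pca.transform(Z))  # compress, then rebuild
    return np.linalg.norm(Z - Z_hat, axis=1).mean()

print("reference error: ", round(reconstruction_error(reference), 3))
print("production error:", round(reconstruction_error(production), 3))
```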