The Post Deployment Data Science Blog

All things data science and machine learning, post-deployment. Run by NannyML.

Which Multivariate Drift Detection Method Is Right for You: Comparing DRE and DC

In this blog, we compare two multivariate drift detection methods, Data Reconstruction Error (DRE) and Domain Classifier (DC), to help you determine which one is better suited for your needs.

From Crisis to Control: NannyML's Role in Accurate Energy Demand Forecasting

Common Pitfalls in Monitoring Default Prediction Models and How to Fix Them

Learn the common reasons why loan default prediction models degrade in production, and follow a hands-on tutorial to resolve these issues.

Prevent Failure of Product Defect Detection Models: A Post-Deployment Guide

This blog dissects the core challenge of monitoring defect detection models: the censored confusion matrix. Additionally, I explore how business value metrics can help you articulate the financial impact of your ML models to non-technical stakeholders.

How to Monitor a Credit Card Fraud Detection ML Model

Learn the common reasons why fraud detection models degrade in production, and follow a hands-on tutorial to resolve these issues.

Keep your Model Performance Breezy: Wind Turbine Energy Model Monitoring

Explore how NannyML's tools can help maintain the reliability of wind energy prediction models.

Why Relying on Training Data for ML Monitoring Can Trick You

The most common mistake when choosing a reference dataset is using the training data. This blog highlights the drawbacks of that choice and guides you in selecting the correct reference data.

Estimating Model Performance Without Labels

Using Concept Drift as a Model Retraining Trigger

Discover how NannyML’s innovative Reverse Concept Drift (RCD) algorithm optimizes retraining schedules and ensures accurate, timely interventions when concept drift impacts model performance.

Retraining is Not All You Need

Your machine learning (ML) model’s performance will likely decrease over time. In this blog, we explore which steps you can take to remedy your model and get it back on track.

Getting Up To Speed With NannyML’s OSS Library Optimizations (2024)

Discover the latest optimizations to speed up your ML monitoring and maintain top performance with NannyML's improved open-source tools!

A Comprehensive Guide to Univariate Drift Detection Methods

Discover how to tackle univariate drift with our comprehensive guide. Learn about key techniques such as the Jensen-Shannon Distance, Hellinger Distance, the Kolmogorov-Smirnov Test, and more. Implement them in Python using the NannyML library.
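
As a taste of what the guide covers, here is a minimal from-scratch sketch of two of those methods on synthetic data, computed with SciPy rather than through the NannyML library (which wraps them with chunking, thresholds, and plotting):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # stand-in reference feature
analysis = rng.normal(loc=0.5, scale=1.0, size=5_000)   # production data with a mean shift

# Kolmogorov-Smirnov test: compares the two empirical CDFs directly.
ks_stat, p_value = ks_2samp(reference, analysis)

# Jensen-Shannon distance: bin both samples on a shared grid and compare
# the resulting histograms as probability distributions.
edges = np.histogram_bin_edges(np.concatenate([reference, analysis]), bins=30)
ref_hist = np.histogram(reference, bins=edges)[0]
ana_hist = np.histogram(analysis, bins=edges)[0]
js_distance = jensenshannon(ref_hist, ana_hist)  # normalizes the histograms itself

print(f"KS statistic: {ks_stat:.3f} (p = {p_value:.2e})")
print(f"Jensen-Shannon distance: {js_distance:.3f}")
```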

Stress-free Monitoring of Predictive Maintenance Models

Prevent costly machine breakdowns with NannyML’s workflow: learn to tackle silent model failures, estimate performance with CBPE, and resolve issues promptly.

Effective ML Monitoring: A Hands-on Example

NannyML’s ML monitoring workflow is an easy, repeatable and effective way to ensure your models keep performing well in production.

Don’t Drift Away with Your Data: Monitoring Data Drift from Setup to Cloud

Population Stability Index (PSI): A Comprehensive Overview

What is the Population Stability Index (PSI)? How can you use it to detect data drift using Python? Is PSI the right method for you? This blog is the perfect read if you want answers to those questions.
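
For a flavor of the computation, here is a minimal sketch of PSI on a single continuous feature, with bin edges taken from the reference data; the epsilon clipping is one common way to handle empty bins:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """Population Stability Index between a reference and a production sample."""
    # Bin edges come from the reference ("expected") distribution.
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    # Clip production values into the reference range so outliers land in edge bins.
    actual = np.clip(actual, edges[0], edges[-1])

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Small epsilon avoids log(0) and division by zero for empty bins.
    eps = 1e-6
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
print(psi(rng.normal(0, 1, 10_000), rng.normal(0.3, 1, 10_000)))  # noticeably > 0
```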

We used data drift signals to estimate model performance — so you don't have to

Multivariate Drift Detection: A Comparative Study on Real-world Data

This blog introduces covariate shift and various approaches to detecting it, then dives deep into multivariate drift detection algorithms, applying NannyML to a real-world dataset.

Detect Data Drift Using Domain Classifier in Python

A comprehensive explanation and practical guide to using the Domain Classifier method for detecting multivariate drift.
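
The core idea is simple enough to sketch from scratch: train a classifier to distinguish reference rows from analysis rows, and read its cross-validated ROC AUC as a drift score (about 0.5 means the two sets are indistinguishable; values near 1.0 signal strong drift). A minimal scikit-learn version on synthetic data, not NannyML's implementation:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def domain_classifier_auc(reference: pd.DataFrame, analysis: pd.DataFrame) -> float:
    """Cross-validated AUC of a classifier trained to separate reference from analysis rows."""
    X = pd.concat([reference, analysis], ignore_index=True)
    y = np.concatenate([np.zeros(len(reference)), np.ones(len(analysis))])
    return float(cross_val_score(GradientBoostingClassifier(), X, y,
                                 cv=5, scoring="roc_auc").mean())

rng = np.random.default_rng(0)
reference = pd.DataFrame({"x1": rng.normal(0, 1, 2_000), "x2": rng.normal(0, 1, 2_000)})
no_drift = pd.DataFrame({"x1": rng.normal(0, 1, 2_000), "x2": rng.normal(0, 1, 2_000)})
drifted = pd.DataFrame({"x1": rng.normal(1, 1, 2_000), "x2": rng.normal(0, 1, 2_000)})

print(domain_classifier_auc(reference, no_drift))  # ~0.5: indistinguishable
print(domain_classifier_auc(reference, drifted))   # well above 0.5: drift
```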

Guide: How to evaluate if NannyML is the right monitoring tool for you

Can we detect LLM hallucinations? — A quick review of our experiments

Automating post-deployment Data Collection for ML Monitoring

How to Estimate Performance and Detect Drifting Images for a Computer Vision Model?

Detecting Concept Drift: Impact on Machine Learning Performance

When should I retrain my model?

Are your NLP models deteriorating post-deployment? Let’s use unlabeled data to find out

Monitoring Strategies for Demand Forecasting Machine Learning Models

Demand forecasting models are among the most challenging to monitor post-deployment.

Harnessing the Power of AWS SageMaker & NannyML PART 1: Training and Deploying an XGBoost Model

A walkthrough of how to train, deploy, and continuously monitor ML models using NannyML and AWS SageMaker.

Monitoring a Hotel Booking Cancellation Model Part 1: Creating Reference and Analysis Set

How to monitor ML models with NannyML SageMaker Algorithms

A walkthrough of how to deploy NannyML Monitoring Algorithms via the AWS Marketplace and SageMaker.

A deep dive into the NannyML quickstart

Tutorial: Monitoring Missing and Unseen values with NannyML

Understanding the EU AI Act as a Data Scientist

5 Levels of MLOps Maturity

Monitoring Workflow for Machine Learning Systems

Don’t let yourself be fooled by data drift

Tutorial: Monitoring an ML Model with NannyML and Google Colab

How to detect data drift with hypothesis testing

Hint: forget about the p-values
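
A quick illustration of the pitfall: with enough data, a two-sample test will flag even a negligible shift as wildly "significant". The sketch below uses SciPy's Kolmogorov-Smirnov test on two nearly identical synthetic samples:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Two huge samples from *nearly* identical distributions.
a = rng.normal(0.00, 1.0, 1_000_000)
b = rng.normal(0.01, 1.0, 1_000_000)

stat, p = ks_2samp(a, b)
print(f"KS statistic: {stat:.4f}, p-value: {p:.2e}")
# The p-value is tiny, so a naive test flags "drift", yet the actual
# shift (KS statistic around 0.004) is far too small to matter in practice.
```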

How to Deploy NannyML in Production: A Step-by-Step Tutorial

Let’s dive into the process of setting up a monitoring system using NannyML with Grafana, PostgreSQL, and Docker.

91% of ML Models degrade in time

A closer look at a paper from MIT, Harvard, and other institutions showing how ML models' performance tends to degrade over time.

Understanding Data Drift: Impact on Machine Learning Model Performance

Bad Machine Learning models can still be well-calibrated

You don’t need a perfect oracle to get your probabilities right.
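
A minimal illustration of the idea on synthetic data: a "model" that ignores its inputs and always predicts the base rate is perfectly calibrated, yet has no discriminative power at all:

```python
import numpy as np

rng = np.random.default_rng(7)
y = (rng.random(100_000) < 0.3).astype(int)  # 30% positive class

# A "model" that ignores the features entirely and always predicts the base rate.
p = np.full(len(y), 0.3)

# Calibration check: among cases scored 0.3 (here, all of them),
# roughly 30% are actually positive, so the score is well calibrated...
print("observed positive rate:", y.mean())
# ...yet the model is useless for ranking: every case gets the same score,
# so its ROC AUC is 0.5, no better than chance.
```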

What makes model monitoring in production hard?

6 ways to address data distribution shift

3 Common Causes of ML Model Failure in Production

Detecting Covariate Shift: A Guide to the Multivariate Approach

Good old PCA can alert you when the distribution of your production data changes.
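
To sketch the idea: fit PCA on reference data, compress and reconstruct incoming data, and watch the reconstruction error. When the structure of the data changes (here, a flipped correlation between two synthetic features), the error jumps even though each feature's marginal distribution is unchanged:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
reference = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=5_000)
drifted = rng.multivariate_normal([0, 0], [[1.0, -0.8], [-0.8, 1.0]], size=5_000)

# Fit scaler + PCA on reference data, keeping fewer components than features.
pipeline = make_pipeline(StandardScaler(), PCA(n_components=1)).fit(reference)

def reconstruction_error(X: np.ndarray) -> float:
    """Mean distance between each row and its compress-then-reconstruct image."""
    reconstructed = pipeline.inverse_transform(pipeline.transform(X))
    return float(np.mean(np.linalg.norm(X - reconstructed, axis=1)))

print("reference error:", reconstruction_error(reference))  # baseline
print("drifted error:  ", reconstruction_error(drifted))    # clearly higher -> alert
```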

Usage statistics in NannyML

Data Drift Detection for Continuous Variables: Exploring Kolmogorov-Smirnov Test

Estimating Model Performance without Ground Truth

It’s possible, as long as you keep your probabilities calibrated
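
The intuition is compact enough to sketch: if a binary classifier's scores are well calibrated, the probability that a thresholded prediction is correct is max(p, 1 − p), so averaging that quantity estimates accuracy without any labels. NannyML's CBPE builds on this idea, adding chunking, calibration, and more metrics. A toy version on synthetic, perfectly calibrated scores:

```python
import numpy as np

def estimated_accuracy(proba: np.ndarray) -> float:
    """Expected accuracy of a binary classifier from calibrated scores alone.

    If p is the calibrated probability of the positive class, the prediction
    (at threshold 0.5) is correct with probability max(p, 1 - p) — no labels needed.
    """
    return float(np.mean(np.maximum(proba, 1.0 - proba)))

rng = np.random.default_rng(5)
proba = rng.beta(2, 5, size=10_000)           # hypothetical calibrated scores
y = (rng.random(10_000) < proba).astype(int)  # labels consistent with the scores

print("estimated:", estimated_accuracy(proba))
print("realized: ", ((proba >= 0.5) == y).mean())  # closely matches the estimate
```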

Three things I learned while containerizing a Python API

AI is building paperclips, here is our secret plan on how to stop it

Automation vs Prediction in AI: how do they differ?

Monitoring as a first step to observability

The AI Pyramid of Needs