Implementing predictive maintenance models can significantly increase machinery productivity. However, these machine-learning models tend to degrade over time, delivering suboptimal predictions that can lead to substantial business losses. In this talk, we will focus on ML monitoring as a critical tool for measuring model quality, identifying root causes, and resolving issues. We will address three specific challenges of monitoring predictive maintenance models.

In the first part, we will cover how to deal with delayed and partial (i.e., censored) target data, using performance estimation algorithms that quantify the impact of covariate shift on ML metrics and allow us to estimate those metrics even without access to target data. The second part will focus on dealing with low data volume per machine using Bayesian model evaluation and monitoring, which helps identify performance issues quickly and is particularly useful when deploying new predictive maintenance models. The third part will deal with data quality issues, covering both simple checks and more advanced covariate drift detection techniques for quickly identifying low-quality data.
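To give a flavor of the first topic: one well-known family of performance estimation algorithms is confidence-based performance estimation (CBPE), which estimates a classifier's accuracy from its own predicted probabilities, with no labels required. The sketch below is a minimal illustration of that idea for binary classification, assuming well-calibrated probabilities; the function name is illustrative, not from the talk.

```python
import numpy as np

def estimate_accuracy(pred_proba):
    """Estimate accuracy from predicted probabilities alone (no labels),
    in the spirit of confidence-based performance estimation (CBPE).

    Assumes a binary classifier with well-calibrated probabilities.
    """
    p = np.asarray(pred_proba, dtype=float)
    # The model predicts the more likely class; if p is calibrated, the
    # probability that this prediction is correct is max(p, 1 - p).
    # Averaging over the batch gives the expected (estimated) accuracy.
    return float(np.mean(np.maximum(p, 1.0 - p)))

# A batch of confident predictions yields a high estimated accuracy,
# while probabilities near 0.5 pull the estimate toward 0.5.
print(estimate_accuracy([0.9, 0.1, 0.8, 0.7]))  # 0.825
```

When covariate shift pushes incoming data into regions where the model is less confident, this estimate drops, flagging likely performance degradation before any delayed labels arrive.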
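For the second topic, a common Bayesian approach to evaluating a model on very few observations per machine is the Beta-Binomial model: a Beta prior over accuracy is updated with the observed correct/incorrect counts, yielding a full posterior rather than a single noisy point estimate. This is a generic sketch of that technique, not necessarily the exact method presented in the talk; the prior parameters and function name are illustrative.

```python
import numpy as np

def beta_posterior_accuracy(n_correct, n_total, a=1.0, b=1.0,
                            n_samples=100_000, seed=0):
    """Bayesian evaluation of per-machine accuracy from few observations.

    Beta(a, b) prior + Binomial likelihood -> Beta posterior (conjugacy).
    Returns the posterior mean and a 90% credible interval.
    """
    a_post = a + n_correct
    b_post = b + (n_total - n_correct)
    # Posterior mean has a closed form; the credible interval is taken
    # from Monte Carlo samples of the Beta posterior.
    mean = a_post / (a_post + b_post)
    rng = np.random.default_rng(seed)
    samples = rng.beta(a_post, b_post, n_samples)
    lo, hi = np.quantile(samples, [0.05, 0.95])
    return mean, (float(lo), float(hi))

# With only 10 predictions for a newly deployed model, the wide credible
# interval makes the remaining uncertainty about its accuracy explicit.
mean, (lo, hi) = beta_posterior_accuracy(8, 10)
```

Because the posterior narrows as evidence accumulates, the same machinery works from the first few predictions of a newly deployed model onward, which is exactly the low-data regime the talk targets.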
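For the third topic, one standard univariate covariate drift check compares the distribution of a feature in a live window against a reference window using the two-sample Kolmogorov-Smirnov statistic. The sketch below implements that statistic directly with NumPy as a generic illustration; the specific tests covered in the talk may differ, and the function name is illustrative.

```python
import numpy as np

def ks_statistic(reference, analysis):
    """Two-sample Kolmogorov-Smirnov statistic for univariate drift detection.

    Compares the empirical CDFs of a reference window (e.g. training data)
    and an analysis window (live data); 0 means identical samples, values
    near 1 mean the distributions barely overlap.
    """
    ref = np.sort(np.asarray(reference, dtype=float))
    ana = np.sort(np.asarray(analysis, dtype=float))
    # Evaluate both empirical CDFs at every observed value and take the
    # largest absolute difference between them.
    grid = np.concatenate([ref, ana])
    cdf_ref = np.searchsorted(ref, grid, side="right") / len(ref)
    cdf_ana = np.searchsorted(ana, grid, side="right") / len(ana)
    return float(np.max(np.abs(cdf_ref - cdf_ana)))

# A sensor whose readings shift away from the reference range produces a
# statistic near 1, flagging the feature for inspection.
print(ks_statistic([0.0, 1.0, 2.0], [10.0, 11.0, 12.0]))  # 1.0
```

In practice such a statistical test sits alongside the simple checks mentioned in the abstract (missing values, out-of-range readings, stale sensors), catching subtler distribution shifts that rule-based checks miss.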
April 11, 2024 4:00 PM
Wojtek Kuberski