Webinar details

Strategies for Monitoring LLM Hallucination

The talk will cover why and how to monitor LLMs deployed to production. We will focus on state-of-the-art solutions for detecting hallucinations, split into two types:

1. LLM self-evaluation

2. Uncertainty Quantification

In the LLM self-evaluation part, we will cover using an LLM (potentially the same one that produced the answer) to quantify the quality of that answer, including state-of-the-art algorithms such as SelfCheckGPT and LLM-eval. In the Uncertainty Quantification part, we will discuss algorithms that leverage token probabilities to estimate the quality of model responses, from simple accuracy estimation to more advanced methods for estimating Semantic Uncertainty or any classification metric. You will build an intuitive understanding of these LLM monitoring methods and their strengths and weaknesses, and learn how to easily set up an LLM monitoring system.
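To make the two ideas concrete, here are two minimal Python sketches (illustrative only, not code from the talk). The first shows a SelfCheckGPT-flavoured self-evaluation loop: resample the answer several times and ask the LLM to judge consistency. The `generate` callable stands in for whatever LLM API you use, and the judging prompt wording is an assumption.

```python
from typing import Callable


def self_evaluation_score(
    question: str,
    answer: str,
    generate: Callable[[str, float], str],  # your LLM call: (prompt, temperature) -> text
    n_samples: int = 5,
) -> float:
    """SelfCheckGPT-style consistency sketch: resample answers to the same
    question and ask the LLM whether each resample agrees with the original
    answer. Returns the fraction judged consistent; low values hint at
    hallucination."""
    samples = [generate(question, 1.0) for _ in range(n_samples)]  # stochastic resamples
    consistent = 0
    for sample in samples:
        verdict = generate(
            "Question: " + question + "\n"
            "Answer A: " + answer + "\n"
            "Answer B: " + sample + "\n"
            "Do answers A and B agree on the facts? Reply Yes or No.",
            0.0,  # deterministic judging
        )
        consistent += verdict.strip().lower().startswith("yes")
    return consistent / n_samples
```

The second sketch illustrates token-probability-based uncertainty, assuming your serving API returns per-token log probabilities for the generated answer. The aggregations shown (mean log-probability, perplexity, minimum token probability) are one simple choice among many, not the exact methods covered in the session.

```python
import math


def token_probability_scores(token_logprobs: list[float]) -> dict[str, float]:
    """Aggregate per-token log probabilities into simple sequence-level
    confidence signals."""
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return {
        "mean_logprob": mean_logprob,                      # closer to 0 = more confident
        "perplexity": math.exp(-mean_logprob),             # lower = more confident
        "min_token_prob": math.exp(min(token_logprobs)),   # weakest single token
    }


# Example: three confident tokens followed by one very uncertain token.
print(token_probability_scores([-0.05, -0.10, -0.02, -2.30]))
```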

When: May 9, 2024, 2:00 PM

Speaker: Wojtek Kuberski

NannyML: the open source library for post-deployment data science