December 22, 2022

AI-powered underwriting can ruin your risk profile unnoticed – Here is why

Progress in AI is astoundingly fast. What was a research breakthrough five years ago is a well-established industry standard today. Properly implemented, AI can bring immense value, and this has been clear for a while. Industries such as banking, insurance, and telecom are paving the road to widespread adoption, often leveraging dedicated tooling for model development and deployment. However, the actual use of AI in production is nowhere near the level you read about online. What's causing decision makers to hit the brakes?

Like any world-changing technology, AI comes with a new set of risks that are only slowly becoming apparent. That is the main reason so many models stay bottled up in the proof-of-concept stage. People learn (sometimes the hard way) that once you release a model into the real world, things can go very wrong, very quickly. The two most important causes are silent failures and a lack of insight into the intelligent systems making the decisions.

Let’s take insurance. A firm rolls out an AI-powered underwriting system on its website to sign up new clients without human intervention. Six months go by before anyone notices that the model has been offering extremely low premiums to certain clients, because it was never trained on that demographic. This is an example of a silent failure. It’s silent because the system keeps doing its thing and at first glance everything looks fine, yet the mistakes the model makes can be very costly. We cover silent failures in an in-depth technical series; here, we will focus on the necessity of oversight.
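
To make this failure mode concrete, here is a minimal sketch of how such a blind spot could be caught before six months pass. It compares the distribution of a single input feature in recent production traffic against the training data using a two-sample Kolmogorov-Smirnov test. The feature name, the simulated data, and the alert threshold are all illustrative assumptions, not a prescription:

```python
# Minimal covariate-drift check: compare one input feature's distribution
# in production against the training set. The feature, the data, and the
# alpha threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-in for the underwriting model's training data: applicant ages.
train_ages = rng.normal(loc=45, scale=10, size=5_000)

# Stand-in for recent production traffic: a younger demographic the model
# never saw during training has started applying through the website.
prod_ages = rng.normal(loc=27, scale=4, size=1_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two
# samples are unlikely to come from the same distribution.
statistic, p_value = ks_2samp(train_ages, prod_ages)

ALPHA = 0.01  # alert threshold, chosen for illustration
if p_value < ALPHA:
    print(f"Drift alert on 'applicant_age': KS={statistic:.3f}, p={p_value:.2e}")
else:
    print("No significant drift detected on 'applicant_age'.")
```

In practice you would run a check like this per feature on a schedule, so the alert fires while the underpriced policies are still few.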

Let’s now talk about the lack of insight. Consider another case of a shift in pricing, this time higher rather than lower. That doesn’t sound like a big problem, does it? Well, it is. The firm attracted a new client base in the high-risk, low-income segment, which pays higher premiums. Previously, these people could not get insurance from the firm, but now they sign up through the website. A human agent would have realised this, but the AI system did not. This is not a failure of the AI, but rather a sign that something is wrong with the customer acquisition strategy. This insight allows you to react quickly to mistakes made there, before the systemic risk grows any further.
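
One way to surface this kind of shift is to watch the model’s outputs, not just its inputs. The sketch below tracks the weekly mean of predicted premiums against a baseline band fixed at deployment time, in the style of a control chart; every number and name in it is an assumption made for illustration:

```python
# Output-monitoring sketch: track the weekly mean predicted premium
# against a baseline band fixed at deployment time. All numbers and
# names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)

BASELINE_MEAN = 520.0  # mean weekly premium (EUR) at deployment (assumed)
BAND = 180.0           # acceptable deviation of the weekly mean (assumed)

def weekly_mean_premium(shift: float, n: int = 500) -> float:
    """Simulate one week of model outputs; `shift` mimics a new
    high-risk segment pushing predicted premiums upward."""
    return float(np.mean(rng.normal(BASELINE_MEAN + shift, 150.0, size=n)))

# Premiums drift upward as the new, riskier client base signs up.
for week, shift in enumerate([0.0, 10.0, 80.0, 250.0], start=1):
    mean_premium = weekly_mean_premium(shift)
    status = "ALERT" if abs(mean_premium - BASELINE_MEAN) > BAND else "ok"
    print(f"week {week}: mean premium {mean_premium:.0f} EUR [{status}]")
```

An alert like this does not tell you what went wrong; it tells you the business being written no longer looks like the one you priced for, which is exactly the cue for a human to investigate the acquisition funnel.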

Let’s take another example. Imagine you’re the head of retention. Together with your team, you try to detect who is likely to churn and prevent it before it’s too late. While doing this, you automatically keep a finger on the pulse and notice trends and irregularities. Fast-forward a couple of years: the retention process has been automated using AI. Now, without proper oversight and monitoring, you’re likely to miss important events, such as slight but crucial changes in your customers’ preferences, or new churn indicators and drivers that your model was not trained on.
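
Those missed events eventually show up as degrading performance, which you can track directly once labels arrive. Here is a minimal sketch, assuming monthly batches of labelled churn outcomes and a baseline ROC AUC recorded at deployment (all figures invented for illustration):

```python
# Concept-drift check for a churn model: track ROC AUC on monthly batches
# of labelled outcomes and alert when performance degrades beyond a
# tolerance. Batch data, baseline, and tolerance are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

def monthly_batch(signal_strength: float, n: int = 2_000):
    """Simulate one month of labelled traffic; a weaker signal mimics
    churn drivers the model was never trained on."""
    y_true = rng.integers(0, 2, size=n)
    y_score = signal_strength * y_true + rng.normal(0.0, 1.0, size=n)
    return y_true, y_score

BASELINE_AUC = 0.85  # AUC measured at deployment time (assumed)
TOLERANCE = 0.05     # acceptable drop before raising an alert (assumed)

# The model's signal erodes as customer preferences shift.
for month, strength in enumerate([2.0, 1.8, 1.2, 0.6], start=1):
    y_true, y_score = monthly_batch(strength)
    auc = roc_auc_score(y_true, y_score)
    status = "ALERT" if auc < BASELINE_AUC - TOLERANCE else "ok"
    print(f"month {month}: ROC AUC = {auc:.3f} [{status}]")
```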

Robust monitoring is more than simply checking whether an intelligent system still works and won’t cause a disaster; it can also yield actionable business insights. Continuing with our retention campaign, let’s say you notice a sharp decrease in the effectiveness of discounts. This can prompt you to dig deeper and ultimately adjust not only your retention strategy but also the marketing and pricing of the product.
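
A sketch of what such an insight-oriented check could look like: a rolling retention rate among customers who were offered a discount, with an alert when it falls well below the rate seen at campaign launch. Column names, thresholds, and the simulated erosion are all assumptions:

```python
# Insight-oriented monitoring: 30-day rolling retention rate among
# customers who received a discount offer. Column names, thresholds,
# and the simulated erosion are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Simulated daily log of discount offers: 1 = stayed, 0 = churned anyway.
n_days = 365
p_stay = np.linspace(0.70, 0.40, n_days)  # assumed gradual erosion
offers = pd.DataFrame({
    "date": pd.date_range("2022-01-01", periods=n_days, freq="D"),
    "retained": rng.binomial(1, p_stay),
})

offers["rolling_rate"] = offers["retained"].rolling(window=30).mean()

BASELINE_RATE = 0.70  # retention rate at campaign launch (assumed)
ALERT_DROP = 0.15     # relative drop that should trigger a review (assumed)

alerts = offers[offers["rolling_rate"] < BASELINE_RATE * (1 - ALERT_DROP)]
if not alerts.empty:
    first = alerts.iloc[0]
    print(f"Discount effectiveness alert from {first['date'].date()}: "
          f"30-day retention rate down to {first['rolling_rate']:.0%}")
```

A falling curve here does not mean the model is broken; it means the incentive itself is losing its grip, which is a pricing and marketing conversation rather than a data-science one.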

Having a good, robust AI monitoring system in place is paramount to preventing nasty surprises. A great system will provide you with actionable insight on top of that. That being said, the step after the analysis, namely addressing the issues it exposes, will remain a human task for the foreseeable future.
