Read the transcript of my podcast below with James Taylor, the CEO and principal consultant of Decision Management Solutions. In this podcast we discuss AI and decisioning: what works, how to deal with the black box of AI, and most important of all, how not to crash and burn trying to implement artificial intelligence. Give it a read below:
Peter Schooff: Hello, this is Peter Schooff, editor-in-chief of DataDecisioning.com. And today I have the great pleasure to speak once again with James Taylor, the CEO and principal consultant of Decision Management Solutions. James has pretty much been the decision guy for a long while. And he's the leading expert on how to use decision modeling, business rules and analytic technology to build decision management systems. And that's exactly what we're going to discuss in this podcast today. So James, first of all, thanks so much for joining me on another podcast, even though I think it's been 10 years since our last one.
James Taylor: Well it's great to be back, Peter. Great to chat with you again.
PS: So I was looking at your site recently, Decision Management Solutions, I saw something about the two types of AI. So first of all, can you just explain what the two types of AI are?
JT: Yeah, sure. So we tend to find it very helpful to differentiate between AI whose primary purpose is to help you make a decision, a business decision of some kind. So you can think of that kind of AI as like an extension of predictive analytics, machine learning, focusing on how to make predictions, how to make recommendations, how to help people make a decision. And we tend to differentiate that from the kind of user-experience AI--chatbots, conversational UIs, which are really focused on sort of representing language and handling people's inquiries in an intelligent way.
We really think of that more as a user-experience technology, how do I interact with my customers and use AI for that, and that's distinctly different from using AI to make business decisions. So we sort of like to separate those two because we don't find it helpful to sort of lump them into the same projects or the same exercises.
PS: I definitely like that breakdown, it's very useful. Now nothing's hotter right now than artificial intelligence, AI, but I haven't seen it much in the context (except on your site) of decision management. So how can AI help DM?
JT: Yeah, well, I think one of the reasons you see that is that right now AI is too hot, right? So everyone is like, oh, I can take my whole business decision, should I pay this medical claim that's coming in as a scanned image, and I'm going to go straight from scanned image to "tell me if I should pay it." I'm going to train one great big uber-AI to do this whole thing. Well, it doesn't work, number one, and number two, it makes it seem like it's a replacement for all these other technologies that already work.
So what we've done with a lot of clients is start and say: Look, think small a, small i, right? You're trying to build an artificially intelligent decisioning system, some digital decisioning system, and apply decision management so you can digitize a complex business decision. That doesn't mean you have to only use AI. What you can do is, you can say, well, I'm going to break my problem down into pieces and I'm going to look at those pieces.
Well, this piece here, that's eligibility. Well, I already know how to define who's eligible and who's not eligible for my products. Why would I train an AI to do that? Why wouldn't I just write a set of rules to tell me who's eligible and who's not? I already have structured data that's producing good risk scores, that's producing accurate predictive models about my customers or about my business. I should use those too, why would I replace them?
And then there are pieces where I kind of do need to use AI. It would make a difference in my interaction if I understood how cranky you were as a customer in this email where you're asking me about this claim. Sure, I can tell you who's supposed to pay it, whether your policy's in force, those are rules, or I can make a prediction about the risk of the claim, that's predictive analytics. But depending on what the tone of your email is, I might respond differently. Well, that requires me to train an AI to sort of think about, okay, well, are you cranky or not? And so what we're looking at is saying look, break the problem down, think about digital decisioning. And what you find is, there are decisions in that environment, sort of sub-decisions, if you like, which are too hard to write the rules for, and for which traditional predictive analytic techniques don't really cut it.
And then we start saying, well, instead of making that manually and telling the system the answer, why don't we see if we can train an AI to do that, let me see if there's a way to integrate AI into this digital decision to improve it, to enhance it, to make it cover a broad array of possible scenarios. And we see that as a very powerful way both to succeed with AI by not biting off more AI than you can deal with, but also to deal with all sorts of issues about trust and transparency, and all sorts of other things. So we think it's a win-win, but it requires people to sort of step back from the AI ledge, as it were, and instead of flinging themselves off this cliff, to say, well, okay, how can I use AI in a more constructive way?
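The decomposition James describes, rules where the rules are known, an existing predictive score where one exists, and a trained AI only for the sub-decision that needs it, can be sketched in a few lines of Python. Everything here (function names, the threshold, the toy "cranky" check standing in for a real NLP model) is illustrative, not anything from the podcast:

```python
# A hypothetical "digital decision" for a claim, broken into sub-decisions.
# All names and values are illustrative stand-ins.

def is_eligible(policy: dict) -> bool:
    # Rules-based sub-decision: we already know the eligibility rules,
    # so we write them down instead of training a model.
    return policy["in_force"] and policy["product"] in {"auto", "home"}

def claim_risk_score(claim: dict) -> float:
    # Stand-in for an existing predictive-analytics model scoring
    # structured claim data (0.0 = low risk, 1.0 = high risk).
    return min(1.0, claim["amount"] / 10_000)

def email_tone(text: str) -> str:
    # Stand-in for the trained AI sub-decision: classify how "cranky"
    # the customer's email is. A real system would call an NLP model.
    return "cranky" if "!" in text or "unacceptable" in text.lower() else "calm"

def decide(policy: dict, claim: dict, email: str) -> str:
    # The overall decision composes the sub-decisions; each piece can
    # log and explain its own small contribution.
    if not is_eligible(policy):
        return "decline: not eligible"
    if claim_risk_score(claim) > 0.8:
        return "refer: high risk"
    if email_tone(email) == "cranky":
        return "pay: fast-track with apology"
    return "pay: standard"

print(decide({"in_force": True, "product": "auto"},
             {"amount": 500},
             "Please process this when you can."))  # -> pay: standard
```

The point is that only `email_tone` would be an AI; swapping in a real model there doesn't change the rules or the existing risk score around it.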
PS: That's excellent. Now, as you said, lowercase a, lowercase i. So how do companies get started tomorrow using AI to help decisions?
JT: Well, I would apply the same mantra I've applied to these things for a very long time. Which is just to begin with the decision in mind. You have to say, well, what business decision am I actually trying to get better at? And how, in fact, do I make that decision, both today, but also, you want to engage a little bit of sort of design thinking. How would I like to make this decision? What would be ideal for this?
Once you start doing that, we're big proponents of decision modeling. So we'll often run design thinking sessions with clients. And as part of that design thinking session, what we're trying to do is build a logical decision model that says, here's how we would like to decide. And when we go through that exercise, you almost always find that there are decisions in that model--bits of that decision, if you like--that are clearly very rules based, I can write the rules. There are other bits that are analytical, I can build an analytic for that. There are other bits that really, I still want to have somebody do, because there's a personal interaction perhaps that drives it. But there are other pieces where I could train an AI.
So break the problem down first, and focus on the outcome: what is it I am trying to achieve? I'm trying to achieve a more accurate, more complete straight-through processing decision on that claim. I'm trying to decide what offer to make you that's most compelling so you'll buy more product. Whatever that decision is.
But to start there, and if you look at some of the research that's out there--people like McKinsey, they started differentiating between leaders in AI and analytics and followers. Overwhelmingly the leaders think through that stuff first, then decide what analytics and AI they need, and then they go off and try and find the data that will build the analytics and AI that they need to solve their business problem.
And the followers tend to pile up all their data and see what analytics and AI they can build. Then they presumably hope that if they build enough analytics, or enough AI algorithms, that somehow this will improve their business in some as yet to be defined way.
PS: And that goes all the way back to putting the cart before the horse.
JT: Exactly. I need to have a business problem whose solution will generate an ROI; then I can make an investment.
PS: Yeah, so now a use case, let's go to the real world, you don't have to use the name of the company. But where has this benefited a company in the real world?
JT: So I'll give you an example. Perhaps one of the most common ones is this whole idea of next best offer, next best action, right? So we're working with an insurance client that has an application that their agents use to sell insurance, and we want to upsell people, we want to add additional add-on products to the ones that are already being sold.
So obviously, we have a bunch of rules in there, because when we look at that decision, we have to decide which products you are allowed to buy, we've got to know which products you've already bought, which add-ons go with which of the products you have, and obviously we can't sell you a product when you're only allowed to have one of them. You know, all that kind of stuff. So there's all this suitability and eligibility stuff. You model those decisions up, and they're all very clearly rules-based, right? And then you start saying, well, I know I could make a different offer depending on what customer segment you're in. Well, that's a traditional analytic that they've already got, they've already built the analytics. So product sequencing and segmentation, that's traditional analytics.
So you start adding that, and now that's starting to make more accurate decisions, and that group has started to say, okay, we have some of these models, but our propensity models, our likely-to-buy models, are not robust enough. And we have more data about people's behavior and what they buy, and what they don't buy, and what they think of it, and surveys and texts, and other things. And so they're starting to say, okay, we want to apply machine learning and AI technologies to these new data sets that haven't historically driven our predictive analytics, so that we can make a more targeted offer to people while they're in this interaction with agents. And then what's interesting is, because they separated the whole thing out as a separate decisioning piece, they were also able to go to their customer portal and add exactly the same kind of cross-sell, upsell logic for an existing customer who's logging in to change their address or something. Run that same logic and say, oh, have you thought about this additional product? Get your agent to call you. Because it's the same basic decision.
And so by focusing on the decision, they were able to sort of get started quickly, with simple stuff, they're starting to add all the analytical work they've already historically done, like most marketing departments around segmentation and customer sequencing, but then they've established a clear frame to start applying AI and machine learning to improve that result. So we're excited about it.
But it also breaks the problem down. It's a much smaller problem, right? You're not trying to solve the whole thing with a single AI project. And if you look at, like, Tom Davenport's new book, he's done a lot of research on all these big moonshot AI projects, and they mostly don't work. What works is lots of smaller AI projects. So break the problem down, then solve the problems one piece at a time.
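The insurance example above, one shared cross-sell decision combining eligibility rules, segment analytics, and a propensity model, reused by both the agent application and the customer portal, might look roughly like this. All product names, segments, and propensity values are invented for illustration; the real client's models would replace the stand-in tables:

```python
# Hypothetical shared cross-sell/upsell decision service.
# Every name and number here is illustrative, not from the client.

ADD_ONS = {"auto": ["roadside", "rental"], "home": ["flood", "jewelry"]}

def eligible_add_ons(owned):
    # Rules-based: add-ons go with products you already own,
    # and we never offer something you already have.
    offers = []
    for product in owned:
        for add_on in ADD_ONS.get(product, []):
            if add_on not in owned:
                offers.append(add_on)
    return offers

def propensity(add_on, segment):
    # Stand-in for the ML propensity ("likely to buy") model trained on
    # behavioral data; here just a fixed table with a default score.
    return {("rental", "family"): 0.6, ("roadside", "urban"): 0.5}.get(
        (add_on, segment), 0.1)

def next_best_offer(owned, segment):
    # The same decision logic serves both the agent app and the portal:
    # rules filter the candidates, the propensity model ranks them.
    offers = eligible_add_ons(owned)
    ranked = sorted(offers, key=lambda a: propensity(a, segment), reverse=True)
    return ranked[0] if ranked else None

# Agent application and customer portal both call the same function:
print(next_best_offer(["auto"], "family"))  # -> rental
```

Because the decision lives in one place, improving the propensity model with new data sets upgrades every channel at once, which is the reuse James describes.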
PS: Well, you certainly touched on it but let's paint a bigger picture. What are the biggest mistakes you've seen people make and how do you avoid them?
JT: The biggest single one is they try and replace the whole decision with a single AI. And this has two effects. First, it makes it a really high bar, right? And secondly, even if you can clear the bar, even if you could build an AI that would take a claim as input and say, pay this claim, don't pay that claim, someone's going to say, well, why should I pay James's claim and not pay Peter's claim? You know, some of the AIs are getting better at explaining themselves and everything else. But the thing is, AI is still probabilistic, right? So what if the AI says, well, I'm 99% certain James's policy is not in force. But what do you mean, 99% certain--either this policy is in force or it's not.
This is not a probabilistic issue if you're an insurance company, right? And so, not only are the problems very big and hard, but even if you solve them, it's not obvious you can get your business users to adopt them. And so what we found is that breaking the problem down and solving smaller problems means that now you've got these little bits of AI, each of which can explain what it did more reasonably, because it's a smaller thing that it did. And you can integrate that with a sort of overall discussion of--and therefore I decided to do X, right?
There's the sort of traditional digital decisioning logging of all of that, wrapped around these analytics. So you're not quite as black-box. I wrote a blog post where I said, "Don't bite off more AI than you can trust." Build small AIs. You know, okay, I can trust this AI; it's doing a good job of predicting this or predicting that, and I've got a well-defined, well-logged framework wrapped around it. Well, now I don't worry so much about the black box of the AI, because I know exactly what I did with it.
PS: That makes sense. Now, for our listeners, what would you say is the absolute key takeaway? If you can boil it down to one key takeaway, what would you say that you really want people to remember from this podcast?
JT: AI is a technology, and technology-led projects have a more or less zero percent success rate in enterprises. No technology out there has ever succeeded in transforming a company through technology-led projects. Data warehouses didn't do it, data didn't do it. You have to lead your project with a business problem, there has to be a business value that you are going to achieve.
If you are applying AI, there are two candidates. You can improve the interactions with your consumers by giving them a language based interface, that's the language kind of AI. Or you can use AI to improve a business decision that you are trying to automate. So if you don't know which business decision you are trying to automate and you don't really understand how that decision is supposed to work, how you need it to work, you are not going to succeed in applying AI to it.
So you have to begin with a business value statement. You have to understand what better looks like. And you have to understand which decision you are going to make different. Because if you don't change your decision making, you're going to do the same thing, the same way, and you will not get a better result. You have to know where to begin.
So that's, for me, it's always put the decision first, begin with a decision. You have to understand that decision making, otherwise, as I like to say, you might as well pile the money up and set fire to it because at least you'll be warm.
PS: Cool. Absolutely fantastic information, James. I knew this was going to be an excellent breakdown of exactly what I wanted to know about this stuff. So this is Peter Schooff of Data Decisioning speaking with James Taylor of Decision Management Solutions.