Read the transcript of my podcast with James Taylor, the CEO and Principal Consultant of Decision Management Solutions. James is the author of the book Digital Decisioning: Using Decision Management to Deliver Business Impact from AI. In this transcript, we discuss how companies need a ruthless focus on decisions in the age of AI and machine learning, how decisioning is often the one thing missing from a company's digital strategy, and finally, how, with all these new technologies, if you're not focusing on solving a business problem, you're creating one. It follows:
Peter Schooff: Welcome to another Data Decisioning podcast. This is Peter Schooff, editor of Data Decisioning, and today I'm very pleased to be joined by James Taylor, the CEO and Principal Consultant of Decision Management Solutions. James has been pretty much the top decision guru for as long as I can remember, and he's the leading expert in how to use decision modeling, business rules, and machine learning to deliver Digital Decisioning. James has a book out, “Digital Decisioning: Using Decision Management to Deliver Business Impact From AI”, which is a great title, and it's exactly what we're going to discuss today. James, you're the first person to do a second podcast on Data Decisioning. So thank you so much for joining me.
James Taylor: Well, thank you very much for having me back, Peter.
PS: As I said, your latest book is titled Digital Decisioning. Tell me about it. I think it has a pretty interesting story behind it.
JT: It's a new book, but it's based on an old book. Seven or eight years ago, I wrote a book called “Decision Management Systems”, which was about trying to get people to think about how to use business rules, predictive analytics, and data mining to start to automate decision making, and to really focus their thoughts on how they might embed those kinds of things into a more transactional context. I was struck, about a year ago, by how much of the advice in the book was still current, still a best practice, but how out of date much of the terminology was. And also by how much people have moved on, and how willing people are to think about being data driven these days. Data-driven decisioning has clearly established itself as a best practice, compared to how willing people were to digitize their businesses a few years back.
And so the book is essentially a rewrite of that book, saying, "Okay, we've got all this best practice, we know it all works. It's continued to work over the last seven or eight years. We've done dozens of projects and worked with thousands of decisions in that time, and it definitely works." You know, there are some minor updates, but fundamentally it's the same advice, now integrating the way people talk about machine learning and the way people talk about AI. I'm really trying to bring that expertise, that best practice, to bear on modern machine learning and AI oriented systems.
PS: Well, so that leads right to the next question. And so here we are in the digital age, exactly how has decisioning changed would you say?
JT: Well, I think one of the first things is that being data driven is taken for granted. It wasn't that long ago, we were still arguing about the relative value of gut decisions and being data driven and trying to persuade people that being data driven was a good idea. I think most of us would say, we won that argument already. Now it's about how! What exactly are the boundary conditions for that and so on?
The second thing I would say is that we have been furiously digitizing our data and our business processes over the last, you know, five years. And for many organizations, really the bit that isn't digitized now is their decision making. You know, they've got digital data, they're storing it all, they’re storing digital content, they're using digital processes to move it all around, and then we're still using people to make all the decisions. So we have one missing piece in our digitization strategy.
And then I think the third thing is we have this whole misuse of the phrase AI. There's a tremendous abuse of the phrase AI, but in particular an assertion that AI must mean machine learning and deep learning, that nothing else counts as AI, and that you have to throw out everything you know and replace it with AI, all of which is complete nonsense. It's much more nuanced and more complex than that. It's another tool in our toolbox. The digital age enables some of these things, but we don't have to forget everything we already know. So I think there's a lot going on in the last few years, but it's really put us in a position to make digital decisioning happen at more companies.
PS: Agree completely. Now, I could have asked this question eight years ago, but how does a company break down the decisions they have to make in their organization?
JT: Well, I think there's a couple of things. One of the first things you have to do is recognize that there are lots of different kinds of decisions in an organization. There's a tendency, as Peter Drucker once famously said, to think that only executive decisions matter. And I think that's, as he said, a dangerous mistake. So one of the first things you’ve got to do is break it down into strategic decisions that are like one-off big executive kind of stuff, tactical, how-do-we-structure-and-run-the-business kinds of decisions, the sort of portfolio level, and then the very transactional operational decisions. How do I deal with this client? How do I deal with this customer right now?
And that, I think, helps to focus people, because the key value proposition for digital decision making, and for machine learning and AI, is really down in those operational decisions. We also find it helpful if people think about micro-decisions, as we call them, which is this idea of finding places in their business where they make the same decision for everyone when they could, in fact, make a different decision for each person. The classic one is the content on your website. Do you have one website for everyone, or do you have a website that responds to the person who's logging in, making a decision, a micro-decision, about them, so that you target them with better content and everything else? I think there is a lot of opportunity for companies to start thinking about not only what are the decisions I know I make, and how do I categorize those, but what decisions aren't I making? Where am I failing to make a targeted, focused decision? I think that's important as well.
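The micro-decision idea James describes can be sketched in a few lines. This is a minimal, hypothetical example, assuming some invented visitor profile fields and content names; the point is only that the decision is made per visitor rather than once for everyone.

```python
# A minimal sketch of a "micro-decision": instead of serving every visitor
# the same homepage content, decide per visitor. The profile fields and
# content variant names here are illustrative assumptions, not a real API.

def choose_homepage_content(visitor: dict) -> str:
    """Return a content variant for this specific visitor."""
    if visitor.get("is_returning") and visitor.get("segment") == "enterprise":
        return "enterprise-case-studies"
    if visitor.get("referrer") == "pricing-page":
        return "free-trial-offer"
    return "default-homepage"  # the one-size-fits-all fallback

print(choose_homepage_content({"is_returning": True, "segment": "enterprise"}))
```

In a real system the branching logic would come from business rules or a model, but the shape is the same: one small decision, made many times, once per person.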
PS: Yeah, definitely. It's utterly fascinating as well. So how can a company know which decisions can be truly automated and how do you know when to have a person in the loop of the decision making?
JT: Yeah, that's an interesting question. We find that people often overcall how often they need to have someone in the loop, number one, and number two, they sort of get it wrong. They think that what the system should do is make a recommendation that the person can override and what we often find is actually, the system can make a perfectly good decision as long as the human gives it certain kinds of inputs that it can't figure out for itself.
The human is better at telling the system that the customer isn't happy. And the fact that the customer is unhappy is a crucial determinant in how the decision should be made. So instead of ignoring it in the automation, throwing up an answer, and then having the person go, "Well, that was a stupid answer, because this customer is unhappy," go ahead and ask the person: is the customer unhappy? And if they say yes or no, then use that as part of the decision making. So we find that there's often a role for humans in decision making, but it's often not this supervisory, "you make a suggestion, I'll override it if I feel like it" kind of thing.
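The pattern James describes, where the human supplies an input the system can't infer rather than overriding its output, can be sketched as follows. The decision rules, field names, and offer names are all illustrative assumptions.

```python
# Sketch of "human in the loop as an input": the agent answers one question
# the system can't figure out for itself (is the customer unhappy?), and the
# automated decision uses that answer, instead of the human second-guessing
# the system's recommendation after the fact. Rules here are hypothetical.

def retention_offer(customer: dict, customer_is_unhappy: bool) -> str:
    """Decide a retention offer, using a human-supplied sentiment input."""
    if customer_is_unhappy and customer.get("tenure_years", 0) >= 3:
        return "priority-discount"   # unhappy long-term customer: act fast
    if customer_is_unhappy:
        return "service-credit"
    return "standard-renewal"        # no intervention needed

# The agent is asked "Is the customer unhappy?"; the answer feeds the decision:
print(retention_offer({"tenure_years": 5}, customer_is_unhappy=True))
```

The design point is that the human's judgment is an input to the decision, not a veto over it.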
And so we find you have to really understand the structure of your decision making before you can make those judgments. So we encourage people, when we're working with them: "Look, let's understand the decision making first, and let's understand all of it, the automated pieces and the manual pieces. And once we understand all of it, then we can draw a suitable automation boundary to figure out which pieces to digitize, which technologies to use, and make it an integrated whole." This idea that you're going to have a digital decision on one side, and it's gonna throw stuff over the fence to a human on the other side, I think is, you know, just nonsense. It's never going to work.
PS: So essentially start with decision making first, is that what you're saying?
JT: Yeah, exactly. We like to say begin with the decision in mind.
PS: So now AI, the AI that we're talking about, what do you think is the biggest challenge in bringing AI into an organization today?
JT: Oh, man, where to start? That's such a long list. I think there's a couple of things. First of all, when people set up AI teams, AI groups, AI programs, they're being technology led, right? And that's a mistake. Or they say, "Well, we've got to do AI with this data." Well, now they're being data led. And they need to be business-problem led. They need to be focused on actual business problems, and so that's a problem right there. AI teams, when they're assembled, given budgets, and set off to go use AI to do something, are by their nature going to be technology led. And we know from history that companies have a very poor track record when they try to solve problems in a technology-centric way.
I think even when you get past that, AI teams are very focused, as I said, on machine learning and deep learning, these new cool technologies and frameworks, and they're in danger of throwing the baby out with the bathwater. There's tremendous experience building expert systems, using optimization, using scoring and predictive analytics, all of which create systems that, from a business perspective, are artificially intelligent, and therefore that experience ought to be part of it, too.
And then lastly, and I think it follows from some of the other stuff, there is a tendency to go for what Tom Davenport calls in his book "moonshots": we're going to solve some great big problem with AI. And the reality is you're probably not going to, and it's probably not going to work very well. Tom's research suggests that hardly anyone succeeds with these big moonshots. They're very long winded, very expensive, and you're much better off identifying lots of small pieces rather than one big thing. So we find that the way to succeed is to stay really focused on the business problem and the decisions you're trying to deal with, and to mix and match technologies, some of which are what we would now call AI, and some of which are AI technologies with a much longer track record, like business rules or optimization. And then make sure you do this in a way that is all about continuous improvement. You're not trying to shoot for the moon in one go. You want to put yourself in a position to continually improve. So lots of boring, practical advice, right?
PS: That's what we're looking for here. Now preparing for this podcast, I took a really close look at your book. I love this quote from your book, "The most pervasive pitfall for predictive analytics is not technical, but organizational." Would you elaborate on that quote?
JT: Sure. Yeah. And I think it's equally true when you talk about machine learning, or AI, or anything else. You could replace the word predictive analytics with any kind of insight-oriented technology. The book is written for, and we often jokingly and lovingly call our customers sort of big, boring companies. Right? So this is for big, boring companies. We're not talking about born digital startups who are reinventing themselves. They're big boring companies, big established, stable companies.
And being big and stable and established is part of their value proposition. And for those kinds of organizations, the challenge with predictive analytics and machine learning and AI is that they have policies, regulations, and customer expectations that they can't junk. They can't throw them out and say, "Hey, the new technology says we shouldn't do that anymore, so we're going to stop," right? It just doesn't work that way. That's just not how those kinds of companies can behave. So when you focus on the technology, how do I make a better prediction, how do I use deep learning, you're ignoring that reality. And that's a mistake.
And the successful AI projects are the ones that embrace that reality and say, "Gosh, this is how we do business today. These are our customer expectations. This is the reality of our lives. How do we use AI to get better at it?" It means lots of small incremental improvements, the kind of stuff that can be adopted by a big organization. Even Amazon. If you read the Amazon annual report, it says, "You know, the kind of improvements being made by AI are often invisible, small, incremental improvements to operational effectiveness." That's where you're going to get the value out of this stuff. And that's why it's an organizational problem and not a tech problem.
PS: That makes total sense. Now, I'm quite interested in predictive analytics. Do you see it becoming ever more important in the future of a company?
JT: Oh, I do. Absolutely. I think it's becoming more and more pervasive. I think that machine learning and newer techniques that make it easier and quicker to build more accurate models, particularly ones that don't require sampling and that can suck in large amounts of data and help you figure out what's predictable, all those things are going to dramatically increase the range of problems for which predictive analytics and machine learning is broadly applicable. And I think it's going to really change the way companies think.
There are a lot of companies out there today who have one or two predictive models, and that's how they think: they think in terms of a few predictive models. And we look at companies, and we look at their problems, and we look at how we'd break the problem down, and we go, "Man, this company is going to need hundreds of models, and that's got to be a corporate-wide capability." It's not something owned by the first group that happened to want a model. It's got to be something you get really good at. You've got to stop thinking about this the way a woodworker would think about making a chair: how pretty a chair can I make? You have to start thinking like IKEA: how do I flat pack hundreds of chairs? Because you've got lots and lots of places where you need to put models into production, and you need to keep updating them all the time.
So I think that's going to really change how people think about predictive analytics. I also think that we have to get IT to think differently. They have this terrible habit of differentiating their systems into line of business applications or operational applications, whatever they call them. Oh, and then we have the separate class of analytic applications. Well, no, you don't, you have to have analytics in your line of business applications. The way the order processing system works, when it accepts an order has to include a prediction of how likely it is you're going to hit the ship date. Well, that's not an analytic application, it's part of your core ERP system. So that mindset, that analytics can somehow be left off in its own little space, has got to go.
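The order-processing example can be sketched to show what "analytics inside the line-of-business application" looks like in practice. Everything here is a hypothetical stand-in: `predict_on_time_probability` represents a deployed scoring model, and the field names and 0.80 review threshold are invented for illustration.

```python
# Sketch of embedding a prediction in a core business flow: order acceptance
# calls a ship-date model directly, rather than treating analytics as a
# separate "analytic application". The model below is a trivial stand-in
# for a real deployed scoring service; all names and thresholds are assumed.

def predict_on_time_probability(order: dict) -> float:
    # Stand-in for a real model call (e.g. an HTTP call to a scoring service).
    in_stock = order.get("warehouse_stock", 0) >= order.get("qty", 0)
    return 0.95 if in_stock else 0.60

def accept_order(order: dict) -> dict:
    p = predict_on_time_probability(order)
    return {
        "accepted": True,
        "promised_ship_date": order["requested_ship_date"],
        "on_time_probability": p,      # the prediction lives in the core flow
        "flag_for_review": p < 0.80,   # route risky orders to a planner
    }

result = accept_order({"qty": 10, "warehouse_stock": 50,
                       "requested_ship_date": "2024-07-01"})
print(result["flag_for_review"])
```

The prediction isn't a separate report someone consults later; it changes what the order system does at the moment the order is accepted.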
And this pervasiveness of predictive analytics, machine learning, I think it's going to be the new reality. And the ones who adopt that quickly and embrace it I think are going to find themselves creating dramatically more value than the people who say, "Oh, well, we have an analytics group. They're off. They're doing their own analytic thing. And here we are in it. We just got to keep doing it."
PS: Yeah, well, that's kind of commercializing the data, isn't it? Or I mean, essentially monetizing the data.
JT: Yeah, it is. I'm always a little bit sensitive about the monetizing-the-data thing, because I hear a lot of people say, "Oh, perhaps there's a new business in our data." You realize you're a big stodgy insurance company, but maybe your data will let you become this cool data-driven company. Well, no, it probably won't. If you're a big stodgy insurance company today, the odds are that you are going to be a big stodgy insurance company tomorrow. "How do I monetize my data by building a new business on it?" is not a great question for companies to be asking. "How do I turn this data into operational improvements, profitability improvements, customer satisfaction improvements that will make me money?" That's a good question. The idea that I'm going to monetize data by starting a new business is much less likely to be successful.
PS: That's a startup question, you know. So for the people listening, if you wanted them to come away with one thing, what's the one takeaway you want to sit in their minds once they're done listening?
JT: I think, really, it's that if you want to succeed with AI, with machine learning, with predictive analytics, it's going to take a really, really ruthless focus on your decision making and your decisions. You can't adopt these technologies and deploy them successfully if you just think about the data, if you just think about the processes, or even if you just think about the people. You have to understand how you make decisions and how you want to make decisions. Because these are decision making technologies. We invest in analytics to improve the quality of our decision making.
And people will tell you that and then you say, "So which decisions particularly were you hoping to improve with analytics?” “Well, just decision making, like generally.” Well no, it won't work. You have to have a list, you have to know which decisions, you have to know how you make them. I cannot overstate how important it is that you really, really understand the decision making you're trying to improve. This stuff really works. But it's not a magic bullet that suddenly will figure out where to fire itself. You have to find the right target and aim.
PS: Well said, James. This is Peter Schooff of Data Decisioning, speaking with James Taylor of Decision Management Solutions. Make sure you check out James' book, “Digital Decisioning: Using Decision Management to Deliver Business Impact from AI”. James, it's always wonderful to check in with you. Thank you so much for spending the time.
JT: Thanks for having me.
Listen to the podcast: The Rise of Digital Decisioning: James Taylor Shows the Way