Many executives see machine learning as the bridge to digital transformation. And they're right, as long as they're willing to put in the work. Work is the key, because AI-as-machine-learning does not happen by magic.
That's good news for two reasons! First, as a business executive, you are essential to making machine learning work. And the work that machine learning needs is all about business and engineering context, which is exactly what you know and everything you've worked hard to master. Second, when you do get machine learning to work, that's a competitive edge. If it were easy, it wouldn't be much of a competitive edge!
So how can you capture the context that machine learning needs? That's what this podcast is all about. It's all-too-rare insight into the secrets of machine learning success, from two leading practitioners who do this every day. [The Editors]
Note: This transcript has been edited for clarity.
Peter Schooff: Welcome to another Data Decisioning podcast. Today, I am thrilled to be speaking with Ryan Trollip and Charlotte DeKeyrel of Decision Management Solutions. Ryan is the CTO and VP of Services for Decision Management Solutions. Ryan has over 20 years in consulting leadership and has been leading decision management implementations for over 15 years. Charlotte DeKeyrel is a decision modeler with Decision Management Solutions. An experienced decision modeler with many international projects already under her belt, Charlotte's background is in math and engineering.
As I'm sure everyone is aware, there has been a massive surge of interest around artificial intelligence and machine learning. Many business leaders see AI and ML as the key to transforming their business. But the reality has been something else entirely with many companies struggling with implementation, ROI, and the famous “black box problem”. So this is exactly what decision management is designed to address and that's what we're going to discuss on this podcast. So, first of all, Ryan and Charlotte, thanks so much for joining me.
Charlotte DeKeyrel: Thank you very much.
Ryan Trollip: Thank you, Peter.
Peter Schooff: As I mentioned in my intro, the wave of enthusiasm for artificial intelligence and machine learning has been building for years — you might say it's hit peak interest. But all this enthusiasm hasn't entirely delivered the expected results, has it?
Ryan Trollip: Not to be too cynical, but I think that there has been an initial wave of hype associated with ML and AI. Some of that is justified, in that there have been advancements that have certainly moved the field forward and made it much more interesting and applicable to certain types of scenarios. But to take the old maxim, for every problem there's a solution that is simple, neat, and potentially wrong: folks think that if it's such a great tool for certain problems, wouldn't it be a great tool for all problems?
We have to apply the right types of technologies and the right tools to the right jobs in these cases. A lot of practitioners and vendors have convinced themselves that it (machine learning) is good for everything, and that's not necessarily the case. But if applied in the right places, ML is certainly very powerful and you can get really good results with it.
Peter Schooff: I definitely agree. And you know, if you've been around tech long enough, you get used to seeing the next big thing every few years. But I would truly argue that machine learning really is the next big thing and will be for a long time. But what would you say are some of the key reasons for this gap between expectations and reality?
Ryan Trollip: I think it's the hammer-and-nail problem that I talked about before: applying it in the correct places is critical to getting the full power of it, but so is not ignoring the knowledge within your organization. Folks tend to start with the data first; they go and look at a whole bunch of data and try to find those nuggets where they can come up with an easy machine learning model. And those aren't always necessarily the best places to start in a business!
So being able to frame out the best places to start is important. Align those places with the business objectives rather than simply mining the data. And that takes interacting with subject matter experts. It takes interacting with business leaders and understanding the knowledge within the organization and the context in which you're applying those decisions, because you're trying to move the needle on some sort of KPI. That KPI might be increasing profit or reducing risk or whatever the case may be.
You're leveraging decision-making to make an impact in the business. And where does the business want to make an impact? So folks don't always apply it to the areas of business which will benefit substantially.
"Well, we begin with the decision. In reality, only after framing the problem can we figure out where machine learning makes sense."
Peter Schooff: Now how would you say decision management and decision modeling together can empower machine learning to deliver the results that so many folks in the C-suite would like?
Charlotte DeKeyrel: Well, we begin with the decision in mind. In reality, only after framing the problem can we identify the areas where machine learning makes sense. Only by first eliciting and documenting the decision requirements, capturing the decision logic or the rules, and then running actual data through the decision model will you have the best chance of implementing those machine learning efforts successfully.
Starting machine learning too early in the decision modeling process, or not using decision modeling at all, only increases the risk of wasted machine learning effort. A poor start can mean it's the wrong area to put forth the effort, or that you haven't correctly identified the dependencies, or even the data that the machine learning model would actually need to yield helpful results.
Peter Schooff: Right. Now you're speaking to Data Decisioning, and we believe decisioning is truly the bullseye. So now let's take a step back and can you give us a brief history of decision modeling and rules?
Ryan Trollip: We've been at this game for a long, long time. And like with most disciplines, they evolve over time and we learn and adapt and as new techniques and discoveries and technologies become available, we have to sometimes step back and relook at things. Initially, if I go way, way back to the early days of automation, we were looking at decisions as sort of data or stored configurations, if you will, rather than having it in code. So externalizing basic configurations for things and gateways and processes and so forth.
"And we found that the value was really in externalizing those decisions and not so much in the fancy algorithms."
And then those evolved into business rules management systems. I think even at the time folks went towards applying fancier algorithms to that, like Rete and other types of forward and backward chaining. But at the end of the day, what was most useful was just simple sequential logic execution. And we found that the value was really in externalizing those decisions and not so much in the fancy algorithms initially. It was, "Hey, can we get these out where people can see them, make changes to them, and improve them over time in a business-friendly way?"
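As a rough illustration of what that simple sequential rule execution looks like, here is a minimal Python sketch. The rule names and fields are invented, not from any particular product; the point is that each rule is externalized as data, so the business can read and change it without touching application code.

```python
# Hypothetical sketch: externalized rules executed sequentially.
# Each rule is a (condition, outcome) pair held as data rather than
# hard-coded logic, so it can be inspected and changed on its own.

rules = [
    (lambda c: c["age"] < 18,           {"eligible": False, "reason": "under age"}),
    (lambda c: c["credit_score"] < 580, {"eligible": False, "reason": "low credit score"}),
    (lambda c: True,                    {"eligible": True,  "reason": "passed all checks"}),
]

def decide(case):
    """Run the rules top to bottom; the first matching rule wins."""
    for condition, outcome in rules:
        if condition(case):
            return outcome

print(decide({"age": 25, "credit_score": 540}))
# -> {'eligible': False, 'reason': 'low credit score'}
```

No Rete network, no chaining: just a readable list evaluated in order, which is often all the execution sophistication these problems need.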
That added a tremendous amount of value as an initial step. But as we moved on, certain use cases drove more of an analytical approach. For example, in marketing and next-best-offer type use cases, there's a lot of rich data that can be used in analytics. And we started to look at how to leverage that within your typical rules problems. So not just looking at regulatory eligibility or subject matter knowledge, but actually looking at data, at interactions of customers and personas, and coming up with models that would predict, or give some sort of propensity for, a customer buying something or engaging with something. Then we started leveraging that within those business rules management system problems. That added an additional boost to effectiveness, and those use cases became very effective in delivering ROI for companies.
And then the third wave of decision modeling came as more advancements were made in machine learning: more models became available and models became more accurate. We started to look at training and separating models out as APIs that we can call, bringing those decisions back into a central operational system implemented within business rules. So that's how it's evolved over time. We're getting more and more use out of analytical and machine learning type decisions, and I think we're still at the infancy of that.
Peter Schooff: I agree we're at the infancy of an integration of machine learning and decision modeling. As I've watched the development of your messaging, you could almost say I was slightly skeptical at the beginning, but then as you break it down, operational decisions really are fundamental and you really can pull them out. And now with machine learning and predictive analytics, it seems to me the combo could truly be a business superpower.
Now let's go back to AI and machine learning. There's been a lot of talk about one of the biggest hurdles to ML is the black box problem, which is essentially how do they make a certain decision or certain prediction and if you don't know that, how can you trust it? So how does decision modeling help with the black box problem?
Charlotte DeKeyrel: So one of the many benefits to decision modeling and decision management is full traceability, full transparency throughout the entirety of the model. So if you need to know why an outcome was what it was, you can work backwards from the output data through the executed rules to identify exactly what rules fired, and back to the model that details all of the dependencies in the context designed by the experts in the first place.
"And the client said, 'If I can't see what's going on in there, I'm not doing this.'"
Ryan Trollip: Exactly! And a good example of this is when we were on a client site in Asia working on an automation project. We were chatting with the COO of the organization about a decision automation project, and she was very skeptical. She had a reputation for being very averse to technology, because obviously they're the ones who get called in on the weekends, right? So she was very, very skeptical about any new technology. They had recently done a project with an AI firm that came in and did a bunch of machine learning, and they decided not to implement it at all because it was a black box. She said, "If I can't see what's going on in there, I'm not doing this." So she scrapped the whole thing.
So the first thing we did was make sure that when Charlotte went in and modeled that out, she kept full visibility into everything that was going on. We could see every single sub-decision that was executed, and we could also trace the features or variables and how they contributed to the machine learning outcomes. We applied various technologies; there are tools out there from various vendors that let you extract that visibility from even black box models. And when we showed that to her, all of the explanations and details, she was happy, and they moved forward with that particular use case.
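To illustrate the kind of visibility Ryan describes, here is a minimal Python sketch of perturbation-based attribution, one common way to pull per-feature explanations out of an otherwise opaque model. The scoring function and feature names are invented; real projects typically use vendor tooling or libraries such as SHAP or LIME rather than rolling their own.

```python
# Illustrative only: attribute a black-box score to its input features by
# pushing each feature back to a baseline value and measuring how much the
# score moves. The model here is a stand-in; any callable works.

def score(features):
    # Pretend this is an opaque model we cannot read directly.
    return 0.5 * features["income"] + 2.0 * features["tenure"] - 1.0 * features["late_payments"]

def attributions(model, case, baseline):
    """Per-feature contribution: how much the score drops when that one
    feature is replaced by its baseline value."""
    full = model(case)
    result = {}
    for name in case:
        perturbed = dict(case, **{name: baseline[name]})
        result[name] = full - model(perturbed)
    return result

case = {"income": 10.0, "tenure": 3.0, "late_payments": 2.0}
baseline = {"income": 0.0, "tenure": 0.0, "late_payments": 0.0}
print(attributions(score, case, baseline))
# -> {'income': 5.0, 'tenure': 6.0, 'late_payments': -2.0}
```

Shown alongside the decision model's trace of which sub-decisions fired, output like this is what turns a "black box" into something an executive can sign off on.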
Peter Schooff: Fantastic. You know, it's been said that if you automate something good you can get something great, but if you automate a problem, you might end up with a really big problem on your hands. So you really do need to have some transparency into the black box. Now, we've heard you both mention business rules and decision modeling. These are both part of the decision technology opportunity. Can you explain how decision modeling and business rules are different?
Charlotte DeKeyrel: Sure, think about a process, for example in a mergers-and-acquisitions, an M&A, kind of situation. What you would probably do is compare their process with your process side by side to see how they work and whether they'd be compatible. You don't necessarily have to automate it; you just want to see what's going on. Being able to read the process, or read the problem, is a big part of the issue.
Business rules are a part of decision modeling. The difference between business rules derived with and without decision modeling is the organization, the understandability, and the normalization that you can do significantly better when a decision model is involved in the process. Oftentimes, a typical business rule that you see out in the wild is messy and complex: an if-then statement with seemingly random numbers thrown in (apparently "a two is bad"), maybe some eligibility and some complexity or compliance rules, all embedded into one monster if-then-and-or statement. It would take a lot of knowledge to be able not only to read but to understand what that rule was trying to tell you.
Whereas, if you break the problem down into its component bits, you can classify those rogue numbers: a two is bad because it has this business context behind it. The complexity could be understood by the people who understand the complexity side of things, and the eligibility likewise. And in the format of a decision table, as opposed to those ridiculous if-then statement type rules, it's much easier to read, because you've broken down all of the little bits: all the classification, all of the sub-decisions are pulled out. So while yes, there may be more places to look to understand the logic, each individual table is much more readable and understandable, so you know exactly what's happening.
You still read a decision table in a similar kind of manner: you say, if this column and this other column, then I would take this action. Then you take that action and see where it fits into the decision it impacts. So you can much, much better understand a well-organized rule than a non-organized one.
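A minimal Python sketch of the contrast Charlotte describes, with invented columns and outcomes: the same eligibility logic expressed as a small decision table, where each row reads "if these column values match, take this action" and the first matching row wins.

```python
# Invented example: eligibility logic as a decision table instead of one
# monster if-then-and-or statement. Each row pairs named conditions with
# an outcome, so the "magic numbers" get business meaning.

DECISION_TABLE = [
    # conditions (column -> required value)      -> eligible?
    ({"flagged": True},                             False),  # compliance: flagged cases never pass
    ({"risk_class": "high"},                        False),  # the old "a two is bad" rule, now named
    ({"risk_class": "preferred"},                   True),
    ({},                                            False),  # default row: reject if nothing matched
]

def decide(case):
    """First matching row wins, just like reading the table top to bottom."""
    for conditions, outcome in DECISION_TABLE:
        if all(case.get(col) == value for col, value in conditions.items()):
            return outcome

print(decide({"risk_class": "preferred", "flagged": False}))  # -> True
```

There are more rows to look at than in one giant if-then, but each row is readable on its own, and the "why" behind every number is spelled out.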
Peter Schooff: Amazing. It sounds like a way of revealing the important but almost hidden knowledge of how you conduct business. So now one thing we have found at Data Decisioning is that there's no place in a business for generic AI or ML. AI and ML pretty much need to be domain specific and, as you mentioned earlier, Ryan, subject matter expertise is important. So how does the question of domain knowledge work when you bring decision modeling and business rules into the picture?
"Decision models are ideally built in direct collaboration with the subject matter experts themselves."Charlotte DeKeyrel
Charlotte DeKeyrel: Sure. Decision models are ideally built in direct collaboration with the subject matter experts themselves. So this helps to ensure that the dependencies that you've identified are real and the logic reflects how the business should act. And it removes all of the subjective, you know, Agent A would have made this decision, whereas Agent B would have made a different one given the same inputs. So it removes all of those subjective kind of discrepancies and defines some of the best practices. This is how it should be done for the company based off those subject matter experts' experience and knowledge. And by embedding their knowledge and experience within the model and the rules in the first place, you raise the corporate IQ of everyone.
"By embedding their knowledge and experience within the model and the rules in the first place, you raise the corporate IQ of everyone."
So if these subject matter experts were to leave one day, take on a new role and suddenly have no time, or retire, or whatever it may be, then without having captured the innards of their brains, you're lost. Whereas if you document that knowledge and everyone has access to that expertise, everyone around them benefits. So the business becomes less susceptible to fluctuations in staffing, and everyone who is impacted by these decisions knows why they came out the way they did.
Peter Schooff: Well that is a really, really great argument right there. So what is the criteria for judging whether decision modeling is going to be a good match for a specific business problem?
Ryan Trollip: So this is actually easier than it seems. Just to step back a little first: you have to classify the types of decisions within an organization. When you talk about decisions in a business or an organization, everybody tends to think you mean the strategic decisions that executives are making. But those aren't really the types of decisions that we automate. You may create some explainer analytics that will help with those kinds of decisions, but when we're talking about decision automation, we're really talking about the operational-level decisions. So not strategic, not tactical, but operational, transactional type things.
"The more volume you have in a decision, the more value you're going to have automating."Ryan Trollip
So like do I approve a loan? Do I pay this claim? Or do I underwrite this policy? Those kinds of decisions that are day-to-day, high-volume decisions. And volume is the key there because the more volume you have in a decision, the more value you're going to have automating that in terms of just an additive value because you make so many of them every day. That's the first key to assessing the value of decision modeling for any given business situation.
The second key is change. How much does that decision change over time? You don't necessarily have to have a high volume of change to justify automating a decision. But it certainly helps, because when you externalize that decision, whether it be in machine learning or business rules, you create a change lifecycle separate from your typical technology cycle.
Many companies complain about how long it takes to get a change through IT. It's a very common complaint, because IT has to have rigor and structure around its processes to safely deploy this sort of thing. Separating out operational decision models allows for a completely separate change cycle: much faster, and controlled more by the business. It still has all the safety and rigor involved, but it certainly speeds things up.
And then the third key, really the last factor we look at, is visibility. Charlotte mentioned corporate IQ. It's a phrase I like because just modeling by itself, without even doing full decision automation, raises the corporate IQ. Being able to share that knowledge is extremely valuable for training AND also just improving your decisions over time. Those are the three primary criteria, but there are more. But I think if you have at least one of those three, it's enough to look at as a potential use case.
Peter Schooff: From what you and Charlotte have shared here, Ryan, it seems that decision modeling is perfect for continuous improvement as well.
Ryan Trollip: Absolutely.
Peter Schooff: So now you said we're in the infancy of ML and decision modeling and I agree. Can you just give us a little look into the future? What can we expect in the future between ML and decision technology?
Ryan Trollip: OK, let’s get out our crystal ball here.
Peter Schooff: I know predicting the future is hard.
Ryan Trollip: Well, off the top of my head, I think the one major area where we will probably see the most traction is where we already have operational systems that have their decisions abstracted. This is where we tend to get real-world results pretty quickly, because we can quite easily model out and identify where machine learning will make the most impact. We can trace it, we can look at the output from the dashboards associated with those decisions and say, wow, if we plugged a machine learning decision in here we could boost the numbers substantially.
And an example of that would be: say you're underwriting insurance policies but rejecting anybody who has had cancer in the past. Well, maybe you can get more refined than that. If you look at the data, you might find that applicants who have been in remission for two years, with various other attributes, are actually lower risk, and we're just blanketly rejecting them all. So we can underwrite more people if we plug in that decision. And because we already have a decision point, we know how many people we'd be able to add.
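A toy Python sketch of that refinement, with made-up fields: the blanket rule rejects every applicant with a cancer history, while the refined rule accepts those whose remission has lasted long enough. Because the decision point already exists, you can count exactly how many more applicants the refined rule would approve.

```python
# Made-up illustration of refining a blanket underwriting rule.

def blanket_rule(applicant):
    # Original rule: any cancer history means rejection.
    return not applicant["cancer_history"]

def refined_rule(applicant):
    # Refined rule: accept applicants whose remission has lasted at least
    # two years (other risk attributes omitted for brevity).
    if not applicant["cancer_history"]:
        return True
    return applicant["years_in_remission"] >= 2

applicants = [
    {"cancer_history": False, "years_in_remission": 0},
    {"cancer_history": True,  "years_in_remission": 3},
    {"cancer_history": True,  "years_in_remission": 1},
]
print(sum(blanket_rule(a) for a in applicants))  # approvals under the old rule: 1
print(sum(refined_rule(a) for a in applicants))  # approvals under the refined rule: 2
```

Running both rules over the same historical applicants is what lets you say, before deploying anything, how much extra business the refinement would bring in.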
So that's where I see the larger expansion coming. And then also in finding and expanding on different types of models, especially in feature engineering and that area of machine learning. Once tools to automate feature engineering and discover additional models become more pervasive, we can help marry existing decision problems with those models to operationalize them better. I also see a lot of value being extracted today in explainer AI and explainer machine learning to support executive decisions and those types of things. But the real large ROI, because of that transaction volume, is going to come in the operational space, and that's where I think investment will move.
Peter Schooff: Definitely. And, as with your insurance example, an earlier podcast made the point that customer service is basically going to get down to the individual.
Ryan Trollip: Yes, exactly. And one other area that I've been working on, which may become more pervasive, is retraining models. It's not for every use case, but in certain use cases, as your data changes, you have to retrain the models within a certain period of time. What we can do even today, though it's not very pervasive yet, is have the model retrain automatically as the data is coming in. So you have essentially a roll-up of data in something like an HTAP [hybrid transactional and analytical processing] environment, and it's essentially rolling up all of the variables, pre-preparing them, and pretraining the model as the data in the environment changes. So when you actually make that decision, it's always very accurate and very up to date.
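Here is a deliberately simplified Python sketch of that idea (no particular HTAP product is assumed, and a real model would be far richer than a running rate): per-segment aggregates are updated as each transaction streams in, so the estimate used at decision time is always current, with no batch retraining step.

```python
# Simplified illustration: keep per-segment aggregates updated as
# transactions arrive, so the "model" (here, a per-segment propensity
# estimate) is always up to date when a decision is made.
from collections import defaultdict

class RollingPropensity:
    """Per-segment running rate, updated on every new record."""
    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # segment -> [positives, total]

    def observe(self, segment, outcome):
        pos, total = self.counts[segment]
        self.counts[segment] = [pos + int(outcome), total + 1]

    def predict(self, segment):
        pos, total = self.counts[segment]
        return pos / total if total else 0.5  # neutral prior before any data

model = RollingPropensity()
for segment, bought in [("gold", True), ("gold", True), ("gold", False), ("silver", False)]:
    model.observe(segment, bought)

print(round(model.predict("gold"), 2))  # -> 0.67
print(model.predict("silver"))          # -> 0.0
```

The design choice is the one Ryan points to: the rolled-up variables live alongside the transactional data, so there is no separate warehouse-and-batch cycle between new data arriving and the decision reflecting it.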
"Don't be scared of going into a machine learning projects. You can get real ROI from them if you focus on a few key things as you approach it."
You can do that today and there are some folks that are doing that. It's not very pervasive as far as I've seen but I think there's definitely more value there. It's tricky because you need to have the infrastructure in place to be able to have the data ready. And a lot of people have approached data in a different way where they sort of warehoused it and then you have to do a lot of batch processes to roll that up. Having that real-time roll up is a big part of that, right?
Peter Schooff: Yes. And real-time data is becoming extremely important now. This is a huge topic. Nothing's hotter than machine learning and AI right now. And decision management and ML is like a real opportunity for so many different businesses. To finish the podcast, what is the key takeaway each of you want listeners to come away with from this podcast?
Ryan Trollip: For myself, I think: don't be scared of going into a machine learning project. You can get real ROI from them if you focus on a few key things as you approach it. One is making sure that it's aligned with your business objectives; another is making sure you model it out within a decision model so that you can actually operationalize it in a real decision over time. I think those are the key takeaways.
Peter Schooff: Charlotte, would you like to take a stab at it?
Charlotte DeKeyrel: Sure. So again, begin with the decision in mind. Decision modeling helps to frame the problem. Attempting machine learning without understanding your problem and all of its dependencies is just a bag of cats, if you will. If you know what your problem is, you stand a chance of solving it.
Ryan Trollip: And don't ignore your subject matter experts. Everybody looks at data, but you have a gold mine in the subject matter experts in your organization. You need to take advantage of that. There's a lot of value there, because they make a lot of the inductive-type decisions that machine learning is being asked to make.
Peter Schooff: Fantastic. This is Peter Schooff of Data Decisioning speaking with Ryan Trollip and Charlotte DeKeyrel of Decision Management Solutions. The founder of Decision Management Solutions, James Taylor, recently recorded a webinar, Three Things to Maximize Machine Learning ROI, which covers a lot of what we discussed on this podcast and which you should definitely check out. I think this is really an exciting time for you guys at Decision Management Solutions. Thanks so much for joining me on this podcast.
Podcast and Show Notes: Real ROI from Machine Learning? Decision Modeling Is the Answer
Ryan has spent his entire career focused on delivering value through digital automation. He has over 25 years of experience leading business automation delivery, including more than 15 years overseeing and building decision automation services, practices, and software; those practices delivered large, complex decision automation solutions. Ryan is the founder of DecisionAutomation.ORG and CTO at Decision Management Solutions, and formerly headed Enterprise Architecture, Advisory consulting (Business Architecture), and Decision Management practices.
Charlotte DeKeyrel is a lead Decision Modeler with Decision Management Solutions. She has years of experience modeling decisions for automation all over the world in a wide variety of industries ranging from insurance to military planning. Charlotte has a background in engineering, a degree in Mathematics, and an analytical outlook that helps her to approach problems from a logical standpoint while always on the look-out for opportunities for improvement.