Summary
Machine learning is a powerful set of technologies, holding the potential to dramatically transform businesses across industries. Unfortunately, ML projects often fail to achieve their intended goals. This failure is often due to a lack of collaboration and investment across technological and organizational boundaries. To help improve the success rate of machine learning projects, Eric Siegel developed the six-step bizML framework, outlining a process that ensures everyone involved understands the whole arc of ML deployment. In this episode he shares the principles and promise of that framework and his motivation for encapsulating it in his book "The AI Playbook".
Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Eric Siegel about how the bizML approach can help improve the success rate of your ML projects
- Introduction
- How did you get involved in machine learning?
- Can you describe what bizML is and the story behind it?
- What are the key aspects of this approach that are different from the "industry standard" lifecycle of an ML project?
- What are the elements of your personal experience as an ML consultant that helped you develop the tenets of bizML?
- Who are the personas that need to be involved in an ML project to increase the likelihood of success?
- Who do you find to be best suited to "own" or "lead" the process?
- What are the organizational patterns that might hinder the work of delivering on the goals of an ML initiative?
- What are some of the misconceptions about the work involved in, and the capabilities of, an ML model that you commonly encounter?
- What is your main goal in writing your book "The AI Playbook"?
- What are the most interesting, innovative, or unexpected ways that you have seen the bizML process in action?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML projects and developing the bizML framework?
- When is bizML the wrong choice?
- What are the future developments in organizational and technical approaches to ML that will improve the success rate of AI projects?
Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
- The AI Playbook: Mastering the Rare Art of Machine Learning Deployment by Eric Siegel
- Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel
- Columbia University
- Machine Learning Week Conference
- Generative AI World
- Machine Learning Leadership and Practice Course
- Rexer Analytics
- KDnuggets
- CRISP-DM
- Random Forest
- Gradient Descent
[00:00:10] Unknown:
Hello, and welcome to The Machine Learning Podcast. The podcast about going from idea to delivery with machine learning.
[00:00:20] Unknown:
Your host is Tobias Macey, and today I'm interviewing Eric Siegel about how the BizML approach can help improve the success rate of your ML projects. So, Eric, can you start by introducing yourself?
[00:00:30] Unknown:
Tobias, thanks so much for having me on the show. So I've been in the field of machine learning for 30 years. I taught the graduate courses of machine learning and artificial intelligence at Columbia University, where I was a professor. Before that, I got my PhD there, and I've been an independent consultant applying machine learning for business use cases for 20 years. Along the way, I founded the now long-running Machine Learning Week, formerly the Predictive Analytics World conference series. It's got a new sister conference, Generative AI World, the first week in June in Phoenix.
And I'm the instructor of a popular, well-rated online course, Machine Learning Leadership and Practice: End-to-End Mastery. My first book, Predictive Analytics, was a bestseller adopted at hundreds of universities. And my new book is The AI Playbook.
[00:01:26] Unknown:
And do you remember how you first got started working in machine learning?
[00:01:30] Unknown:
Well, I got intrigued and fascinated with all the AI concepts as a kid, and I had a pen pal who had a personal computer and was a philosophy professor, a distant relative. But, basically, when I started grad school in '91, that's when I really was like, it's gotta be machine learning. That's the only way for computers to really ramp up on complex tasks. And I've always been focused on supervised learning, really. And as an independent consultant, I've been focused on business use cases, right, where there's a very clear, at least potentially very clear, return on investment. You can really measure the effectiveness, the way it improves operations.
You know, my father's a doctor, but I was like, in medicine you can't really measure the false negative and false positive costs. I mean, how are you supposed to balance the two? It's subjective. But things are oftentimes really concrete in business applications. And I love the idea of making business value out of what is really the coolest kind of technology, which is learning from data to find patterns that hold in general over cases never before seen. So that's sort of the evolution of my interest.
[00:02:36] Unknown:
And as you mentioned, you just recently wrote a book, The AI Playbook, where you are espousing this process and technical framework for going from idea to delivery with machine learning projects. You've coined the phrase BizML to encapsulate that idea. I'm wondering if you can give a bit of context about what that is and some of the story behind how you came to that formulation.
[00:02:59] Unknown:
Yeah. It's the 6-step practice, the playbook, the paradigm, the framework for running machine learning projects end to end so that they successfully deploy. You know, today, there is no established, standardized practice for running these projects that's well known to business professionals, and it's really a business practice. In fact, in general, business professionals don't even realize there needs to be a specialized, very particular kind of business process that's well understood and collaboratively executed in order to make sure these models actually get deployed. And in fact, most new machine learning projects fail to achieve deployment, as has been shown by research I participated in, where we surveyed data scientists. I did that in partnership with Rexer Analytics.
A year prior, we did it with KDnuggets, with similar results. Data scientists make the model, but it doesn't get green-lit. The stakeholders get cold feet. And there are lots of executive surveys showing the same thing. Recently, IBM came out with the results of their industry research showing that the average return on investment for AI projects is basically nothing, very low, in that it's actually lower than the cost of capital. So BizML is the buzzword to try to help evangelize the understanding of just the need for something like this. Right? And it consists of 6 steps. And since your listeners are pretty technical, I can just say, hey, look, this actual breakdown of 6 steps is nothing terribly new, to the degree that people actually ever formulate something like this. It's just that no formulation's caught hold. So let's give it a nice buzzword.
And probably much more importantly, let's establish an understanding that business professionals have got to ramp up on a semi-technical understanding so that they can collaborate deeply, end to end, across the steps.
[00:04:51] Unknown:
And as you mentioned there, this is in contrast to the way that a lot of teams might typically approach the delivery process of a machine learning project, where they say, oh, I made this model. Now I just need to put it in production and everything's great. I'm wondering if you can give some contrast to the way that you think about this procedural step of going from: I've come up with the idea, I've run it by the business leaders, I've gotten buy-in, now I'm going to actually do the technical work of development and deployment; and some of the ways that that might contrast with the way that your typical technical ML team might natively approach the process of deploying a machine learning model into a production context.
[00:05:38] Unknown:
Yeah. I mean, you've basically outlined why we need BizML or something like it. You've described the phenomenology, the syndrome that repeatedly happens. The way teams natively conduct it is failing routinely. When you get buy-in from the business side, it's not real buy-in, because they didn't quite get what they're green-lighting. They didn't quite get exactly how probabilities are going to alter, and therefore improve, large scale operations, the very operations they need to protect with their life, because that's what's making the company stay alive at this particular moment.
They don't understand yet how you quantify the performance of a predictive model. They may understand, sure, it doesn't predict like a magic crystal ball, but that predicting better than guessing is oftentimes very valuable for improving large scale operations. But they don't know what that means concretely. How do you translate that concept into metrics that mean something to business stakeholders? You know? Not just lift and gains and precision and recall. Even accuracy is just a technical metric. We gotta get people on the same page as far as business metrics like profit, ROI, number of customers saved, number of dollars saved.
In fact, that metrics part is the second of three main semi-technical things that we've gotta get our business stakeholders, our clients, our bosses on the same page about, that everybody has to understand: what's predicted, how well (that's the metrics), and what's done about it. The what's predicted and what's done about it, that defines the use case, but that's literally only the first of 6 steps. The first step is to define the last step, which is deployment: that use case. But then, the way I formulated it, step 2 is to define the prediction goal. That is, get much more specific about the dependent variable. Not just, hey, we're gonna predict customer churn, but we're gonna predict, for all customers who've been around at least 4 months, who's gonna decrease their spend by at least 80% in the next 3 months and not increase their spend accordingly in another channel, because that doesn't count as a defection, etcetera, etcetera. All the caveats and qualifiers that fully define what's gonna end up being an actionable, pertinent dependent variable for you to model on. That's just semi-technical. Right? It's detailed. It's the kind of thing business stakeholders generally don't tend to get involved in, but they must.
Those are business-driven decisions. They're based on pragmatics, and if you're gonna integrate probabilities and have those probabilities directly, actionably alter large scale operational decisions, we'd better get on the same page about exactly what they're probabilities of. What are you predicting? So in that end-to-end practice, basically, the first 3 steps are preproduction, and they revolve around those details: the deployment goal, which is the main pair, what's predicted and what's done about it; the prediction goal, where you're getting much more specific about what's predicted; and then metrics, how well it predicts, both business and technical metrics. And then the last 3 are what everybody already knows if you're a data engineer or data scientist: prep the data, train the model, and deploy it. Yes, you need to manage and monitor thereafter, update and refresh the model, and such. But let's actually get them deployed. Let's define a practice that culminates with deployment, that's understandable and executed with business stakeholders.
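For technical listeners, step 2's output is essentially a label definition. Here is a minimal sketch, in pandas, of what the churn prediction goal described above might look like; all column names, file names, and date windows are hypothetical illustrations, not anything prescribed by BizML itself:

```python
import pandas as pd

# Hypothetical monthly spend table: one row per customer, month, and channel.
# The file, columns, windows, and 80% threshold are illustrative assumptions.
spend = pd.read_csv("monthly_spend.csv")  # columns: customer_id, month, channel, spend

# Qualifier: only customers with at least 4 months of tenure.
tenure = spend.groupby("customer_id")["month"].nunique()
qualified = tenure[tenure >= 4].index

# Baseline window vs. outcome window (3 months each, per the example).
base = spend[spend["month"].between("2024-01", "2024-03")]
future = spend[spend["month"].between("2024-04", "2024-06")]

base_total = base.groupby("customer_id")["spend"].sum()
future_total = (
    future.groupby("customer_id")["spend"].sum().reindex(base_total.index, fill_value=0)
)

# Churn label: total spend (summed across all channels, so a shift to another
# channel does not count as a defection) drops by at least 80%.
label = (future_total <= 0.2 * base_total).astype(int)
label = label.loc[label.index.intersection(qualified)]
```

Every line of that label definition encodes a business decision, which is exactly why stakeholders have to weigh in on it.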
[00:09:07] Unknown:
I think it's also interesting to dig into that question of deployment because oftentimes, you say, oh, I've built this model. It can predict this thing based on these input values, but that doesn't necessarily encapsulate all of the other work that's necessary to productionize the model or the different systems that need to be able to interoperate with that model to get the desired outcome. And I'm wondering what are some of the shortcomings that you have seen in that conceptualization of what deployment even means.
[00:09:38] Unknown:
Yeah. I mean, deployment, you know. So I start out with an example from my own consulting. Early in my consulting career, I had a large, successful online dating company as a client, and I convinced them: hey, you should have me do your churn modeling, and we can retain a lot of the customers, the paying, premium-level customers, that you're losing. They're like, sounds good. I mean, they had a lot of cash, they had a lot of revenue going on; it was a small company. So then I put it in the PowerPoint, and I was like, look, this is the lift of the model, and here's a profit curve for targeting a retention marketing campaign, you know, giving a discount. And that costs something, because people who were not gonna cancel are gonna consume the discount anyway. I did that arithmetic, showed the graph, churned it out in Excel, because technical tools don't let you do profit curves in general.
And then I got the typical response. Right? I mean, the power is stuck in PowerPoint. They're like, oh, that seems interesting. You want us to do something about it? You want us to start a new initiative, a new operation, a new marketing campaign? They're up to their eyeballs just keeping operations alive. I had pitched them on a general idea, and they had accepted without really getting schooled by me on what that implementation would actually involve if it were gonna actually achieve value. And that's the general syndrome that, to this day, is repeated over and over again, to one degree or another.
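A profit curve like the one he describes is straightforward to produce outside of Excel. A minimal sketch, with entirely invented economics (the retention value, offer cost, and save rate below are assumptions, not figures from this client):

```python
import numpy as np

# Invented economics for illustration; none of these numbers are from the episode.
rng = np.random.default_rng(0)
n = 10_000
p_churn = rng.beta(2, 8, n)            # stand-in for a model's churn probabilities
will_churn = rng.random(n) < p_churn   # simulated ground truth

value_saved = 200.0   # revenue kept per churner who is retained (assumption)
discount_cost = 20.0  # offer cost, consumed by everyone targeted (assumption)
save_rate = 0.3       # fraction of targeted churners the offer retains (assumption)

order = np.argsort(-p_churn)           # target the riskiest customers first
cum_churners = np.cumsum(will_churn[order])
targeted = np.arange(1, n + 1)

# Profit as a function of how deep into the ranked list the campaign goes.
profit = save_rate * value_saved * cum_churners - discount_cost * targeted
best_k = int(np.argmax(profit))
print(f"Target the top {best_k + 1} customers for peak profit of {profit[best_k]:,.0f}")
```

The curve's peak is the business decision: how far down the ranked list the campaign should go.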
[00:11:10] Unknown:
Another framing of this that I think is interesting to explore is: what are the contexts in which this BizML framing needs to be adopted, and what are the technical scopes of the types of ML or AI projects that benefit from this process? Where maybe, if you're doing scientific research and you just wanna build an ML model to see if it can do some sort of prediction, or maybe for, like, protein discovery, you just need an AI model that can churn through a bunch of probabilities and give you some sort of result, maybe that doesn't need as much of the process upfront. And I'm wondering if you can just give some of the caveats of when and how to bring this formulation to bear.
[00:11:57] Unknown:
Yeah. I mean, this is a business practice that's meant to leverage machine learning, to capitalize on it in an enterprise commercial deployment, or it could be other kinds of organizations. It can be health care or political campaigns. The Obama campaign in 2012 very famously used uplift modeling, persuasion modeling, and they got it deployed very much. But, yeah, I'm not handling anything and everything with BizML that one might refer to as AI. So we're really talking about predictive use cases, right, which are the established use cases over decades, and it's where most of the money still is, where the proven wins can be and often are. It's older than, let's say, generative AI, but it's not old school. The value is still largely on tap, especially with all these project failures. And very much, the failures where you don't get deployed are much more common than the failures where you prematurely deploy. There's Zillow; there are some examples of that, but that's not the main syndrome that we're facing these days. But, yeah, if you're trying to capture patterns and then inspect them with human eyes to get insights, or you're performing research and development, that's different. I'm talking about getting it deployed, implemented, and integrated at an enterprise level.
I'm talking profit.
[00:13:21] Unknown:
And as you mentioned, you have been working in this industry for a long time, and you've been working in consulting a fair bit. I'm wondering, what are the aspects of your personal experience in this space that have been most formative in your conceptualization of this BizML process, and some of your experiences of putting it into implementation that have helped to reinforce the ideas that you put out in the book?
[00:13:48] Unknown:
Yeah. I mean, I just feel like I see projects fail to reach deployment, repeatedly. And I included that personal experience of my own consulting with the online dating site that I just mentioned in the book, as well as a successful project for targeting ads; that's also included in the book, and I've got positive experiences from UPS and FICO. In general, that online dating client case example was from a long time ago, and I just don't feel like the world has changed that much, that the industry has changed, in the business conception, the way it's sold to the business, and the way it's conducted from a business and operational standpoint, from an organizational standpoint. It's been a long time coming. You know? I mean, I'm talking to a technical audience here, so a lot of people may have heard of CRISP-DM. Have you heard of CRISP-DM?
[00:14:39] Unknown:
I think I've come across the phrase, but I forget the specifics.
[00:14:44] Unknown:
Yeah. So that's the problem. That was, by far, the only one that gained some traction as far as formalizing a business organizational practice. It stands for Cross-Industry Standard Process for Data Mining. It's 30 years old, and it's called data mining. It's old. Right? That's outdated. And it never caught hold. It definitely didn't succeed, in the sense that business professionals have never heard of it. And, of course, this is a business practice. They're the ones, first and foremost, who need to know it if they're going to be running ML projects through to successful value capture and deployment.
So it just becomes clear. I mean, I've been in the field 30 years. It just becomes clear: this is a long time coming. Somebody's gotta do it. We gotta evangelize not just the next best thing technically, but the understanding and organizational practice, to really improve what right now is a somewhat dismal success rate of reaching deployment. I mean, the industry's gotten really good at sweeping failures, in this sense of cold feet and no deployment at the end of the project, under the rug, largely because the AI hype kinda helps with that. It's not sustainable. The executives are catching wise. The industry reports are coming out, and they may not be quite as loud as the buzz and hype right now, but in general, the stuff's gonna hit the fan. We gotta make this change. Now, it's not just my own practice, though. I've been operating in the industry as a thought leader now for so long; I've been running the conference series since 2009.
The majority of example case studies from both of my books, including the first one, Predictive Analytics, which has a color table of 147 mini case studies of deployed machine learning, come from the conference. And now we have a whole operationalization, business, BizML track in the conference for this very reason. So with that, I've basically been networking, having access to so many other people's experience, and the theme just comes back over and over again: let's get our business stakeholders on the same page. Let's ramp them up on the concrete understanding of what defines a project, which is what's predicted, how well, and what's done about it.
It's a semi-technical understanding they must ramp up on, so that they can then collaborate deeply, weigh in on the project specifics end to end across the practice, and then make an informed, hopefully positive, decision about whether and exactly how to deploy the model.
[00:17:27] Unknown:
So, in order to put this process into practice, who are the personas that need to be involved in the overall project scope? Where largely it has been a very technically oriented team, the data scientists, the chief information officer, the chief data officer, who manages the project scope and the execution, who are the people who are often missing from those conversations? And who are the personas that need to be involved from day 1 through to finalization of the project in order for it to stand a higher likelihood of success?
[00:18:06] Unknown:
Well, the project has a client. Right? And the customer is always right. You know? I mean, that's the part that's missing. And the customers don't think they're right, because they think they don't understand it. But what they don't know is that what they don't know is not hard to know. They don't need to learn how to change a spark plug. They just need to learn how to steer: momentum, friction, the rules of the road, the expectations of drivers. To run a machine learning project, it's the same thing. You need to get a certain level of expertise. So what are we talking about? We're talking about the stakeholder. Right? If the model's meant to be deployed, and in general with enterprise projects they are (otherwise, what's the point?), you're trying to improve operations. What are those operations?
Who runs those operations? Who's in charge of the effectiveness and efficiency of those operations: targeting marketing, fraud detection, financial risk management, where to drill for oil, which satellite to inspect because it's potentially running out of battery? These predictive use cases are where you stand to improve pretty much all the main large scale operations that we conduct as organizations. But someone is in charge of that, and it's not a data scientist. Right? It's your client. It's your boss. It's your internal client. It's that stakeholder, and there are other stakeholders that are gonna be involved one way or another. Anybody and everybody involved in, or touched by, the project needs to get a basic sense. Right? And this is stuff that's shrouded in quasi-secrecy because we keep it so technical, but it can't be.
We need to unveil this, and, you know, it's a lot more understandable, accessible, interesting, and exciting than high school algebra. And they need to see that and, you know, read a book on it. In fact, I'll pitch my book, which, along the way of covering those 6 steps of BizML across 6 main chapters, makes what is actually the main contribution, ultimately, hopefully, of this book: ramping up the business reader to understand the semi-technical know-how and background that they need in order to participate.
[00:20:13] Unknown:
And, organizationally, a lot of times the ML team might be siloed from the rest of the business because of the fact that what they do seems to be such a black art that nobody else can understand it. I'm curious, what are some of the ways that businesses need to think about the organizational structures that need to be addressed in order to bring these people together more natively, to ensure that there is better collaboration, and that machine learning and AI projects are actually more likely to be conducive to the overarching goals of the business?
[00:21:00] Unknown:
Yeah. I mean, you know, first of all, the first step in solving a problem is recognizing that you have a problem. And so what you said, this concept that people are seeing it as a black art, that's the problem. And as soon as they realize that, the solution starts to spell itself out. It depends on the organization, but you need an analytics translator, and/or whoever's actually running the project, whether they have a data science background or not, needs to be in touch with the value proposition. You know, keeping it as a black art, and keeping it as the coolest, bestest rocket science ever, which it is, by the way, and assuming that therefore it's automatically valuable: that's the misunderstanding.
It's only valuable if you act on it and integrate it, if it actually changes, and thereby improves, operations. And recognizing that is basically getting away from hype. The degree to which people see it as a mysterious black box is the degree to which it's hyped, in the sense of mismanaged expectations and overpromising. And the antidote to hype is to focus on concrete value. So this has to be reframed as an operations improvement project, one that critically and necessarily uses machine learning and a predictive model. But first, it's a business project, and second, it's a data or analytics or tech project in general. So it is a mind shift, but not the craziest one ever. It's just like, let's get concrete, realistic, down to earth, tangible.
Let's get real, man. That's it.
[00:22:29] Unknown:
As you mentioned, the business stakeholders don't need to know all of the low level technical details about whether you're using random forests or gradient descent. They just need to know: this is the impact that the model is going to have, and these are the pieces of information that it needs to have access to. I'm curious, what is an appropriate level of detail and depth to present to the business stakeholders in order to ensure that they can communicate most effectively about the problems that they see and the solutions that they would like to see applied to the project?
[00:23:05] Unknown:
Yeah. Great question. I mean, listen, they just need to learn what's predicted, how well, and what's done about it. So, literally, the ins and outs of a model: literally, the input and the output. What's it predicting? What's the meaning of a probability? You know, it's not a magic crystal ball. It's not gonna make highly confident predictions in general. So putting a number on it, that makes sense. The idea that it's a number between 0 and 100, or 0 and 1, that's fine. That shouldn't take too long to comprehend. And yet the inside of that model, the inner workings, whether it's a random forest: no, they don't need to get involved. And I don't know where the spark plug is in my car, by the way, really.
But I know the concept of internal combustion, and it's cool, and I liked learning about it as a kid. People should learn about ensembles like random forests. It's a really cool scientific thing, and in 15 minutes they can learn something really cool about, not the wisdom of a crowd of people, but the wisdom of a crowd of models. And it's delightful. Right? They should get some sense of it. But operationally, yeah, they just need to learn, and this isn't the rocket science part, how to use or capitalize on the rocket science, which ultimately just comes down to predictions, otherwise known as probabilities.
So, probability of what? How is that gonna be used, exactly and precisely? How are you gonna translate from a prediction into an action, which is often a very simple, direct translation? That's not the scientific part, but it's pragmatic. And where do you draw the line? Am I going to hold 0.2% or 0.3% of all transactions on the basis of potential fraud? Right? You have to decide exactly where you draw that line. That's a business decision that's gonna be driven by the trade-offs that the model evaluation shows. So you need to be bridging that gap. If a stakeholder is not looking at any numbers, then they're not doing their job, because they're in charge of improving a large scale operation.
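That hold-rate decision can be laid out for a stakeholder directly from the model's scores. A minimal sketch, with entirely hypothetical fraud economics (the loss and review-cost figures below are assumptions, not anything from the episode):

```python
import numpy as np

# Hypothetical fraud economics for illustration only.
rng = np.random.default_rng(1)
n = 100_000
p_fraud = rng.beta(1, 200, n)          # stand-in for a model's fraud probabilities
is_fraud = rng.random(n) < p_fraud     # simulated ground truth

loss_per_fraud = 500.0   # average loss if a fraudulent transaction goes through
review_cost = 4.0        # cost of holding and manually reviewing one transaction

scores = np.sort(p_fraud)[::-1]
for hold_rate in (0.002, 0.003, 0.005):        # hold the top 0.2%, 0.3%, 0.5%
    threshold = scores[int(hold_rate * n)]
    held = p_fraud >= threshold
    caught = int((held & is_fraud).sum())
    savings = caught * loss_per_fraud - held.sum() * review_cost
    print(f"hold {hold_rate:.1%}: catch {caught} frauds, net savings {savings:,.0f}")
```

Where exactly to draw the line is the business call; a small table of trade-offs like this is what the stakeholder needs to see.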
[00:25:08] Unknown:
And so, once you have defined the scope of the project and you have everybody involved, what are the personas that you see as being best suited to owning the overall delivery of the project? Who should be the leader of an ML project? Is it the technical lead? Is it the business stakeholder? I'm just wondering, from a management and visibility perspective, how best to ensure that all of the necessary steps are getting done, that the appropriate requirements are gathered, that the technical approach is sound, etcetera?
[00:25:43] Unknown:
Yeah. I'm actually agnostic about that. It depends on the context and situation, the germination of the idea, the organizational structure, and it could be somebody with more of a business background or more of a technical background. It depends. The way I put it in the book is, the answer to that question of who should lead the project is: it should be you. Right? Okay, it doesn't have to be you if you don't want to. But if you want something done well, do it yourself, or at least ensure someone else is. The point is that this practice must be followed, that we need to get a collaboration between people who are quants, you know, data scientists, data science experts, modeling experts, and business leaders who have their eye on the ball, who understand the business pragmatics, the constraints in deployment, what metrics matter.
It's gotta be a collaboration that unifies those two sides, where there's generally such a big gulf between them. So that's the deal. Somebody's got to lead a project that's going to bring the sides together, whichever side that person may come from. People tend to be a little bit more on one side or the other, probably a lot more on one side than the other, in general. But either kind can lead the project.
[00:27:08] Unknown:
And talking about the organizational patterns, what are some of the ways that you have seen them hinder the progression of an ML project, and some of the anti-patterns that teams should watch out for and be aware of as they go through these different steps from idea to delivery?
[00:27:26] Unknown:
Yeah. I mean, I think that there's a misconception that, hey, look, we defined the business objective, so we're all good. You know, I know the technology is meant to have business value, and I've defined it: we're gonna do churn modeling in order to target a retention marketing campaign. But that's literally only the first of 6 steps. And that's exemplified by my story with the online dating site, where that was basically what I pitched, and they said, go for it. But in fact, you gotta get much more concrete and collaborative across all the definitions, at a real level of detail. Not the detail about exactly which modeling method you're gonna use, but the definition of the dependent variable. Which, by the way, even forgetting the business practice, obviously you need to define the dependent variable, the thing you're trying to predict, very carefully and very precisely and specifically, in a way that's aligned with the intended deployment and how that deployment is meant to pursue particular business objectives and business metrics.
That's not standard process. It's kind of an ad hoc thing, the idea of really fully fleshing out all the aspects, qualifiers, and caveats that fully define the dependent variable's definition. That's a whole chapter in the book, and that's sort of a main point. There's a lot of stuff in here which is not big time rocket science that still even the most senior data scientists haven't really been trained on, or been given material to think through and make concrete. So there's just a business pragmatic, realism driven, value focused way of reorienting around how we're leveraging this technology.
It's a culture shift, but I'm hoping that if we rally around just the idea that we need an understood, well defined business paradigm or playbook that must be collaboratively executed, we'll be well on our way to the necessary culture shift that will really help drastically improve the success rate of deployment driven machine learning projects.
[00:29:45] Unknown:
And then, once you have the business stakeholders involved, you have your project defined, and you've started down the path of building the model, what are some of the misconceptions about the work that is involved in building and deploying the models, or about the capabilities of the models once they're defined, that might derail the overall project as you get closer to that deployment step?
[00:30:13] Unknown:
Yeah. I think that there's this problem that I call the accuracy fallacy, which is that we so often rely on accuracy. The metrics have to be concrete, and they also have to be tied to business metrics. That's the only way to get stakeholders to understand what it means, in a concrete, meaningful, specific way, to have value from an imperfect model: to have an improvement to a metric like profit or ROI from a model that is not a magic crystal ball, but does predict better than guessing. When you ask about the misconceptions, it's kinda like there are two layers of fog. The first layer of fog is the general AI hype, especially with the advent of generative AI. Which, by the way, large language models can be used as predictive models. There's a variety of ways that you can get one to say, hey, what are the chances that this social media post is misinformation?
So then you're getting a probability. It's a much bigger model and much more costly, but since it's so language heavy, it might be better for certain tasks like that. But in general, what generative AI does is make first drafts. So we have to understand: okay, look, this thing is not gonna be autonomous, and you need a human in the loop. You need to review everything that it's written. You need to vet it, or every picture that it renders, etcetera. So that first layer of fog is sort of understanding, okay, what are the capabilities now? And I contend that, on the public stage, on the national and international stage right now, the misconception, the false narrative, is that with the advent of these incredible generative AI models that are so impressive and seemingly human like, we're concretely taking a step toward AGI, artificial general intelligence, a computer capable of anything a person can do. And let's call it what it is: artificial humans. I don't think that we have concrete steps in that direction.
I'm not saying that it's technically or in principle impossible, but I think the speculation is the same as it was in 1950. And it's hard to hold those two things in your head at the same time: this thing is incredible, it's so human like, it can deal with concepts around human languages in a way that's meaningful on a certain level and very impressive, and yet the limit on its actual capabilities is very much there. So that's the first layer of fog. The second is: even if you eschew the standard large scale hype that's happening here, and you focus on the concrete value of a predictive use case, and you're like, I'm gonna do churn modeling, or I'm gonna do target marketing with response modeling, or I'm gonna improve my fraud detection by giving the auditors a better pool of more likely fraudulent transactions, or whatever it is, that's literally only the first of 6 steps. It's only the first good step, and there does need to be a lot more concrete, detailed work bringing together the two sides, the business and tech sides. And that's the thing that's missing.
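To see the accuracy fallacy he mentions in miniature, here is a toy sketch (not an example from the book, with all economics invented): on an imbalanced outcome, a model that predicts nobody churns can match a useful targeting model on accuracy while delivering none of the value:

```python
import numpy as np

# Toy illustration of the accuracy fallacy; all numbers are invented.
rng = np.random.default_rng(2)
n = 100_000
churner = rng.random(n) < 0.01        # 1% of customers churn

acc_naive = 1 - churner.mean()        # "nobody churns" model: ~99% accurate, worth nothing

# An imperfect but useful model: churners tend to score higher.
score = rng.random(n) + churner
targeted = score >= np.quantile(score, 0.98)    # target the top 2%

saved = (targeted & churner).sum() * 0.3        # offer retains 30% of targeted churners
profit = saved * 200.0 - targeted.sum() * 10.0  # $200 value saved, $10 offer cost

acc_model = (targeted == churner).mean()
print(f"naive accuracy {acc_naive:.1%}, model accuracy {acc_model:.1%}")
print(f"naive profit 0, model profit {profit:,.0f}")
```

Both models score about 99% on accuracy; only the business metric separates them.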
[00:33:29] Unknown:
In terms of the motivations and goals for the book that you wrote to formulate this BizML framing, what is the overarching impact that you hope for it to have? And in terms of those personas who need to be involved in the overall
[00:33:55] Unknown:
I'm only slightly tongue in cheek there, because I am really intending that, if both sides are gonna collaborate, they gotta speak the same language, get on the same page. And there are a lot of things that data scientists haven't quite thought through yet in terms of organizational process and business value: defining a dependent variable, prepping the data accordingly, because that's where you manifest the dependent variable's definition, and what kind of semi-technical understanding your business stakeholders, your boss, your clients need to get in order to participate.
My hope is that the success rate of machine learning deployments improves greatly. Right now, it's, I don't know, people say it's only 20%. Well, that's actually very hard to measure. But I'll tell you one particular data point: in the more recent round of industry research I participated in with Rexer Analytics, on their data science survey, I was able to convince them to add more questions about deployment success. For machine learning projects meant to incur a new capability, only 22% of data scientists said that their initiatives usually deploy. Across all machine learning projects, only 43% said that.
And this is just one data point. There are a lot of different surveys, but they don't directly measure how many projects fail or succeed. They're sort of, how often do data scientists say their projects usually succeed, which is one degree removed from that. But the fact is, it's pretty dismal. And the fact is, it shouldn't be. I mean, you've got leaders, leading organizations, big tech, a handful of very particular case studies like UPS and FICO that I cover in the book. But at large, across organizations outside of that handful of leaders, you've got routine failure.
It's endemic, and it shouldn't be. We just need to catch up. We need to get a much broader part of the population understanding what it really takes to bridge that gap and get these things deployed.
[00:36:02] Unknown:
And in your experience of working in this space, formulating this BizML process, working with customers and clients and people in your network to onboard them into this way of thought, what are some of the most interesting or innovative or unexpected ways that you've seen those ideas put into action?
[00:36:21] Unknown:
Well, you know, my favorite story is the one I lead with, and then actually kinda wrap up the book with, which is UPS, which dramatically improved the efficiency of their delivery of 16,000,000 packages a day. And Jack Levis, to sort of answer your previous question, here you have a guy who's not a data scientist, but obviously very savvy analytically. His title was senior director of process management, and he called it an operations research project. He didn't call it machine learning. That job title and that term, operations research, are so boring. How are we gonna evangelize the world on the excitement of machine learning? Well, I don't know. But look, sometimes value is more sexy. Right? Maybe the real sexy thing is the most stodgy projects.
And UPS, this company is now more than a 100 years old, established in its ways of conducting operations. How are you gonna incur that change? Here he is, not an executive. Right? It's a great story about how he convinced executives above him, and then ultimately, in deployment, had to, with a team of more than 700 helping with operationalization, effectively convince all the loading dock workers loading the trucks. So the gist of the project was to predict tomorrow's deliveries in order to optimize them. You've got a bunch of known packages at a shipping center, and you're trying to decide how to allocate them to the trucks, and then load the trucks so they're ready for their early morning departures.
But there's a bunch of unknowns. There's a bunch of packages that may still be coming in, and you're not certain about them. So you augment the known packages with a set of predicted and tentatively presumed packages. Now you've got a more complete picture, and then the optimization system works with that more complete picture to decide which truck gets which package. Then they had to change loading dock behavior, because the workers were used to looking at the address and saying, oh, these two packages definitely go together. They had to kinda override that, prescribe their behavior, and then prescribe the drivers' behavior. So, in combination with that system that predicts packages and acts on those predictions, with another system that prescribes the driving directions, the win is amazing. Every year, UPS is now, because of the system, saving 185,000,000 miles of driving, $350,000,000, and 185,000 metric tons of emissions.
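The predict-then-optimize pattern he describes can be shown in miniature. This toy sketch is in no way UPS's actual system; every number and the greedy allocation rule are invented for illustration:

```python
import numpy as np

# Toy predict-then-optimize sketch; all numbers and rules are invented.
rng = np.random.default_rng(3)

known = rng.integers(0, 10, size=40)        # known packages, labeled by zone 0-9
expected_late = rng.poisson(1.5, size=10)   # predicted late-arriving packages per zone

# Augment the known picture with tentatively presumed packages.
presumed = np.repeat(np.arange(10), expected_late)
all_pkgs = np.concatenate([known, presumed])

# Crude "optimization": sort by zone and fill trucks so routes stay contiguous.
truck_capacity = 15
by_zone = np.sort(all_pkgs)
trucks = [by_zone[i:i + truck_capacity] for i in range(0, len(by_zone), truck_capacity)]
for t, load in enumerate(trucks):
    print(f"truck {t}: {len(load)} packages, zones {load.min()}-{load.max()}")
```

The point is only the shape of the pipeline: predictions feed the optimizer, and the optimizer's output prescribes behavior on the dock and in the truck.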
[00:38:56] Unknown:
Yeah. Those are definitely impressive statistics, and it's always great to see the impact that a conceptually simple idea can have when you're dealing with anything at large scale, where even if something will only save you a couple of pennies incrementally on a single unit, when you scale that up to millions of times a day, it can have a massive impact.
[00:39:22] Unknown:
Right. Exactly. So you might have something that targets marketing better in comparison to no targeting at all, and it could improve the profit of a marketing campaign by a factor of 5. Whereas with UPS, I don't think they improved their profit by a factor of 5. But again, yeah, if you increase the fuel efficiency of all the jet engines by 1%, right, I mean, that's a lot.
[00:39:45] Unknown:
Absolutely. In your own work of ideating this process, putting it into action, working with people in the industry, what are some of the most interesting or unexpected or challenging lessons that you've learned personally?
[00:40:00] Unknown:
Well, that's a great question. I mean, first and foremost, I'd say that BizML is my reaction to the challenges and problems, and it's my formulation of where you saw things succeed. So, rather than being a problem to implement, it's the antidote to the problems. And then, broadly, you've got those two layers of fog. And by focusing on value rather than hype. Here's my turn of phrase that a few people have responded very well to: it's like we're more excited about the rocket science than the launch of the rocket.
It's a sort of fetishization of data science. And a data scientist like me is gonna automatically tend toward that. I mean, that's why I got into the field 30 years ago. I thought learning from data to predict was the coolest kind of science and technology. And we're so excited about it. And since it's sort of the best, most advanced, and potentially most generally applicable form of technology, everyone's gonna be excited and hang their hat on it and say, look, we're doing the best thing; of course, we're doing the right thing. But you're not, unless you kinda follow through on the swing by way of deployment. So, you know, what I'm trying to do here is say, look, there are plenty of case studies where companies did get it right. Let's look at what they did, in contrast to what companies more generally do, differentiate it, and say, okay, look:
we gotta collaborate on these details across these steps and these phases. We gotta break down the life cycle in a way that everybody understands and is involved in. And, you know, that's the only way we're gonna get to deployment.
[00:41:50] Unknown:
And so for people who are in the space of machine learning, they're excited about their model, they wanna see what it can do in the real world, what are the cases where the BizML framing is the wrong choice?
[00:42:05] Unknown:
Well, first of all, the first question is whether ML is the right choice at all. Right? I mean, if you shouldn't be using ML, then you shouldn't be using BizML or any kind of technology or process related to ML. And that's a legitimate question. So, in that first of two layers of fog, you've got the tendency these days for executives to say, we gotta do AI, we gotta do more AI, let's use AI, instead of focusing on a particular business problem or optimization opportunity, the use case where the value is actually gonna be driven.
But prediction, in general, is what you get from a model. Whether you're predicting what the next word should be that I'm writing, and that's how generative AI works (although it's a token, it's a similar level of detail as the next word), or how this pixel should change in the next iteration as I'm rendering an image. That's how generative AI operates with a model: to predictively create, to generate new images, video, sound, music, etcetera. Or predicting who's gonna click, buy, lie, or die, which satellite's gonna run out of battery, and which transaction is gonna be fraudulent, for enterprise use cases.
If you're pursuing a predictive use case like that, then you have to reverse plan for deployment. How's that gonna be useful? And then, how do you get there? And you just sort of have to be involved in all the steps along the way. So, really, what I've done is break down what's almost entirely self evident. The idea of defining a life cycle, in and of itself, is not new, but the idea of having a formalized breakdown that's understood in general, that has a brand associated with it, my beautiful 5 letter buzzword, BizML, is an attempt to do that. And, by having this, getting stakeholders on the same side of even understanding that there needs to be a specialized, particular business practice to run ML projects successfully through to deployment in the first place. Right? Now, BizML is structured for those types of predictive use cases. Generative AI could only borrow from it so much. Right? Broadly speaking, you've got a similar thing: you do need to be very much value driven with generative.
You have to decide, okay, exactly what operation? Like, you've got this team who are manually writing 100 letters to customers a day. Could first drafts be written by generative AI? Be very specific about the operation. What would be a measure of success? How are you gonna integrate it to ensure that it works and that you're not sending out bad letters, etcetera? So it is about being concrete and reverse planning. But the particulars of BizML are very much for predictive use cases, where you have to define the dependent variable. I don't call it that, right? This is for business readers. I call it the prediction goal, or the model's output.
So, yeah, it is more finely scoped, specifically for predictive use cases.
[00:45:14] Unknown:
And as you continue to work in this space, working with stakeholders and technical teams, what are some of the future developments that you see, or would like to see, in both the organizational and the technical approaches to ML, that would improve the overall success rate of AI projects?
[00:45:35] Unknown:
Well, I think this kind of thing will get adopted. Look, I'm so in love with my buzzword, but even if BizML is not the buzzword that the world chooses, something like this is gonna catch on. We need a common language, an understanding, a vernacular around this type of process. And once we do, and there's a lot more uniformity across the industry of successful deployments and value driven project planning from the get go, my sort of vision is that, instead of being like, hey, look, we create a business, and then we figure out where there's an improvement to operations, and then we create a modeling project accordingly...
The thing is that, in that way, you're kind of playing catch up. You're like, well, what data happens to be collected right now from transactions? How are operations run, and how are we gonna somehow ham-fist something that renders predictive scores from a model into that, and then integrate that predictive score, that probability? It's gonna start to become the other way around, where businesses more and more are planned around machine learning, because that just becomes such a standard part of what it means to effectively run large scale operations. So I think it's sort of a broad vision. It's a real reorientation.
So, instead of machine learning playing catch up, it's just part of enterprise planning, in startups, etcetera, from the get go. The other main thing I think is missing is in metrics. How good is it? People don't talk about how good AI is. Data scientists work almost only (not entirely, strictly, but almost only) with technical metrics, which only tell you the relative performance of a model in comparison to a baseline like random guessing. They don't tell you anything directly about the absolute business value, like profit or ROI, that a model could deliver, depending on how and whether it's deployed.
And that's got to change. So on the technical side, across data scientists, we need to start working with business metrics.
[00:47:47] Unknown:
Are there any other aspects of this overall process of idea to delivery and deployment for machine learning projects or your experience working in the space or your overall goals for your recent book that we didn't discuss yet that you'd like to cover before we close out the show?
[00:48:06] Unknown:
Not really. I just want to see... look, this means data scientists have gotta get a little bit out of their comfort zone, just the same as there's a gulf between the tech and biz sides, right? And both sides have to get out of their comfort zone a bit. It's like, you need to have more meetings with normal people, you know, with the stakeholder, with your client, with your boss. You need to get concrete, and you need to help them ramp up on this. You know, if your uncle gives you my book, The AI Playbook, do what I say in the opening FAQ, where it says: if you're a technical reader, there are 3 chapters you have to read; skim the rest, and then give it to the boss, because they're the ones who need this, and they're not gonna get on the same page as you unless you sort of reach across the aisle in this way. We've gotta bridge this gap and be more value driven.
[00:49:03] Unknown:
Alright. Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And my usual final question is asking about what you see as the biggest barrier to adoption for machine learning, but we've spent the whole conversation today talking about that. So I just wanna say that I appreciate you taking the time today to join me, sharing your experience and formulation of how to actually effectively deliver on machine learning projects, and helping the business realize the benefits of that. So, thank you for that, and I hope you enjoy the rest of your day. You too. Thanks for having me, Tobias.
[00:49:47] Unknown:
Thank you for listening, and don't forget to check out our other shows: the Data Engineering Podcast, which covers the latest in modern data management, and Podcast.__init__, which covers the Python language, its community, and the innovative ways it is being used. You can visit the site at themachinelearningpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@themachinelearningpodcast.com with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Hello, and welcome to The Machine Learning Podcast. The podcast about going from idea to delivery with machine learning.
[00:00:20] Unknown:
Your host is Tobias Massey, and today I'm interviewing Eric Siegel about how the Biz ML approach can help improve the success rate of your ML projects. So, Eric, can you start by introducing yourself?
[00:00:30] Unknown:
Tobias, thanks so much for having me on the show. So I've been in the field of machine learning for 30 years. I taught the graduate courses of machine learning and artificial intelligence at Columbia University where I was a professor. Before that, I got my PhD there, and I've been an independent consultant applying machine learning for business use cases for 20 years. Along the way, I founded the now long running Machine Learning Week, formerly predictive analytics world conference series. It's got a new sister conference, generative AI world, 1st week in June in Phoenix.
And I'm the instructor of a popular well rated online course, machine learning leadership and practice, end to end mastery. My first book, predictive analytics, was a bestseller adopted at 100 of universities. And my new book is The AI Playbook.
[00:01:26] Unknown:
And do you remember how you first got started working in machine learning?
[00:01:30] Unknown:
Well, I got fan I got I got, intrigued and fascinated with the with all the AI concepts as a kid, and I had a pen pal who had a personal computer and was a philosophy professor when I was a kid, a distant relative. But, basically, when I started grad school in 91, that's when I really was like, it's gotta be machine learning. That's the only way to computers to really ramp up on sort of complex tasks. And I I've always been focused on supervised learning, really. And as an independent consultant, though, I've been on business use cases, right, where there's a very clear at least, potentially, very clear return on investment. You can really measure the effectiveness, the the way it improves operations.
You know, and, you know, I I I my father's a doctor, but I was like, you know, you can't measure the false negative and false positive costs. I mean, how are you supposed to balance the 2 subjective and but but things are really concrete oftentimes in in business applications. And I love the idea of making business value out of what is really the coolest kind of technology, which is learning from the data to find patterns and hold in general over cases never before seen. So that's sort of the evolution of my interest.
[00:02:36] Unknown:
And as you mentioned, you just recently wrote a book, the AI playbook, where you are espousing this process and technical framework for going from idea to delivery with machine learning projects. You've coined the phrase biz ml to encapsulate that idea. I'm wondering if you can give a bit of context about what that is and some of the story behind how you came to that formulation.
[00:02:59] Unknown:
Yeah. It's the six-step practice, the playbook, the paradigm, the framework for running machine learning projects end to end so that they successfully deploy. Today, there is no established, standardized practice for running these projects that's well known to business professionals, and it's really a business practice. In fact, in general, business professionals don't even realize there needs to be a specialized, very particular kind of business process that's well understood and collaboratively executed in order to make sure these models actually get deployed. And in fact, most new machine learning projects fail to achieve deployment, as has been shown by research I participated in, where we surveyed data scientists. I did that in partnership with Rexer Analytics.
A year prior, I did it with KDnuggets, with similar results. Data scientists make the model, but it doesn't get greenlit; the stakeholders get cold feet. And there are lots of executive surveys showing the same thing. Recently, IBM came out with the results of their industry research showing that the average return on investment for AI projects is basically nothing, very low, in that it's actually lower than the cost of capital. So BizML is the buzzword to try to help evangelize the understanding of the need for something like this. Right? And it consists of six steps. And since your listeners are pretty technical, I can just say, hey, look, this actual breakdown of six steps is nothing terribly new, to the degree that people actually ever formulate something like this. It's just that no formulation has caught hold. So let's give it a nice buzzword.
And probably much more importantly, let's establish an understanding that business professionals have got to ramp up on a semi-technical understanding so that they can collaborate deeply, end to end, across the steps.
[00:04:51] Unknown:
And as you mentioned there, this is in contrast to the way that a lot of teams might typically approach the delivery process of a machine learning project, where they say, oh, I made this model, now I just need to put it in production and everything's great. I'm wondering if you can contrast the way that you think about this procedural step, going from "I've come up with the idea, I've run it by the business leaders, I've gotten buy-in, now I'm going to actually do the technical work of development and deployment," with the way that a typical technical ML team might natively approach the process of deploying a machine learning model into a production context.
[00:05:38] Unknown:
Yeah. I mean, you've basically outlined why we need BizML or something like it. You've described the phenomenology, the syndrome that repeatedly happens. The way teams natively conduct these projects is failing routinely. When you get buy-in from the business side, it's not really buy-in, because they didn't quite get what they were greenlighting. They didn't quite get exactly how probabilities are going to alter, and therefore improve, large-scale operations, the very operations they need to protect with their life, because that's what's keeping the company alive at this particular moment.
They don't understand yet how you quantify the performance of a predictive model. They may understand, sure, it doesn't predict like a magic crystal ball, but that predicting better than guessing is oftentimes very valuable for improving large-scale operations. But they don't know what that means concretely. How do you translate that concept into metrics that mean something to business stakeholders? Not just lift and gains and precision and recall; even accuracy is just a technical metric. We've got to get people on the same page with business metrics like profit, ROI, number of customers saved, number of dollars saved.
In fact, that metrics part is two of the three main semi-technical things that we've got to get our business stakeholders, our clients, our bosses on the same page about, that everybody has to understand: what's predicted, how well (that's the metrics), and what's done about it. What's predicted and what's done about it define the use case, but that's literally only the first of six steps. The first step is to define the last step, deployment: that use case. Then, the way I formulated it, step two is to define the prediction goal. That is, get much more specific about the dependent variable. Not just, hey, we're going to predict customer churn, but: we're going to predict, for all customers who've been around at least four months, who's going to decrease their spend by at least 80% in the next three months and not increase their spend accordingly in another channel, because that doesn't count as a defection, etcetera, etcetera. All the caveats and qualifiers that fully define what's going to end up being an actionable, pertinent dependent variable for you to model on. That's just semi-technical. Right? It's detailed. It's the kind of thing business stakeholders generally don't tend to get involved in, but they must.
Those are business-driven decisions. They're based on pragmatics, and if you're going to integrate probabilities and have those probabilities directly, actionably alter large-scale operational decisions, we'd better get on the same page about exactly what they're probabilities of. What are you predicting? So in that end-to-end practice, basically the first three steps are preproduction, and they revolve around those details: what's predicted, how well, what's done about it, though not quite in that order. It's the deployment goal first, which covers the main pair, what's predicted and what's done about it; then the prediction goal, where you're getting much more specific about what's predicted; and then metrics, how well it predicts, in both business and technical terms. And the last three are what everybody already knows if you're a data engineer or data scientist: prep the data, train the model, and deploy it. Yes, you need to manage and monitor thereafter, update and refresh the model, and such. But let's actually get them deployed. Let's define a practice that culminates with deployment and that's understood and executed with business stakeholders.
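To make that prediction-goal step concrete for technical listeners, here is a minimal sketch of how such a dependent-variable definition might be manifested in label-construction code. It assumes a hypothetical pandas table of monthly spend per customer with columns customer_id, month, and spend; the tenure and spend-drop qualifiers mirror the example above, while the channel-shift qualifier is only noted in a comment. None of this is Eric's actual specification.

```python
import pandas as pd

def build_churn_labels(monthly: pd.DataFrame, as_of: pd.Period) -> pd.Series:
    """Toy dependent-variable definition: 1 = defection, 0 = retained.

    Qualifiers (all illustrative business decisions, not a real spec):
      - tenure of at least 4 observed months as of the cutoff
      - spend in the next 3 months drops >= 80% vs. the prior 3 months
    """
    hist = monthly[monthly["month"] <= as_of]
    future = monthly[(monthly["month"] > as_of) & (monthly["month"] <= as_of + 3)]

    # Tenure qualifier: only customers with at least 4 months of history.
    tenure = hist.groupby("customer_id")["month"].nunique()
    eligible = tenure[tenure >= 4].index

    prior = hist[hist["month"] > as_of - 3].groupby("customer_id")["spend"].sum()
    nxt = future.groupby("customer_id")["spend"].sum()
    prior, nxt = prior.align(nxt, fill_value=0.0)

    # Spend-drop qualifier: an 80% decrease counts as a defection. A real
    # definition would also exclude spend that merely shifted channels.
    labels = (nxt <= 0.2 * prior).astype(int)
    return labels.loc[labels.index.intersection(eligible)]
```

Every threshold in that sketch, the four months, the 80%, the three-month window, is exactly the kind of detail the business stakeholders are being asked to weigh in on.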
[00:09:07] Unknown:
I think it's also interesting to dig into that question of deployment because oftentimes, you say, oh, I've built this model. It can predict this thing based on these input values, but that doesn't necessarily encapsulate all of the other work that's necessary to productionize the model or the different systems that need to be able to interoperate with that model to get the desired outcome. And I'm wondering what are some of the shortcomings that you have seen in that conceptualization of what deployment even means.
[00:09:38] Unknown:
Yeah. I mean, deployment. So I'll start with an example from my own consulting. Early on, I had a large, successful online dating company as a client, and I convinced them: hey, you should have me do your churn modeling, and we can retain a lot of the paying, premium-level customers that you're losing. They were like, sounds good. They had a lot of cash and a lot of revenue going on; it was a small company. So then I put it in the PowerPoint: look, this is the lift of the model, and here's a profit curve for targeting a retention marketing campaign, giving a discount, which costs something, because the people who weren't going to cancel are going to consume the discount anyway. I did that arithmetic, showed the graph, churned it out in Excel, because technical tools generally don't let you do profit curves.
And then I got the typical response. The power is stuck in PowerPoint. They were like, oh, that seems interesting. You want us to do something about it? You want us to start a new initiative, a new operation, a new marketing campaign? They were up to their eyeballs just keeping operations alive. I had pitched them on a general idea, and they had accepted without really getting schooled by me on what that implementation would actually involve if it were going to actually achieve value. And that's the general syndrome that to this day is repeated over and over again, to one degree or another.
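For readers who want to see the arithmetic behind such a profit curve, here is a minimal sketch, under entirely made-up unit economics (discount cost, save rate, customer values); it is not the analysis from that engagement.

```python
import numpy as np

def profit_curve(churn_prob, customer_value, discount_cost=20.0, save_rate=0.25):
    """Cumulative expected profit of a retention campaign that targets the
    customers the model scores as most likely to churn, from depth 1 to N.

    Assumes everyone targeted consumes the discount, while only true
    would-be churners can be saved, at `save_rate`. All numbers illustrative.
    """
    p = np.asarray(churn_prob, dtype=float)
    v = np.asarray(customer_value, dtype=float)
    order = np.argsort(-p)                      # riskiest customers first

    # Expected gain per targeted customer: chance they'd churn, times chance
    # the offer saves them, times their value, minus the discount's cost.
    expected_gain = p[order] * save_rate * v[order] - discount_cost
    return np.cumsum(expected_gain)

# Usage with stand-in scores and values; the curve's peak says how deep to target.
rng = np.random.default_rng(0)
curve = profit_curve(rng.beta(2, 8, 10_000), rng.gamma(2.0, 60.0, 10_000))
print(f"target the top {curve.argmax() + 1} customers, est. profit ${curve.max():,.0f}")
```

The point of the graph is exactly the conversation that never happened: where the curve peaks is a business decision about campaign depth, not a modeling decision.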
[00:11:10] Unknown:
Another framing of this that I think is interesting to explore is the contexts in which this BizML framing needs to be adopted, and the technical scope of the types of ML or AI projects that benefit from this process. Maybe if you're doing scientific research and you just want to build an ML model to see if it can do some sort of prediction, or, say, for protein discovery, you just need an AI model that can churn through a bunch of probabilities and give you some sort of result, maybe that doesn't need as much of the process up front. I'm wondering if you can give some of the caveats of when and how to bring this formulation to bear.
[00:11:57] Unknown:
Yeah. I mean, this is a business practice that's meant to leverage machine learning, to capitalize on it, in an enterprise commercial deployment, though it could be other kinds of organizations. It could be health care or political campaigns; the Obama campaign in 2012 very famously used uplift modeling, persuasion modeling, and they very much got it deployed. But I'm not handling anything and everything that one might refer to as AI with BizML. We're really talking about predictive use cases, right, which are the use cases established over decades, and that's where most of the money still is, where the proven wins can be and often are. It's older than, let's say, generative AI, but it's not old school. It's still largely on tap, especially with all these project failures. And the failures come much more often because you don't get deployed than because you prematurely deploy. There's Zillow, there are some examples of that, but that's not the main syndrome we're facing these days. But, yeah, if you're trying to capture patterns and then inspect them with human eyes to get insights, or you're performing research and development, that's different. I'm talking about getting it deployed, implemented, and integrated at an enterprise level.
I'm talking profit.
[00:13:21] Unknown:
And as you mentioned, you have been working in this industry for a long time, and you've been working in consulting a fair bit. I'm wondering what aspects of your personal experience in this space have been most formative in your conceptualization of this BizML process, and what experiences of putting it into implementation have helped reinforce the ideas that you put out in the book.
[00:13:48] Unknown:
Yeah. I mean, I just see projects fail to reach deployment, repeatedly. I included that personal experience of my own consulting with the online dating site in the book, as well as a successful project for targeting ads; that's also in the book, and I've got positive experiences from UPS and FICO. That online dating client case example was from a long time ago, and I just don't feel like the world has changed that much, that the industry has changed in the business conception, the way it's sold to the business, and the way it's conducted from a business, operational, and organizational standpoint. It's a long time coming. You know, I'm talking to a technical audience here, so a lot of people may have heard of CRISP-DM. Have you heard of CRISP-DM?
[00:14:39] Unknown:
I think I've come across the phrase, but I forget the specifics.
[00:14:44] Unknown:
Yeah. So that's the problem. That was, by far, the only one that gained some traction as far as formalizing a business organizational practice. It stands for Cross-Industry Standard Process for Data Mining. It's 30 years old, and it's called data mining; it's old, right? That's outdated. And it never caught hold. It definitely didn't succeed, in the sense that business professionals have never heard of it. And, of course, this is a business practice. They're the ones, first and foremost, who need to know it if they're going to be running ML projects through to successful value capture and deployment.
So it just becomes clear. I mean, I've been in the field 30 years. It just becomes clear: this is a long time coming, and somebody's got to do it. We've got to evangelize not just the next best thing technically, but the understanding and organizational practice, to really improve what right now is a somewhat dismal success rate of reaching deployment. The industry's gotten really good at sweeping failures, in this sense of getting cold feet and no deployment at the end of the project, under the rug, largely because the AI hype kind of helps with that. It's not sustainable. The executives are catching wise. The industry reports are coming out, and they may not be quite as loud as the buzz and hype right now, but in general, the stuff's going to hit the fan. We've got to make this change. Now, it's not just my own practice, though. I've been operating as a thought leader in the industry for so long now, running the conference series since 2009.
The majority of example case studies from both of my books, including the first one, Predictive Analytics, which has a color table of 147 mini case studies of deployed machine learning, come from the conference, and now we have a whole operationalization track, the BizML track, in the conference for this very reason. So with that, I've essentially been networking, having access to so many other people's experience, and the theme just comes back over and over again: let's get our business stakeholders on the same page. Let's ramp them up on a concrete understanding of what defines a project, which is what's predicted, how well, and what's done about it.
It's a semi-technical understanding they must ramp up on, so that they can then collaborate deeply, weigh in on the project specifics end to end across the practice, and then make an informed, hopefully positive, decision about whether and exactly how to deploy the model.
[00:17:27] Unknown:
So in order to put this process into practice, who are the personas that need to be involved in the overall project scope? Largely, it has been a very technically oriented team: the data scientists, the chief information officer, the chief data officer, who manage the project scope and the execution. Who are the people who are often missing from those conversations, and who are the personas that need to be involved from day one through to finalization of the project in order for it to stand a higher likelihood of success?
[00:18:06] Unknown:
Well, the project has a client. Right? And the customer is always right. That's the part that's missing. And the customers don't think they're right, because they think they don't understand it, but what they don't know is that what they don't know is not hard to know. They don't need to learn how to change a spark plug; just as with driving a car, they need to learn how to steer: momentum, friction, the rules of the road, the expectations of drivers. Running a machine learning project is the same thing. You need to get a certain level of expertise. So what are we talking about? We're talking about the stakeholder. If the model's meant to be deployed, and in general with enterprise projects it is (otherwise, what's the point?), you're trying to improve operations. What are those operations?
Who runs those operations? Who's in charge of the effectiveness and efficiency of those operations: targeted marketing, fraud detection, financial risk management, where to drill for oil, which satellite to inspect as potentially running out of battery? These predictive use cases are where you stand to improve pretty much all the main large-scale operations that we conduct as organizations. But someone is in charge of that, and it's not a data scientist. It's your client. It's your boss. It's your internal client. It's that stakeholder, and there are other stakeholders that are going to be involved one way or another. Anybody and everybody involved in or touched by the project needs to get a basic sense. And this is stuff that's shrouded in quasi-secrecy because we keep it so technical, but it can't be.
We need to unveil this, and it's a lot more understandable, accessible, interesting, and exciting than high school algebra. They need to see that and, you know, read a book on it. In fact, I'll pitch my book: along the way of covering those six steps of BizML across six main chapters, its main contribution, ultimately, hopefully, is that it ramps the business reader up on the semi-technical know-how and background that they need in order to participate.
[00:20:13] Unknown:
And, organizationally, a lot of times the ML team might be siloed from the rest of the business because what they do seems to be such a black art that nobody else can understand it. I'm curious what are some of the ways that businesses need to think about the organizational structures that need to be addressed in order to bring these people together more natively, to ensure that there is better collaboration and that machine learning and AI projects are actually more likely to be conducive to the overarching goals of the business.
[00:21:00] Unknown:
Yeah. I mean, first of all, the first step in solving a problem is recognizing that you have a problem. And what you said, this concept that people are seeing it as a black art, that's the problem. As soon as they realize that, the solution starts to spell itself out. It depends on the organization, but you need an analytics translator, and/or whoever's actually running the project, whether they have a data science background or not, needs to be in touch with the value proposition. Keeping it a black art, keeping it the coolest, bestest rocket science ever, which it is, by the way, and assuming it's therefore automatically valuable, that's the misunderstanding.
It's only valuable if you act on it and integrate it, if it actually changes and thereby improves operations. And recognizing that is basically getting away from hype. The degree to which people see it as a mysterious black box is the degree to which it's hyped, in the sense of mismanaged expectations and overpromising. And the antidote to hype is to focus on concrete value. So this has to be reframed as an operations improvement project that critically and necessarily uses machine learning and a predictive model. But first it's a business project, and second it's a data or analytics or tech project. So it is a mind shift, but not the craziest one ever. It's just: let's get concrete, realistic, down to earth, tangible.
Let's get real, man. That's it.
[00:22:29] Unknown:
As you mentioned, the business stakeholders don't need to know all of the low-level technical details about whether you're using random forests or gradient descent. They just need to know the impact that the model is going to have and the pieces of information that it needs to have access to. I'm curious what an appropriate level of detail and depth is to present to the business stakeholders, in order to ensure that they can communicate most effectively about the problems that they see and the solutions that they would like to see applied to the project.
[00:23:05] Unknown:
Yeah. Great question. I mean, listen, they just need to learn what's predicted, how well, and what's done about it. So, literally, the ins and outs of a model: literally, the input and the output. What's it predicting? What's the meaning of a probability? It's not a magic crystal ball; it's not going to make highly confident predictions in general. So putting a number on it makes sense. The idea that it's a number between 0 and 100, or 0 and 1, that's fine. That shouldn't take too long to comprehend. And yet the inside of that model, the inner workings, whether it's a random forest? No, they don't need to get involved, and I don't know where the spark plug is in my car, by the way, really.
But I know the concept of internal combustion, and it's cool, and I liked learning about it as a kid. People should learn about ensembles like random forests. It's a really cool scientific thing, and in 15 minutes they can learn something really cool about, not the wisdom of a crowd of people, but the wisdom of a crowd of models. It's delightful. They should get some sense of it. But operationally, they just need to learn, and this isn't the rocket science part, how to use or capitalize on the rocket science, which ultimately just comes down to predictions, otherwise known as probabilities.
So, probability of what? How is that going to be used, exactly and precisely? How are you going to translate from a prediction into an action? That's often a very simple, direct translation; it's not the scientific part, but it's pragmatic. And where do you draw the line? Am I going to hold 0.2% or 0.3% of all transactions on the basis of potential fraud? You have to decide exactly where you draw that line. That's a business decision that's going to be driven by the trade-offs that the model evaluation shows. So you need to be bridging that gap. If a stakeholder is not looking at any numbers, then they're not doing their job, because they're in charge of improving a large-scale operation.
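As a minimal sketch of what "deciding where to draw the line" can look like on a labeled holdout set: the hold rates echo the ones Eric mentions, but the review cost and fraud-loss figures are invented placeholders that a stakeholder would have to supply.

```python
import numpy as np

def compare_hold_rates(scores, is_fraud, rates=(0.002, 0.003, 0.005),
                       cost_per_hold=2.0, loss_per_fraud=500.0):
    """Net value of 'hold the top X% riskiest transactions' policies.

    `cost_per_hold` (review effort plus customer friction) and
    `loss_per_fraud` (average loss prevented per catch) are illustrative
    business inputs, not real figures.
    """
    scores = np.asarray(scores, dtype=float)
    fraud = np.asarray(is_fraud, dtype=bool)
    results = {}
    for rate in rates:
        cutoff = np.quantile(scores, 1.0 - rate)   # score cutoff for top X%
        held = scores >= cutoff
        caught = int((held & fraud).sum())
        results[rate] = caught * loss_per_fraud - int(held.sum()) * cost_per_hold
    return results

# Usage: pick the rate with the best net value, then sanity-check the
# operational load (how many holds per day the auditors can handle).
```

The model supplies the ranking; the choice among 0.2%, 0.3%, or 0.5% is exactly the business decision being described.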
[00:25:08] Unknown:
And so once you have defined the scope of the project and you have everybody involved, who are the personas that you see as best suited to owning the overall delivery of the project? Who should be the leader of an ML project? Is it the technical lead? Is it the business stakeholder? I'm wondering, from a management and visibility perspective, how best to ensure that all of the necessary steps are getting done, that the appropriate requirements are gathered, that the technical approach is sound, etcetera.
[00:25:43] Unknown:
Yeah. I'm actually agnostic about that. It depends on the context and situation, the germination of the idea, the organizational structure, and it could be somebody with more business background or more technical background. It depends. The way I put it in the book is that the answer to the question of who should lead the project is: it should be you. Okay, it doesn't have to be you if you don't want it to be. But if you want something done well, do it yourself, or at least ensure someone else is doing it. The point is that this practice must be followed, that we need collaboration between the quants, the data scientists, the modeling experts, and the business leaders who have their eye on the ball and understand the business pragmatics, the constraints in deployment, and which metrics matter.
It's got to be a collaboration that unifies those two sides, where there's generally such a big gulf between them. So that's the deal. Somebody's got to lead a project that's going to bring the sides together, whichever side that person may come from. People tend to be a little more on one side or the other, probably a lot more on one side than the other in general. But either kind can lead the project.
[00:27:08] Unknown:
And speaking of organizational patterns, what are some of the ways that you have seen them hinder the progression of an ML project, and some of the anti-patterns that teams should watch out for and be aware of as they go through these different steps from idea to delivery?
[00:27:26] Unknown:
Yeah. I mean, I think there's a misconception: hey, look, we defined the business objective, so we're all good. I know the technology is meant to have business value, and I've defined it: we're going to do churn modeling in order to target a retention marketing campaign. But that's literally only the first of six steps. And that's exemplified by my story with the online dating site, where that was basically what I pitched, and they said, go for it. In fact, you've got to get much more concrete and collaborative across all the definitions, at a real level of detail. Not the detail of exactly which modeling method you're going to use, but the definition of the dependent variable. And by the way, even forgetting the business practice, you obviously need to define the dependent variable, the thing you're trying to predict, very carefully, very precisely and specifically, in a way that's aligned with the intended deployment and with how that deployment is meant to pursue particular business objectives and business metrics.
That's not standard process; it's kind of an ad hoc thing, the idea of really fully fleshing out all the aspects, qualifiers, and caveats that fully define the dependent variable's definition. That's a whole chapter in the book, and that's a main point: there's a lot of stuff in here, none of it big-time rocket science, that even the most senior data scientists haven't really been trained on or been given material to think through and make concrete. So there's a business-pragmatic, realism-driven, value-focused way of reorienting around how we're leveraging this technology.
It's a culture shift, but I'm hoping that if we rally around just the idea that we need an understood, well-defined business paradigm or playbook that must be collaboratively executed, we will be well on our way to the necessary culture shift that will really help drastically improve the success rate of deployment-driven machine learning projects.
[00:29:45] Unknown:
And then once you have the business stakeholders involved, you have your project defined, and you've started down the path of building the model, what are some of the misconceptions about the work that is involved in building and deploying the models, or about the capabilities of the models once they're defined, that might derail the overall project as you get closer to that deployment step?
[00:30:13] Unknown:
Yeah. I think there's this problem that I call the accuracy fallacy, which is that we so often rely on accuracy. The metrics have to be concrete, and they also have to turn into business metrics. That's the only way to get stakeholders to understand what it means, in a concrete, meaningful, specific way, to have value from an imperfect model: to have an improvement to a metric like profit or ROI from a model that is not a magic crystal ball but does predict better than guessing. When you ask about the misconceptions, it's like there are two layers of fog. The first layer of fog is the general AI hype, especially with the advent of generative AI. By the way, large language models can be used as predictive models; there's a variety of ways you can get one to say, hey, what are the chances that this social media post is misinformation?
So then you're getting a probability. It's a much bigger model and much more costly, but since it's so language heavy, it might be better for certain tasks like that. But in general, what generative AI does is make first drafts. So we have to understand: this thing is not going to be autonomous, and you need a human in the loop. You need to review everything that it's written, vet it, and the same for every picture that it renders, etcetera. So that first layer of fog is understanding what the capabilities are now. And I contend that on the public stage, on the national and international stage right now, the false narrative is that with the advent of these incredible generative AI models, so impressive and seemingly human-like, we're concretely taking a step toward AGI, artificial general intelligence, a computer capable of anything a person can do. Let's call it what it is: artificial humans. I don't think we're taking concrete steps in that direction.
I'm not saying that it's technically or in principle impossible, but I think the speculation is the same as it was in 1950. And it's hard to hold those two things in your head at the same time: this thing is incredible, it's so human-like, it can deal with concepts around human language in a way that's meaningful on a certain level, and very impressive, and yet the limit on its actual capabilities is very much there. So that's the first layer of fog. The second is: even if you eschew the standard large-scale hype that's happening here and you focus on the concrete value of a predictive use case, say, I'm going to do churn modeling, or I'm going to do target marketing with response modeling, or I'm going to improve my fraud detection by giving the auditors a better pool of more likely fraudulent transactions, or whatever it is.
That's literally the first of six steps, and it's only the first good step. There still needs to be a lot more concrete, detailed work bringing together the two sides, the business and tech sides. And that's the thing that's missing.
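As a rough illustration of the LLM-as-predictive-model aside above, here is a sketch that elicits a misinformation probability from a language model; call_llm is a hypothetical placeholder for whatever chat-completion client you actually use, and the prompt and parsing are invented for illustration.

```python
import re

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call; swap in your
    provider's client (or a local model) here."""
    raise NotImplementedError

def misinformation_probability(post: str) -> float:
    """Ask an LLM for a 0-to-1 probability and parse the reply.

    Per the conversation: this is a far bigger, costlier 'model' than a
    purpose-built classifier, and the raw number is not calibrated, so
    you'd validate it against labeled data before acting on it.
    """
    prompt = (
        "On a scale from 0 to 1, what is the probability that the following "
        "social media post is misinformation? Reply with only the number.\n\n"
        f"Post: {post}"
    )
    match = re.search(r"\d*\.?\d+", call_llm(prompt))
    p = float(match.group()) if match else 0.5  # fall back to 'unsure'
    return min(max(p, 0.0), 1.0)
```

Whether that probability is worth its inference cost relative to a small supervised model is itself the kind of metrics question the six steps are meant to force.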
[00:33:29] Unknown:
In terms of the motivations and goals for the book that you wrote to formulate this BizML framing, what is the overarching impact that you hope for it to have, in terms of those personas who need to be involved in the overall process?
[00:33:55] Unknown:
I'm only slightly tongue in cheek there, because I really do intend for both sides, if they're going to collaborate, to speak the same language and get on the same page. And there are a lot of things that data scientists haven't quite thought through yet in terms of organizational process and business value: defining a dependent variable; prepping the data accordingly, because that's where you manifest the dependent variable's definition; and what kind of semi-technical understanding your business stakeholders, your boss, your clients need in order to participate.
My hope is that the success rate of machine learning deployments improves greatly. Right now, I don't know, people say it's only 20%, but that's actually very hard to measure. I'll tell you one particular data point: in the more recent round of industry research I participated in with Rexer Analytics, on their data science survey, I was able to convince them to add more questions about deployment success. For machine learning projects meant to incur a new capability, only 22% of data scientists said that their initiatives usually deploy. Across all machine learning projects, only 43% said that.
And this is just one data point. There are a lot of different surveys, but they don't directly measure how many projects fail or succeed. They're more like: how often do data scientists say their projects usually succeed? It's one degree removed from that. But the fact is, it's pretty dismal, and the fact is, it shouldn't be. You've got leaders, leading organizations, big tech, a handful of very particular case studies like UPS and FICO that I cover in the book. But at large, across organizations outside of that handful of leaders, you've got routine failure.
It's endemic, and it shouldn't be. We just need to catch up. We need to get a much broader part of the population understanding what it really takes to bridge that gap and get these things deployed.
[00:36:02] Unknown:
And in your experience of working in this space, formulating this BizML process, working with customers and clients and people in your network to onboard them into this way of thought, what are some of the most interesting or innovative or unexpected ways that you've seen those ideas put into action?
[00:36:21] Unknown:
Well, you know, my favorite story is the one I lead with, and then actually kind of wrap up the book with, which is UPS, which dramatically improved the efficiency of its delivery of 16,000,000 packages a day. And Jack Levis, to sort of answer your previous question, here you have a guy who's not a data scientist but is obviously very savvy analytically. His title was senior director of process management, and he called it an operations research project. He didn't call it machine learning. That job title and that term, operations research, are so boring. How are we going to evangelize the world on the excitement of machine learning? Well, I don't know, but look, sometimes value is more sexy. Maybe the real sexy thing is the most stodgy project.
And UPS, this company, is now more than 100 years old, established in its ways of conducting operations. How are you going to incur that change? Here he is, not an executive. It's a great story about how he convinced the executives above him and then, ultimately, in deployment, with a team of more than 700 helping with operationalization, had to effectively convince all the loading dock workers loading the trucks. The gist of the project was to predict tomorrow's deliveries in order to optimize them. So you've got a bunch of known packages at a shipping center, and you're trying to decide how to allocate them to trucks and then load the trucks so they're ready for their early morning departures.
But there are a bunch of unknowns: a bunch of packages that may still be coming in, and you're not certain about them. So augment the known packages with a set of predicted, tentatively presumed packages. Now you've got a more complete picture, and the optimization system works with that more complete picture to decide which truck gets which package. Then they had to change loading dock behavior, because the workers were used to looking at the address and saying, oh, these two packages definitely go together, and they had to override that and prescribe their behavior, and then prescribe the drivers' behavior. So in combination with that system that predicts packages and acts on those predictions, plus another system that prescribes the driving directions, the win is amazing. Every year, because of the system, UPS is now saving 185,000,000 miles of driving, $350,000,000, and 185,000 metric tons of emissions.
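Here is a toy sketch of the "augment the knowns with predicted packages" idea as described; the data shapes, the inclusion threshold, and the trivial route-lookup "optimization" are all invented for illustration, and the real system solves a vastly larger optimization problem.

```python
def plan_loads(known_packages, forecast, trucks, include_above=0.7):
    """Augment tonight's known packages with tomorrow's predicted arrivals,
    then assign everything to trucks before the early-morning departures.

    known_packages: list of (package_id, route)
    forecast:       list of (package_id, route, probability_of_arriving)
    trucks:         dict mapping route -> truck_id
    """
    # Tentatively presume any forecast package likely enough to show up;
    # where to set this cutoff is itself a business trade-off.
    presumed = [(pid, route) for pid, route, p in forecast if p >= include_above]

    # Work from the more complete picture. Here assignment is a trivial
    # route lookup; in reality this is a large-scale optimization.
    plan = {}
    for pid, route in known_packages + presumed:
        plan.setdefault(trucks.get(route, "overflow"), []).append(pid)
    return plan

# Usage with made-up data:
print(plan_loads(
    known_packages=[("PKG1", "north"), ("PKG2", "south")],
    forecast=[("PKG3", "north", 0.9), ("PKG4", "south", 0.3)],
    trucks={"north": "T1", "south": "T2"},
))  # {'T1': ['PKG1', 'PKG3'], 'T2': ['PKG2']}
```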
[00:38:56] Unknown:
Yeah, those are definitely impressive statistics, and it's always great to see the impact that a conceptually simple idea can have when you're dealing with anything at large scale, where even if something only saves you a couple of pennies incrementally on a single unit, when you scale that up to millions of times a day, it can have a massive impact.
[00:39:22] Unknown:
Right. Exactly. So you might have something that targets marketing better, in comparison to no targeting at all, and it could improve the profit of a marketing campaign by a factor of five. Whereas with UPS, I don't think they improved their profit by a factor of five. But again, if you increase the fuel efficiency of all the jet engines by 1%, I mean, that's a lot.
[00:39:45] Unknown:
Absolutely. In your own work of ideating this process, putting it into action, working with people in the industry, what are some of the most interesting or unexpected or challenging lessons that you've learned personally?
[00:40:00] Unknown:
Well, that's a great question. I mean, first and foremost, I'd say that BizML is my reaction to the challenges and problems, and it's my formulation of where you saw things succeed. So rather than being a problem to implement, it's the antidote to the problems. And then, broadly, you've got those two layers of fog. And it's about focusing on value rather than, well, here's my turn of phrase that a few people have responded very well to: we're more excited about the rocket science than the launch of the rocket.
It's a sort of fetishization of data science, and a data scientist like me is going to automatically tend toward that. I mean, that's why I got into the field 30 years ago: I thought learning from data to predict was the coolest kind of science and technology. And we're so excited about it. And since it's sort of the best, most advanced, and potentially most generally applicable form of technology, everyone's going to be excited and hang their hat on it and say, look, we're doing the best thing; of course we're doing the right thing. But you're not, unless you kind of follow through on the swing by way of deployment. So what I'm trying to do here is say, look, there are plenty of case studies where companies did get it right. Let's look at what they did, in contrast to what companies more generally do, differentiate it, and say, okay, look:
We've got to collaborate on these details, across these steps and these phases. We've got to break down the life cycle in a way that everybody understands and is involved in. And that's the only way we're going to get to deployment.
[00:41:50] Unknown:
And so for people who are in the space of machine learning, who are excited about their model and want to see what it can do in the real world, what are the cases where the BizML framing is the wrong choice?
[00:42:05] Unknown:
Well, first of all, the first question is whether ML is the right choice. If you shouldn't be using ML, then you shouldn't be using BizML or any kind of technology or process related to ML, and that's a legitimate question. So in that first of the two layers of fog, you've got the tendency these days for executives to say, we've got to do AI, we've got to do more AI, let's use AI, instead of focusing on a particular business problem or optimization opportunity, the use case where the value is actually going to be driven.
But say you're going to pursue a predictive use case. Prediction in general is what you get from a model, whether you're predicting what the next word I'm writing should be, which is how generative AI works (although it's a token, it's at a similar level of detail to the next word), or how this pixel should change in the next iteration as the image is rendered. That's how generative AI operates: with a model, to predictively create, to generate new images, video, sound, music, etcetera. Or you're predicting who's going to click, buy, lie, or die, which satellite's going to run out of battery, or which transaction is going to be fraudulent, for enterprise use cases.
If you're doing that, then you have to reverse plan for deployment. How's that going to be useful? And then, how do you get there? You just have to be involved in all the steps along the way. So, really, what I've done is just break down what's almost entirely self-evident. The idea of defining a life cycle in and of itself is not new, but my beautiful five-letter buzzword, BizML, is an attempt at a formalized breakdown that's understood in general and has a brand associated with it, and, by having this, at getting stakeholders on the same side of even understanding that there needs to be a specialized, particular business practice to run ML projects successfully through to deployment in the first place. Now, BizML is structured for those types of predictive use cases. Generative AI can only borrow from it so much. Broadly speaking, you've got a similar thing: you do need to be very much value driven with generative.
You have to decide: okay, exactly what operation? Like, you've got this team who are manually writing a hundred letters to customers a day. Could first drafts be written by generative AI? Be very specific about the operation. What would be a measure of success? How are you going to integrate it, to ensure that it works and that you're not sending out bad letters, etcetera? So it is about being concrete and reverse planning. But the particulars of BizML are very much for predictive use cases, where you have to define the dependent variable. I don't call it that, though; this is for business readers. I call it the prediction goal, or the model's output.
So, yeah, it is more finely scoped, specifically for predictive use cases.
[00:45:14] Unknown:
And as you continue to work in this space, working with stakeholders and technical teams, what are some of the future developments that you see, or would like to see, in both the organizational and the technical approaches to ML, that would improve the overall success rate of AI projects?
[00:45:35] Unknown:
Well, I think this kind of thing will get adopted. Look, I'm so in love with my buzzword, but even if BizML is not the buzzword that the world chooses, something like this is going to catch on. We need a common language, an understood vernacular, around this type of process. And once we do, and there's a lot more uniformity across the industry of successful deployments and value-driven project planning from the get-go, my vision is this. Instead of saying, hey, look, we create a business, and then we figure out where there's an improvement to operations, and then we create a modeling project accordingly...
The thing is, in that way you're kind of playing catch-up. You're like, well, what data happens to be collected right now from transactions? How are operations run, and how are we going to somehow ham-fist into that something that renders predictive scores from a model, and then integrate that predictive score, or probability? It's going to start to become the other way around, where businesses more and more are planned around machine learning, because that just becomes such a standard part of what it means to effectively run large-scale operations. So it's sort of a broad vision. It's a real reorientation.
So instead of machine learning playing catch-up, it's just part of enterprise planning, and of startups, etcetera, from the get-go. The other main thing I think is missing is in metrics. How good is it? People don't talk about how good AI is. Data scientists work, not entirely, but almost only, with technical metrics that only tell you the relative performance of a model in comparison to a baseline like random guessing. They don't tell you anything directly about the absolute business value, like profit or ROI, that a model could deliver, depending on how and whether it's deployed.
And that's got to change. So on the technical side, across data scientists, we need to start working with business metrics.
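A minimal sketch of the relative-versus-absolute distinction being drawn here, with placeholder unit economics that a stakeholder, not the data scientist, would supply:

```python
def campaign_profit(true_positives, false_positives,
                    value_per_hit=120.0, cost_per_contact=3.0):
    """Absolute expected profit of acting on a model's positive predictions.

    Technical metrics like lift or precision say how much better than a
    baseline the ranking is; this says what the deployment is worth in
    dollars, under illustrative per-unit economics.
    """
    contacted = true_positives + false_positives
    return true_positives * value_per_hit - contacted * cost_per_contact

# The same holdout counts can mean profit or near-breakeven depending on economics:
print(campaign_profit(800, 4_200))                        # 81000.0
print(campaign_profit(800, 4_200, value_per_hit=20.0))    # 1000.0
```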
[00:47:47] Unknown:
Are there any other aspects of this overall process of going from idea to delivery and deployment for machine learning projects, or your experience working in the space, or your overall goals for your recent book, that we didn't discuss yet that you'd like to cover before we close out the show?
[00:48:06] Unknown:
Not really. I just want to say: look, this means data scientists have got to get a little bit out of their comfort zone. Since there's a gulf between the tech and biz sides, both sides have to get out of their comfort zone a bit. You need to have more meetings with normal people, with the stakeholder, with your client, with your boss. You need to get concrete, and you need to help them ramp up on this. If your uncle gives you my book, The AI Playbook, do what I say in the opening FAQ, where it says: if you're a technical reader, there are three chapters you have to read; skim the rest, and then give it to the boss, because they're the ones who need this, and they're not going to get on the same page as you unless you reach across the aisle in this way. We've got to bridge this gap and be more value driven.
[00:49:03] Unknown:
Alright. Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And my usual final question is to ask about what you see as the biggest barrier to adoption for machine learning, but we've spent the whole conversation today talking about that. So I just want to say that I appreciate you taking the time today to join me and share your experience and your formulation of how to actually deliver effectively on machine learning projects and help the business realize the benefits. Thank you for that, and I hope you enjoy the rest of your day. You too. Thanks for having me, Tobias.
[00:49:47] Unknown:
Thank you for listening, and don't forget to check out our other shows. The Data Engineering Podcast covers the latest in modern data management, and Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. You can visit the site at themachinelearningpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it: email hosts@themachinelearningpodcast.com with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Introduction and Guest Introduction
Eric Siegel's Journey into Machine Learning
The BizML Framework
Challenges in ML Project Deployment
Defining Deployment and Its Importance
Contexts for BizML Adoption
Formative Experiences in ML
Key Stakeholders in ML Projects
Organizational Structures for ML Success
Leadership in ML Projects
Anti-Patterns in ML Projects
Misconceptions in ML Deployment
Goals and Impact of BizML
Interesting Implementations of BizML
Challenges and Lessons Learned
When BizML is Not the Right Choice
Future Developments in ML
Metrics and Business Value in ML
Final Thoughts and Closing Remarks